A bit of history
In the beginning, there was only arithmetic error handling. Programs were reasonably small, had no opaque dependencies and, most importantly, only took input data that could be relied upon to be correct. As there were no side effects, incorrect inputs would produce arbitrary outputs, and that was okay because the output wasn't usable in that case anyway.
Going forward a few years, we had multi-user systems, multiple programs running on the same machine, and (comparatively) higher-level languages like C, combined with a standard library as an interface to the system. Complexity was several orders of magnitude greater, so there were also heaps of things that could go wrong. As a consequence, we started to use memory protection, so that one part of a program can't arbitrarily corrupt another one, or even worse, other programs including the operating system. We also started to use early forms of error handling.
Try-Catch
This form of error handling evolved organically over the years, but the basic idea stayed the same:
When an error is encountered inside a block of code, the language runtime searches for a handler of that exception type outside of that block. Modern implementations include stack unwinding: when a function does not have a matching handler, the exception propagates further up the call stack until a handler is found. If no handler is found, the thread will crash.
Because you are leaving the structured control flow, the keywords usually used for this form of error handling are try for the fallible block, throw for an escalating exception and catch for its handler.
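To make that concrete, here is a minimal sketch in C++ (the divide function and its error message are made up for illustration): the throw inside the helper abandons its normal control flow, and the catch block attached to the fallible try block handles the escaped exception.

```cpp
#include <iostream>
#include <stdexcept>

// Hypothetical fallible function: it cannot return a meaningful result for
// a zero denominator, so it throws instead.
double divide(double numerator, double denominator) {
    if (denominator == 0.0) {
        // throw escalates the error; control leaves this function immediately
        // and the stack is unwound until a matching handler is found.
        throw std::invalid_argument("division by zero");
    }
    return numerator / denominator;
}

int main() {
    try {
        // The fallible block: exceptions thrown here (or in callees) escape it.
        std::cout << divide(1.0, 0.0) << "\n";
    } catch (const std::invalid_argument &error) {
        // The handler for that exception type.
        std::cerr << "caught: " << error.what() << "\n";
    }
    return 0;
}
```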
Return values
Parallel to the development of exception handling using try and catch, C was developed. The language implemented error handling by either returning an error code from a failed function or setting a global error variable.
The error codes are integers and mostly arbitrary, so you have to look up the documentation to know which code corresponds to which error. But it doesn't end there.
There's also quite some confusion about how to map errors to values in the case of a function return. In C, you can only return one value, so errors have to map into the same space as valid values.
As the range of valid values depends on the function, so does the range of error values. Additionally, programmers couldn't agree on a universal standard back then. As a consequence, for some functions an error is always 0, for some it's -1, for some it's a positive error code, for some a negative one, and for some it's an arbitrary special value.
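To illustrate the inconsistency, here is a small sketch calling a few standard C library functions (written as C++ to keep all examples in this post in one language): fopen reports failure with a null pointer and sets the global errno, fgetc returns the special value EOF, and fclose returns a nonzero value.

```cpp
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    // fopen signals failure with a null pointer and sets the global errno.
    std::FILE *file = std::fopen("does-not-exist.txt", "r");
    if (file == nullptr) {
        std::fprintf(stderr, "fopen failed: %s\n", std::strerror(errno));
        return 1;
    }

    // fgetc uses the special value EOF, both for errors and for end of file.
    int c = std::fgetc(file);
    if (c == EOF) {
        std::fprintf(stderr, "fgetc failed or hit end of file\n");
    }

    // fclose signals failure with a nonzero return value.
    if (std::fclose(file) != 0) {
        std::fprintf(stderr, "fclose failed: %s\n", std::strerror(errno));
        return 1;
    }
    return 0;
}
```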
This is very messy, and it's also easy to simply forget to handle an error, especially when it means checking a global variable.
Unix implemented the same mechanism on the program level and others followed, so nowadays your program can return an integer to signal success or failure.
It is almost universally agreed that 0 is the success value, while on error you can mostly just use any other number, because there isn't much standardization there.
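As a minimal sketch, assuming a made-up command-line program: the integer returned from main becomes the process exit status, which a shell can inspect afterwards (for example via $? in POSIX shells).

```cpp
#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <argument>\n", argv[0]);
        return EXIT_FAILURE; // any nonzero value signals "something went wrong"
    }
    std::printf("got argument: %s\n", argv[1]);
    return EXIT_SUCCESS;     // 0 signals success, by near-universal convention
}
```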
Callbacks
Then there are callbacks. I didn't find much about the history of using callbacks for error handling, but as modern languages such as JS are the main ones using them, I will regard this as the more modern approach. Using this mechanism, you give a function a parameter in the form of a callback function. This function will be called in the appropriate case, in our case when an error condition is encountered.
I believe the idea is that since error handling disrupts the flow of the code handling the success case, callbacks can be used to move the error handling code somewhere else. However, this treats the error case as much less important than the success case, which in my opinion it isn't. Also, callbacks generally tend to obfuscate control flow.
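For illustration, here is a minimal sketch of the mechanism, written in C++ rather than JS to stay consistent with the other examples; parse_number, its on_error parameter and the error message are all made up. The error handling code is supplied by the caller and only invoked when something goes wrong.

```cpp
#include <exception>
#include <functional>
#include <iostream>
#include <string>

// Hypothetical fallible function: instead of returning an error code, it
// reports failure by invoking the caller-supplied on_error callback.
int parse_number(const std::string &input,
                 const std::function<void(const std::string &)> &on_error) {
    try {
        return std::stoi(input);
    } catch (const std::exception &) {
        on_error("not a number: " + input);
        return 0; // fallback value on the success-path return
    }
}

int main() {
    int value = parse_number("abc", [](const std::string &message) {
        // The error handling code lives here, away from the success path.
        std::cerr << "error: " << message << "\n";
    });
    std::cout << "value: " << value << "\n";
    return 0;
}
```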