Node's EventEmitter implementation, by default, throws if an 'error' event is emitted and no handler is listening for it. This is both great and awful. If the default were to do nothing (as with normal events), you would miss that important errors are happening. However, it also means unimportant things that never come up during development, like wonky network traffic, can cause hard crashes in production.
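To see the difference, here's a minimal sketch with a bare EventEmitter:

```js
var EventEmitter = require('events').EventEmitter

var ee = new EventEmitter()

// a normal event with no listeners is silently dropped
ee.emit('something', 'ignored')

// an 'error' event with no listeners is thrown instead,
// crashing the process
ee.emit('error', new Error('boom'))
```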
One problem with this is that most of the core APIs are EventEmitters, or streams (which are themselves EventEmitters).
```js
var http = require('http')
var fs = require('fs')

http.get('http://nrn.io:2222', function (res) {
  res.pipe(fs.createWriteStream('./na/foo'))
})
```
In theory this is fine, except that I don't have any service running on port 2222, so I get an `ECONNREFUSED` error that crashes my process. But where does that error come from, and how do we catch it? The first instinct is to wrap the call in a try/catch.
```js
var http = require('http')
var fs = require('fs')

// broken: the try/catch never sees the async error
try {
  http.get('http://nrn.io:2222', function (res) {
    res.pipe(fs.createWriteStream('./na/foo'))
  })
} catch (e) {
  console.log(e)
}
```
This is still broken, because the error happens asynchronously, in the future: we don't find out that our connection has been refused until some undetermined time after our code has finished executing. It turns out http.get returns an EventEmitter we can subscribe to, and that is where the ECONNREFUSED error gets emitted.
```js
var http = require('http')
var fs = require('fs')

http.get('http://nrn.io:2222', function (res) {
  res.pipe(fs.createWriteStream('./na/foo'))
}).on('error', function (e) {
  console.error(e)
})
```
Now we're just logging the error instead of crashing the process. But what if we fix the call to point at something that exists, so we can see how these error events play out in the streams themselves?
We get an ENOENT, because we are trying to write to a file inside a folder that doesn't exist. We already know this happens some time after the code executes, so let's skip the try/catch and listen for the error event.
```js
var http = require('http')
var fs = require('fs')

http.get('http://nrn.io', function (res) {
  res.pipe(fs.createWriteStream('./na/foo'))
    .on('error', function (e) {
      console.error(e)
    })
}).on('error', function (e) {
  console.error(e)
})
```
Well, that worked the first time, but only because the pipe method happens to return the stream being piped to, not the stream being piped from. And it misses the more sinister problem here: errors aren't propagated through streams that are piped together. Only the write stream's errors are handled above; if our response stream has a problem, it will still throw a hard error and crash the process.
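One way around that is to attach an 'error' handler to each stream in the pipeline individually; a minimal sketch:

```js
var http = require('http')
var fs = require('fs')

http.get('http://nrn.io', function (res) {
  var file = fs.createWriteStream('./na/foo')

  // errors on the readable side (the response)
  res.on('error', function (e) {
    console.error('response error', e)
  })

  // errors on the writable side (the file)
  file.on('error', function (e) {
    console.error('write error', e)
  })

  res.pipe(file)
}).on('error', function (e) {
  console.error('request error', e)
})
```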
Listen for all the errors that you know can happen and can gracefully clean up from. More importantly, plan for your process to crash. If you add blanket error handlers that swallow errors they don't understand, you can leave your process in a state that you have never tested, and that is probably broken. You're better off writing your application so that crashing isn't such a big deal: when these unexpected things come up, crash, collect the details you need to fix them in a future version, and spin up a new process.
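One sketch of that approach, using Node's process-level hook (where the details go, and which supervisor restarts you, is up to your setup):

```js
// collect what you need, then crash; let a supervisor
// (forever, upstart, a cluster master, etc.) spin up a new process
process.on('uncaughtException', function (err) {
  console.error('uncaught exception, shutting down:', err.stack)
  process.exit(1)
})
```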