One of the easiest things to do in most languages is executing steps in sequence. In most languages, but not JavaScript. In JavaScript the way people first learn to write this is as an ever-deepening series of anonymous callbacks; it is one of the leading causes of "callback hell".
```js
var fs = require('fs')
var https = require('https')
var streamCb = require('stream-cb')

https.get('https://github.com/nrn.keys', function (response) {
  response.pipe(streamCb(function (e, body) {
    if (e) throw e
    fs.writeFile('./example-keys', body, function (aargh) {
      if (aargh) throw aargh
      console.log('are we there yet?')
    })
  }))
})
```
Now I'm going to ask for a little willful suspension of disbelief here, because for some reason I couldn't contrive an example that wouldn't be super easy with streams. So imagine your streams brain is broken. Or that these are APIs that don't support streaming. Or maybe we just wanna use callbacks for example-reason.
There are many problems with writing code this way. It's not so bad to write, but it's inflexible and hard to refactor, and it ends up with a lot of redundancy. Each inner function also has access to more and more accumulated state, so the deeper callbacks become increasingly hard to read and understand.
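To make that last point concrete, here's a contrived sketch (the `getUser`/`getPosts`/`getComments` steps are made up, and synchronous for brevity) showing how each level of nesting widens the state in scope:

```js
// Hypothetical synchronous stand-ins, not a real API -- just enough to run.
function getUser (id, cb) { cb(null, { id: id, name: 'nrn' }) }
function getPosts (user, cb) { cb(null, ['post-1', 'post-2']) }
function getComments (posts, cb) { cb(null, ['nice']) }

var seen = []
getUser(1, function (err, user) {
  // in scope here: err, user
  getPosts(user, function (err, posts) {
    // in scope here: user, posts -- and this err shadows the outer one
    getComments(posts, function (err, comments) {
      // in scope here: user, posts, comments -- every inner callback
      // closes over all the state accumulated above it
      seen.push(user.name, posts.length, comments.length)
    })
  })
})
```

Nothing stops the innermost callback from reaching up and touching anything above it, which is exactly what makes these pyramids hard to pull apart.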
First we pull out the steps we want to run serially, and instead of leaving them as inline anonymous callbacks, we name them. There is a lot of power in a name: it lets us communicate about, reuse, and better understand what we are trying to do.
```js
var fs = require('fs')
var https = require('https')
var streamCb = require('stream-cb')

https.get('https://github.com/nrn.keys', poolResponse(writeKeys(logSnark)))

function poolResponse (cb) {
  return function (resp) {
    resp.pipe(streamCb(cb))
  }
}

function writeKeys (cb) {
  return function (err, body) {
    if (err) return cb(err)
    fs.writeFile('./example-keys', body, cb)
  }
}

function logSnark (err) {
  // All errors in this series end up here.
  if (err) throw err
  console.log('are we there yet?')
}
```
This still has some unnecessary boilerplate in each function: each step has to know about setting up the chain of steps and forwarding errors through to the end.
```js
var fs = require('fs')
var https = require('https')
var streamCb = require('stream-cb')

https.get('https://github.com/nrn.keys', function (stream) {
  // kicking this off is weird since https.get
  // doesn't follow the (err, result) pattern.
  serial(
    [ poolResponse
    , writeKeys
    ],
    logSnark
  )(null, stream)
})

function poolResponse (done, resp) {
  resp.pipe(streamCb(done))
}

function writeKeys (done, body) {
  fs.writeFile('./example-keys', body, done)
}

function logSnark (err) {
  // All errors in this series end up here.
  if (err) throw err
  console.log('are we there yet?')
}

function serial (steps, end) {
  return steps.reduceRight(wrapStep, end)
}

function wrapStep (nextStep, thisStep) {
  return function (err, res) {
    if (err) return nextStep(err)
    thisStep(nextStep, res)
  }
}
```
Now each step gets a `done` function to call with its error or results, and receives the results of the previous step. If an error occurs it just propagates through to the end. `serial` and `wrapStep` should be moved into their own module (or just use `async.series` or similar). Now we're talking about discrete asynchronous steps that can be reused and trivially reshuffled, with error handling in a single place.
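To see the "trivially reshuffled" claim in action, here's a standalone sketch using the same `serial`/`wrapStep` helpers with two made-up steps (`double` and `addTen` are mine, not from any library), run in both orders:

```js
// The serial/wrapStep helpers, repeated here so this runs on its own.
function serial (steps, end) {
  return steps.reduceRight(wrapStep, end)
}

function wrapStep (nextStep, thisStep) {
  return function (err, res) {
    if (err) return nextStep(err)
    thisStep(nextStep, res)
  }
}

// Two hypothetical steps in the (done, input) shape used above.
function double (done, n) { done(null, n * 2) }
function addTen (done, n) { done(null, n + 10) }

var log = []
function finish (err, n) {
  // All errors in the series end up here.
  if (err) throw err
  log.push(n)
}

serial([double, addTen], finish)(null, 1) // (1 * 2) + 10 = 12
serial([addTen, double], finish)(null, 1) // (1 + 10) * 2 = 22
```

Reordering the pipeline is just reordering the array; no step had to change, because none of them knows what comes before or after it.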
But in this case, I really should have used streams.