Bioinformatics Zen

A blog about bioinformatics and mindfulness by Michael Barton

Software development, the happy path, and confirmation bias

The "happy path" in software development describes designing software for scenarios where everything goes as expected. The user provides the data in exactly the right format, the incoming message body doesn't contain any unusual characters, no two processes ever try to update the same database field at the same time. The term "happy path" is used derogatorily - the implied meaning is the team behind the project are guilty of intellectual laziness - if they had just done their homework and thought about the problem this wouldn't have happened.

I've recently read two books that I found extremely interesting and that exposed me to this topic for the first time: Black Box Thinking, and The Field Guide to Understanding Human Error. One of the messages expressed in both of these books is that identifying the causative error is easy in hindsight but difficult in foresight. An analogy, paraphrased from Black Box Thinking, would be a hypothetical plane crash caused by the fuel running out:

The first scenario might seem far-fetched and contrived, but how many times have we assumed the reason another project was delayed or buggy was that the team working on it was simply "stupid" or "incompetent"? I think that in many situations this is the real intellectual laziness. This is "confirmation bias": we look for reasons that confirm what we already think - we are good developers or scientists and we would never make these kinds of mistakes or have these kinds of problems. It's easy to blame and to make these kinds of generalisations about other people's competency, but for the team responsible for the project there may have been a tight deadline, changing requirements, a lack of developers, or underlying assumptions that turned out to be wrong. All of these can cause a project to have problems, but they tend to be much harder to explain than simply assigning blame.

After reading both of the books I mentioned above, I believe that if you really care about developing good software or doing good research, resorting to blame is unacceptable. Instead, if we care about learning from our mistakes, we must always ask why the problems happened. We must dig down into what went wrong, learn from it, and update our own processes. Terms like "stupid", "incompetent", or "these things sometimes happen" mean that we'll never learn from our own, or others', mistakes and therefore never improve.

Addendum

What I've written so far is very hand-wavy, so here are what I think are some concrete suggestions you could try on a research or software project:

I'd also recommend a couple of really exceptional books related to these topics, which I've only been able to cover briefly in a blog post. I found these all excellent reads: