Since I was introduced to Test Driven Development several years ago, I've occasionally been called upon to set the practice aside and go back to doing development without writing my tests first. In this, I've found TDD to be like version control: once you start using it as part of your regular practice, it feels incredibly retrograde to do any development without it.

In this post, I’ll discuss the advantages of development with TDD, and the pitfalls of doing the testing afterwards. The methodology of TDD will be the subject for a later post.

You may notice that I refer to the code under development as "routines". While TDD has traditionally had a close association with object-oriented programming, the practice can be used with any programming model. I use this nomenclature to divorce the concept of TDD from its usual OOP roots.

Traditionally, unit tests are written after the implementation code. Instinctively, this seems like the only way to write tests: how can you write a test for something that doesn't exist yet? In practice, this leads to testing the internals rather than the behaviour, as well as several other anti-patterns that aren't obvious until you are regularly doing TDD.
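As a hypothetical sketch (the routine and its names are invented for illustration), a behaviour-focused test asserts only on what callers can observe, leaving the internals free to change:

```python
import unittest

# Hypothetical routine used for illustration.
def format_price(cents):
    """Render a non-negative integer number of cents as a dollar string."""
    return f"${cents // 100}.{cents % 100:02d}"

class FormatPriceTest(unittest.TestCase):
    # Behaviour-focused: the assertions describe what callers observe,
    # so the implementation can be rewritten without breaking the tests.
    def test_formats_whole_dollars(self):
        self.assertEqual(format_price(500), "$5.00")

    def test_pads_single_digit_cents(self):
        self.assertEqual(format_price(1205), "$12.05")

unittest.main(argv=["behaviour"], exit=False)
```

A test that instead asserted on how the string was assembled internally would break on any refactor, even one that preserves the observable behaviour.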

Many articles dismissing TDD fall victim to the Nirvana fallacy¹, arguing that since TDD is not a perfect fit for all code, it's not worth doing anywhere. Prototyping, discovery, and building a first proof of concept can be a bad fit for TDD; front-end and rendering routines are commonly difficult to test; and retrofitting legacy procedural code for unit testing presents its own challenges. These real problems with using TDD for some facets of development can make TDD seem like a niche practice with little real-world application. In reality, a large amount of modern coding effort consists of many layers of code dedicated to wrangling data from one form to another, as commonly found in line-of-business code and internal game mechanics code, where a given set of defined inputs should always produce the same output. This code is an ideal place to practice TDD. Future changes are usually bug fixes and enhancements to the existing state, rather than wholesale changes. In this code, TDD gives a development team confidence in the system's behaviour, even as requirements evolve over time.

When developing a routine under strict TDD, the routine doesn't exist until a test calling the routine has been written and committed. This has an immediate effect: the developer is forced to consider how they would like to interact with the routine, rather than how they would code the internals of the routine itself. Commonly, this means that APIs are focused on being simpler and clearer for the callers, rather than simpler to implement. While this can make the implementation slightly more difficult, it also means that the code is usually more generic, and thus more reusable. Since there's more than one test, but only one implementation, it's generally better to hide the complexity inside the routine, rather than forcing the caller to handle it.
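A sketch of how this might look in practice (all names are hypothetical): the test is written first, against the interface the caller would like to have, and the implementation follows.

```python
import re
import unittest

# Written FIRST: this test pins down the interface we wish existed.
# Before the implementation below is added, running it fails, which is
# the expected "red" step of the TDD cycle.
class ParseDurationTest(unittest.TestCase):
    def test_accepts_human_friendly_strings(self):
        # The caller passes a plain string and gets seconds back; the
        # parsing complexity is hidden inside the routine.
        self.assertEqual(parse_duration("2m30s"), 150)

    def test_handles_hours(self):
        self.assertEqual(parse_duration("1h5s"), 3605)

# Written SECOND: the minimal implementation that makes the tests pass.
def parse_duration(text):
    seconds_per_unit = {"h": 3600, "m": 60, "s": 1}
    return sum(int(amount) * seconds_per_unit[unit]
               for amount, unit in re.findall(r"(\d+)([hms])", text))

unittest.main(argv=["tdd"], exit=False)
```

Note that the test describes the caller's ideal experience (one plain string in, one number out); the awkwardness of tokenising the input lives inside the routine rather than being pushed onto every call site.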

As a pleasant side effect, writing the unit tests first usually forces the developer to implement some form of dependency injection. While some frameworks allow doing this via reflection, most developers² tend towards constructor injection over time, as it allows for a simpler and more robust solution by enforcing the dependencies at compile time. This tendency is one of the key features of test driven development.
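A minimal sketch of constructor injection, using an invented `SessionTimer` class: the clock dependency is passed in explicitly, so a test can substitute a controllable fake without any reflection machinery.

```python
import datetime

class SessionTimer:
    def __init__(self, clock):
        # The dependency is explicit: a SessionTimer cannot be
        # constructed without supplying a clock.
        self._clock = clock
        self._started = clock()

    def elapsed_seconds(self):
        return (self._clock() - self._started).total_seconds()

# Production code would pass a real clock such as datetime.datetime.now;
# a test supplies a fake it can advance deterministically.
fake_now = [datetime.datetime(2024, 1, 1, 12, 0, 0)]
timer = SessionTimer(clock=lambda: fake_now[0])
fake_now[0] += datetime.timedelta(seconds=90)
print(timer.elapsed_seconds())  # 90.0
```

If `SessionTimer` instead reached for the system clock internally, the test would have no seam through which to control time, and would be flaky by construction.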

Once the implementation is written and the test is run, two things have implicitly occurred. First, there is now a record of code that can successfully call the routine to produce the desired result. This makes it much easier for programmers who work on the code in the future to understand how to interact with the routine. For this reason, it's important not to make the tests too abstract, as this impairs the readability of the tests in the future.

Secondly, by having a failing test first and writing the implementation second, a proof is generated that this implementation causes the desired change in behaviour. This may seem like an inconsequential effect, but as the number of scenarios handled by a given routine increases, it becomes more common to discover that a given code change does not produce the expected change in behaviour, or that the required output already occurs without it. This is also helpful in catching mistakes in the test code itself.

Without this practice, several anti-patterns commonly occur. Unit tests written post-implementation tend to cover only the core scenario, or "happy path"; edge cases are commonly forgotten or skipped. Even when there's buy-in to thoroughly test all the scenarios, doing so requires running and maintaining a code-coverage analysis tool, rather than simply changing the order in which developers write implementation and test code to get the same effect.
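As an illustration (the routine is invented), writing the tests first naturally enumerates the edge cases alongside the happy path, because each case is specified before any of them is "already working":

```python
import unittest

# Hypothetical routine used to illustrate edge-case coverage.
def safe_percentage(part, whole):
    """Return part as a percentage of whole, guarding the zero case."""
    if whole == 0:
        return 0.0
    return 100.0 * part / whole

class SafePercentageTest(unittest.TestCase):
    # The happy path, which post-implementation tests usually cover...
    def test_happy_path(self):
        self.assertEqual(safe_percentage(1, 4), 25.0)

    # ...and the edge cases that tend to be skipped once the code is
    # already seen to be working.
    def test_zero_whole_does_not_divide_by_zero(self):
        self.assertEqual(safe_percentage(5, 0), 0.0)

    def test_zero_part(self):
        self.assertEqual(safe_percentage(0, 10), 0.0)

unittest.main(argv=["edges"], exit=False)
```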

More commonly, unit testing is skipped entirely, and a set of manual tasks is used to confirm the feature works as expected. Unfortunately, the further down the stack a given piece of code is, the more likely that its various edge cases are not reached during manual testing. As the feature is seen to be working, further work to confirm correctness is often seen as a waste of time. As the code base grows and the team evolves over time, the working memory of how to test all the edge cases of a feature is lost, and manual testing moves on to the new features in the application. The effects of this are usually not felt until much later, when some crucial, but not-recently-tested, feature fails, usually at the worst possible time.

With proper automated tests and TDD, you can be sure that not only does the core path work today, but that the edge cases will still work as expected in the future, and that future changes that impact the less common cases will be noticed before they impact your customers.


Further reading:

Test Driven Development: By Example, by Kent Beck - The seminal work on TDD.

Ian Cooper: TDD, where did it all go wrong - Lessons and pitfalls in TDD. Recommended watching.


  1. The nirvana fallacy is a type of argument that states that because something isn’t perfect, it’s therefore useless. See Wikipedia. ↩︎

  2. I’ve yet to see anyone consciously decide that constructor injection is wrong, and instead use setter injection, or worse yet, field injection. But at least three (Alef Arendsen, 2007) notable (Steve Schols, 2012) programmers (Petri Kainulainen, 2013) decided that constructor injection was the most correct way. ↩︎