Test-driven development is a well-proven best practice, and it is here to stay. However, it is by no means a panacea, and too many unit tests may even be detrimental to a large project. Besides, I see the risk that too much attention may be given to them at the expense of other best practices, and even of other types of tests.
It has been a while since unit tests and test-driven development hit the mainstream. The idea is very tempting: before implementing a piece of basic functionality your system is supposed to offer, write a small snippet of runnable code that tests it. Run the test and watch it fail. Then implement the functionality just until the test passes, and no further. Build your system this way step by step, one basic piece of functionality after another, and be happy.

But for big projects this has consequences. Code is a liability, as someone once said, and test code is still code: code that must be understood (yes, unit tests should be as clean and simple as possible, but it doesn't always work out that way) and maintained. What happens when you make a major refactoring in your architecture? That is a natural process in any system. Requirements change. Sooner or later you will have to extract part of the logic of a growing method into another class, or even into a whole new set of them. Sometimes you will remove functionality that is no longer needed, or that is simply wrong under the new requirements. And then you will frown at the hundreds of tests you once wrote, which are valuable because they ensure that some necessary business logic still works. You will find yourself either adapting them, if they were simple, well written, and still reasonably related to the new implementation, or throwing them away and trying to define completely new tests that hopefully cover at least as much of the business logic as the old ones did.
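The red-green cycle described above can be sketched in a few lines. This is a minimal illustration, not anyone's real project: the `slugify` function and its test are hypothetical, written here with Python's built-in `unittest` module.

```python
import unittest


# Step 1: write the test first. Running it before slugify exists
# fails with a NameError -- that is the "red" phase.
class SlugifyTest(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")


# Step 2: implement just enough production code to make the test
# pass, and no more -- the "green" phase.
def slugify(title):
    return title.strip().lower().replace(" ", "-")
```

Run the suite with `python -m unittest`; each new piece of functionality gets the same treatment: a failing test first, then the minimal code that satisfies it.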
What about speed of implementation? It is not always easy to write code test-first for a completely new requirement, because writing tests before production code means manipulating something that does not yet exist. This way of thinking, although healthy for exercising the brain and one's programming practice, may get in the way of an urgent new feature. At the end of the day, did your unit tests really save your neck? Nice code is beautiful and theoretically scalable, but who cares? In the end only the results matter, and it boils down to this: Can you and your team change your program with less effort? Has your program proven more reliable for its users and customers? I am not sure that fastidiously developing test-first and carefully applying every unit-testing best practice is sufficient, or even necessary, to achieve this. If you are a good programmer, you will take care to write good programming units, but while doing so it is easy to miss the big picture. Bugs will rarely arise from your local code in isolation; they arise from the interaction between that code and everything else: communication with services, other components, data emerging from user interaction. Besides, religiously focusing on unit-testability will make you reject design approaches that are elegant and easy to understand and change, but that do not fit into this utopian world of unit tests.
Unit tests are very close to implementation detail.
Consider integration tests
Integration tests don't enjoy the same sexy reputation as unit tests. They are harder to write, take longer to run, and depend on many real components rather than mock objects, so when they fail, the cause is not as obvious as with a unit test. However, once you have set up an integration test, you will need to touch it again far less often than a unit test, and removing one altogether will be rarer still. When a new requirement comes into play, you will probably only need to adjust the corresponding integration test by defining a new "arrange", "act", or "assert" step. With a good base of integration tests, you have a stronger safety net that gives you a lot of flexibility to change your code. Think about upgrading the framework you use to a new, state-of-the-art major version: upgrading your ASP.NET MVC 3 application to ASP.NET Core 1.0, for example. It may even seem straightforward if you carefully map the architectural concepts of one version of the framework onto the other. But how do you guarantee that everything still works as expected? And have you ever heard of analysis paralysis? Think about refactoring part of the system into a design pattern that makes sense for your code today. Of course, all these considerations assume that your software is expected to be highly reliable. Reliability is, in my opinion, software's greatest asset. Everybody hates software that crashes or gets worse with every update; everybody also hates slow software. But software is a means to an end, and users will love a piece of software that does not let them down.
Integration tests are very close to domain-level requirements.
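The arrange/act/assert structure mentioned above can be sketched as follows. This is an illustrative example under assumptions of my own: the `OrderRepository` class and its schema are hypothetical, and an in-memory SQLite database stands in for the "real component" (a real database rather than a mock), using only Python's standard library.

```python
import sqlite3
import unittest


class OrderRepository:
    """Hypothetical data-access component under test."""

    def __init__(self, conn):
        self.conn = conn

    def add(self, customer, total):
        self.conn.execute(
            "INSERT INTO orders (customer, total) VALUES (?, ?)",
            (customer, total),
        )

    def total_for(self, customer):
        row = self.conn.execute(
            "SELECT SUM(total) FROM orders WHERE customer = ?",
            (customer,),
        ).fetchone()
        return row[0] or 0


class OrderIntegrationTest(unittest.TestCase):
    def test_totals_are_aggregated_per_customer(self):
        # Arrange: set up a real (in-memory) database, not a mock.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
        repo = OrderRepository(conn)

        # Act: exercise the component against the real store.
        repo.add("alice", 10.0)
        repo.add("alice", 5.5)

        # Assert: check the domain-level outcome, not internals.
        self.assertEqual(repo.total_for("alice"), 15.5)
```

Note that the assertion talks about what the business cares about (a customer's total), not about implementation details such as which SQL statements ran, which is why a test like this tends to survive refactorings that would break a mock-based unit test.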