- refactoring code breaks lots of unit tests (which goes against the premise of refactoring),
- tests make people less productive.
Ian Cooper states that the original ideas were misunderstood, and those misunderstandings were then replicated in numerous TDD/unit-testing tutorials all over the Internet. In almost all of them, people suggest writing a test fixture per class and then a test for each method of the class under test. They also tell you to mock all dependencies so that you test in isolation... and that's where it all went wrong.
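To make the criticised style concrete, here is a minimal Python sketch (the talk's examples are in C#; all names here are hypothetical) of a fixture that mirrors one class and mocks every collaborator:

```python
from unittest.mock import Mock


class OrderService:
    def __init__(self, repository, mailer):
        self._repository = repository
        self._mailer = mailer

    def place_order(self, order):
        self._repository.save(order)
        self._mailer.send_confirmation(order)


def test_place_order_saves_to_repository():
    repository, mailer = Mock(), Mock()
    service = OrderService(repository, mailer)

    service.place_order("order-1")

    # The assertion is about *how* the work is done, not what the user
    # observes: rename `save` or inline the repository during a refactoring
    # and this test goes red, even though the behaviour is unchanged.
    repository.save.assert_called_once_with("order-1")
```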
Then Ian Cooper presents the Zen of TDD:
Avoid testing implementation details, test behaviours.

What is a unit test?
A unit test is a test that runs in isolation. Nothing more, nothing less. This means a unit test can touch the database, the file system, etc., as long as it is isolated from the other tests. The reason we usually mock or fake external services and interprocess communication is that it's hard to fulfil the isolation requirement otherwise (and it's slow). But even with mocks/fakes, a unit test will most probably go through several layers of the system, since a behaviour is rarely expressed by a single class.
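By that definition, a test can touch a real database and still qualify as a unit test, provided each test stays isolated from the others. A minimal sketch, assuming an in-memory SQLite database per test:

```python
import sqlite3


def create_store():
    # Each test gets its own in-memory database, so a real database is
    # exercised while the tests remain fully isolated from each other.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    return conn


def test_stores_and_retrieves_a_user():
    store = create_store()
    store.execute("INSERT INTO users VALUES (?)", ("alice",))
    names = [row[0] for row in store.execute("SELECT name FROM users")]
    assert names == ["alice"]
```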
A test-case-per-class approach fails to capture the ethos of TDD, because adding a new class is not the trigger for writing a test - the trigger is implementing a requirement (a behaviour). This leads to far fewer unit tests and gives the freedom to switch implementations.
One can also think about unit tests as testing the API of the modules. Tests should tell a story about the API - tell what the system does. One of the reasons behind the red-green-refactor cycle (writing a dirty solution first) is that it lets the programmer arrive at a genuinely good design while refactoring - even if new public classes are introduced there, they don't need new unit tests.
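As an illustration, a behaviour-focused test talks only to the module's public entry point, so the internals can be restructured freely without breaking it. A small sketch with made-up names:

```python
# Public entry point of a hypothetical pricing module. The tests state the
# behaviour ("big baskets get a discount"); whether the internals are one
# function or five classes is invisible to them.
def checkout_total(prices, basket, discount_threshold=100, discount=0.1):
    total = sum(prices[item] for item in basket)
    if total >= discount_threshold:
        return round(total * (1 - discount), 2)
    return total


def test_large_baskets_get_a_discount():
    prices = {"book": 40, "lamp": 60}
    assert checkout_total(prices, ["book", "lamp"]) == 90.0


def test_small_baskets_pay_full_price():
    assert checkout_total({"book": 40}, ["book"]) == 40
```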
New unit tests can be a great aid when refactoring or writing a new implementation. But they can also be freely deleted after the fact, as keeping them around would inhibit future refactorings.
When testing implementation details, mocks become a huge problem, since they know too much about the implementation. Refactoring then makes the build go red, and soon the redness gets accepted as the status quo. As a result, people lose trust in unit tests. This leads to the testing ice-cream cone antipattern: few unit tests, a fair number of slow "integration tests", and a lot of manual or UI-driven tests.
And by the way, the "integration test" is a scam.
Then Ian Cooper goes on to give some more tips about putting it all together. First, he presents the hexagonal architecture, more recently known as ports and adapters: the idea is that the core logic of the system is separated from the infrastructure. The boundary between the domain and the outside world is called a port (incoming or outgoing). This decouples the domain from technology. Adapters do the job of translating from framework-specific interfaces (database, REST, SOAP, message queue, UI, etc.) to the interfaces provided by the ports. The key insight is that a unit test is just another adapter. Adapters can be mocked or faked, and a small number of integration tests can be put in place to confirm the proper hookup of ports and adapters. Going back to unit testing, the key is to develop and accept against tests written on a port.
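A minimal sketch of that idea, assuming a hypothetical currency-conversion domain: the domain depends only on a port, and the unit test is just another adapter plugged into it:

```python
from typing import Protocol


class RateSource(Protocol):
    """Outgoing port: how the domain asks the outside world for FX rates."""
    def rate(self, currency: str) -> float: ...


class PriceQuoter:
    """Domain logic; it knows only the port, not HTTP, SQL, or any framework."""
    def __init__(self, rates: RateSource):
        self._rates = rates

    def quote(self, amount: float, currency: str) -> float:
        return round(amount * self._rates.rate(currency), 2)


class FixedRates:
    """Fake adapter: the unit test's stand-in for the real rate service."""
    def __init__(self, rates):
        self._rates = rates

    def rate(self, currency: str) -> float:
        return self._rates[currency]


def test_quotes_are_converted_at_the_current_rate():
    # The test drives the port directly; no web server, no real database.
    quoter = PriceQuoter(FixedRates({"EUR": 1.1}))
    assert quoter.quote(100, "EUR") == 110.0
```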
At the end, there are also some tips about evident test data, namely the suggestion to use the Builder pattern for constructing setup data.
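A small sketch of such a test-data builder (hypothetical Order domain): the builder supplies valid defaults, so each test only spells out the values relevant to the behaviour it checks:

```python
class Order:
    def __init__(self, customer, items, express):
        self.customer, self.items, self.express = customer, items, express


class OrderBuilder:
    """Supplies valid defaults so tests state only what they care about."""

    def __init__(self):
        self._customer = "any-customer"
        self._items = ["any-item"]
        self._express = False

    def with_express_shipping(self):
        self._express = True
        return self  # returning self allows fluent chaining

    def build(self):
        return Order(self._customer, self._items, self._express)


def test_express_orders_are_flagged():
    order = OrderBuilder().with_express_shipping().build()
    assert order.express is True
```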
Here's the presentation from NDC 2013:
Ian Cooper: TDD, where did it all go wrong from NDC Conferences on Vimeo.
(Here's the presentation on InfoQ).
Looks like the guys behind Google Code came to the same conclusions: