The XP Rule

Extreme Programming has a simple answer to our question about the required test quantity:

"Test everything that could possibly break." [2]

Unfortunately, this sentence may initially sound like a prediction from the oracle of Delphi: if I knew everything that could possibly break, I would naturally test it, or take care not to get it wrong in the first place. The actual core of the statement lies elsewhere: every developer and every team makes different mistakes. Looking at your own mistakes over time, you will find that they are often similar. [3] The same applies to a team, if there is one. And hopefully we keep learning, with the result that over the long run we make different mistakes, sometimes fewer than before, but still mistakes. The optimal set of tests therefore includes exactly those tests that find or prevent our actual errors.

In practice, this means that each team has to find its own optimal testing level and, above all, feel its way iteratively towards the right volume. Note that this volume can change over time. If we discover that many faults are found only after a release, we have either too few tests or the wrong ones. We then have to examine these bugs closely, write unit tests for them, and add test cases for recurring error types to the test suite in advance.
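The last step above, turning each post-release bug into a permanent test, can be sketched as a regression test. The function `format_price` and the bug it once had are purely hypothetical; the pattern is what matters: reproduce the failing input in a test, fix the code, and keep the test in the suite so the error type cannot silently return.

```python
import unittest

def format_price(amount: float) -> str:
    """Format an amount with exactly two decimal places.

    Hypothetical example: an earlier version dropped the trailing
    zero and rendered 12.5 as "12.5" instead of "12.50".
    """
    return f"{amount:.2f}"

class FormatPriceRegressionTest(unittest.TestCase):
    def test_trailing_zero_is_kept(self):
        # This exact input once triggered the (hypothetical) production bug.
        self.assertEqual(format_price(12.5), "12.50")

    def test_zero_amount(self):
        # A recurring error type (boundary values) gets a test in advance.
        self.assertEqual(format_price(0), "0.00")

if __name__ == "__main__":
    unittest.main()
```

The test names record why the test exists; when a test of this kind fails later, the team immediately knows which old mistake it was guarding against.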

In contrast, there will be times when we feel that a large testing effort slows us down inappropriately, with no apparent benefit when refactoring or fitting in new requirements. In such a situation, we will naturally try to identify unnecessary tests for errors that never actually occur.

[2] Jeffries [00, Ch. 34] dedicates an entire chapter to this sentence.
[3] This phenomenon is described in the literature [Weinberg98].