Planning an effective testing process involves a variety of testing approaches, each with its own goals, advantages, and disadvantages. In this post I will cover the most common stages of testing a software system typically undergoes:
## Unit Tests

Written by the developer concurrently with the code, these tests focus on the smallest element of executable code (typically a method or property in object-oriented languages). Depending on the organization’s expectations for software development, unit testing might also include static code analysis, data flow analysis, metrics analysis, peer code reviews, code coverage analysis, and other software verification practices.
**Pros:**

- With their fine granularity, they pinpoint exactly which code triggered a failure.
- Because they are developed concurrently with the code, they immediately validate and capture the developer’s intentions.

**Cons:**

- Because the same person writes both, any misunderstanding of the “specification” is likely to be reflected in both the program code and the test code.
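As a minimal sketch of what such a test might look like, here is a unit test for a hypothetical `discount_price` function (both the function and its expected behavior are invented for illustration):

```python
import unittest

def discount_price(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountPriceTests(unittest.TestCase):
    # Fine granularity: each test exercises one behavior of one function,
    # so a failure points directly at the offending code.
    def test_applies_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(80.0, 0), 80.0)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            discount_price(80.0, 120)
```

A suite like this runs with `python -m unittest`; note how the misunderstanding risk applies here too: if the developer misreads the discount rules, both the function and its assertions will encode the same mistake.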
## Acceptance Tests

In the most general sense, these tests verify that the work product will be acceptable to the people who will consume it. The two most common types are User Story acceptance tests (typically defined in the “Definition of Done” for the User Story) and User Acceptance Testing (UAT).
**Pros:**

- Since the tests are defined by the person who will be accepting the work, they provide a closed feedback loop that mitigates communication errors.

**Cons:**

- For UAT in particular, there is often a large gap in time between developing the code and executing the tests. This tends to raise the cost of fixing any defects found, compared to tests executed more promptly.
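A user-story acceptance test is typically phrased in the language of the story rather than in terms of code internals. A minimal sketch, assuming a hypothetical story (“As a shopper, when I apply a 10% coupon, my total is reduced by 10%”) and a hypothetical `ShoppingCart` API, with a tiny stub so the example runs:

```python
class ShoppingCart:
    """Hypothetical system under test, stubbed here so the example runs."""
    def __init__(self):
        self._items = []
        self._coupon_percent = 0

    def add_item(self, name, price):
        self._items.append((name, price))

    def apply_coupon(self, percent):
        self._coupon_percent = percent

    def total(self):
        subtotal = sum(price for _, price in self._items)
        return subtotal * (100 - self._coupon_percent) / 100

def test_coupon_reduces_total():
    cart = ShoppingCart()             # Given a cart with one $50 item
    cart.add_item("headphones", 50.0)
    cart.apply_coupon(10)             # When I apply a 10% coupon
    assert cart.total() == 45.0       # Then the total is reduced by 10%

test_coupon_reduces_total()
```

The Given/When/Then comments mirror how the story’s author would phrase the Definition of Done, which is what closes the feedback loop between the person accepting the work and the code being written.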
## Integration Tests

These tests validate that individual elements work properly in conjunction with other elements, without requiring the entire system to be in place.
**Pros:**

- They provide increasing strategic value in validating functionality.

**Cons:**

- It becomes increasingly difficult to pinpoint the exact cause of a failure, unless earlier tests have already validated the functionality at a more granular level.
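To illustrate, here is a sketch of an integration test that exercises two hypothetical components together (an `OrderService` backed by an `OrderStore` using a real, in-memory SQLite database) without standing up the whole system:

```python
import sqlite3

class OrderStore:
    """Hypothetical persistence component."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)")

    def save(self, item):
        cur = self.conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
        return cur.lastrowid

    def find(self, order_id):
        row = self.conn.execute(
            "SELECT item FROM orders WHERE id = ?", (order_id,)).fetchone()
        return row[0] if row else None

class OrderService:
    """Hypothetical business component that depends on the store."""
    def __init__(self, store):
        self.store = store

    def place_order(self, item):
        if not item:
            raise ValueError("item required")
        return self.store.save(item)

def test_service_and_store_integrate():
    # An in-memory database lets the two components interact for real,
    # without requiring the entire system (web server, auth, etc.).
    conn = sqlite3.connect(":memory:")
    store = OrderStore(conn)
    service = OrderService(store)
    order_id = service.place_order("widget")
    assert store.find(order_id) == "widget"

test_service_and_store_integrate()
```

If this test fails, the defect could be in either component or in the contract between them; that is exactly why prior unit tests on `OrderStore` and `OrderService` individually make the failure much easier to localize.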