In test-driven development, writing tests before implementation raises questions about testing [[access modifiers|private methods]] versus testing only through [[Interface (computing)|public interfaces]]. This choice affects the design of both test code and production code.
===Fakes, mocks and integration tests===
Unit tests are so named because they each test ''one unit'' of code. A complex module may have a thousand unit tests and a simple module may have only ten. The unit tests used for TDD should never cross process boundaries in a program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage developers from running the whole suite. Introducing dependencies on external modules or data also turns ''unit tests'' into ''integration tests''. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause of the failure.
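For illustration, a unit test of this kind might look like the following minimal sketch, using Python's standard <code>unittest</code> module and a hypothetical <code>full_name</code> function; it runs entirely in-process and touches no external dependency:

<syntaxhighlight lang="python">
import unittest

def full_name(first, last):
    """Unit under test: pure logic, no process boundaries or external services."""
    return f"{first.strip()} {last.strip()}"

class FullNameTest(unittest.TestCase):
    def test_joins_first_and_last_name(self):
        # Runs entirely in memory, so the whole suite stays fast.
        self.assertEqual(full_name(" Ada ", "Lovelace"), "Ada Lovelace")

if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>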
When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code;<ref>{{cite book |title=Refactoring - Improving the design of existing code |last=Fowler |first=Martin |year=1999 |publisher=Addison Wesley Longman, Inc. |___location=Boston |isbn=0-201-48567-2 |url=https://archive.org/details/isbn_9780201485677 }}</ref> TDD therefore encourages the use of [[Test_double#Implementation_approaches|test double]]s to keep unit tests fast and isolated. Two steps are necessary, as sketched in the example after the list:
# Whenever external access is needed in the final design, an [[Interface (computer science)|interface]] should be defined that describes the access available. See the [[dependency inversion principle]] for a discussion of the benefits of doing this regardless of TDD.
# The interface should be implemented in two ways, one of which really accesses the external process, and the other of which is a [[mock object|fake or mock]]. Fake objects need do little more than add a message such as "Person object saved" to a [[Tracing (software)|trace log]], against which a test [[Assertion (computing)|assertion]] can be run to verify correct behaviour. Mock objects differ in that they themselves contain [[Assertion (computing)|test assertions]] that can make the test fail, for example, if the person's name and other data are not as expected.
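The two steps above might be realised as in the following sketch, which uses hypothetical <code>PersonRepository</code> names and Python's abstract base classes; the fake does nothing more than record a trace message that a test assertion can check:

<syntaxhighlight lang="python">
import abc

class PersonRepository(abc.ABC):
    """Step 1: an interface describing the external access that is needed."""
    @abc.abstractmethod
    def save(self, person):
        ...

class DatabasePersonRepository(PersonRepository):
    """Step 2a: the real implementation, which accesses the actual database."""
    def save(self, person):
        raise NotImplementedError("real database access would go here")

class FakePersonRepository(PersonRepository):
    """Step 2b: a fake that only writes to a trace log for test assertions."""
    def __init__(self):
        self.trace_log = []

    def save(self, person):
        self.trace_log.append("Person object saved")

# In a unit test, the fake is injected instead of the real repository:
repo = FakePersonRepository()
repo.save("Alice")
assert "Person object saved" in repo.trace_log
</syntaxhighlight>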
Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so that error-handling routines can be developed and reliably tested. In a fault mode, a method may return an invalid, incomplete or [[Null character|null]] response, or may throw an [[Exception handling|exception]]. Fake services other than data stores may also be useful in TDD: A fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always return 1. Fake or mock implementations are examples of [[dependency injection]].
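As a rough sketch (again with hypothetical names), a fake set into a fault mode and a fake random number service might look like this:

<syntaxhighlight lang="python">
class FaultyPersonRepository:
    """A fake switched into a predefined fault mode: every save fails."""
    def save(self, person):
        raise ConnectionError("simulated database outage")

class AlwaysOneRandomService:
    """A fake random number service that always returns 1."""
    def next_int(self):
        return 1

# Error-handling code exercised against the faulty fake fails the same way
# every time, so the handling path can be tested deterministically, e.g.:
# service = PersonService(FaultyPersonRepository())   # hypothetical caller
# assert service.register("Alice") is False
</syntaxhighlight>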
A test double is a test-specific capability that substitutes for a system capability, typically a class or function, that the unit under test (UUT) depends on. Test doubles can be introduced into a system at two points: link time and execution time. Link-time substitution compiles the test double into the load module that is then executed during testing; this approach is typically used when running in an environment other than the target environment, where doubles for hardware-level code are required for compilation. The alternative is run-time substitution, in which the real functionality is replaced during the execution of a test case, typically by reassigning known function pointers or replacing objects.
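Run-time substitution can be sketched with Python's standard <code>unittest.mock.patch.object</code>, which replaces an attribute of a real class with a double for the duration of a test and restores it afterwards; the <code>PaymentGateway</code> class here is hypothetical:

<syntaxhighlight lang="python">
import unittest
from unittest.mock import patch

class PaymentGateway:
    """Hypothetical class whose real implementation calls an external service."""
    def charge(self, amount):
        raise RuntimeError("would contact the real payment service")

class CheckoutTest(unittest.TestCase):
    def test_charge_is_replaced_at_run_time(self):
        gateway = PaymentGateway()
        # Run-time substitution: the real method is replaced by a double
        # for the duration of the test and restored automatically afterwards.
        with patch.object(PaymentGateway, "charge", return_value=True):
            self.assertTrue(gateway.charge(42))

if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>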
Test doubles come in several types of varying complexity; a short sketch after the list illustrates a stub and a spy:
* [[Dummy code|Dummy]] – A dummy is the simplest form of a test double. It facilitates link-time substitution by providing a default return value where required.
* [[Method stub|Stub]] – A stub adds simplistic logic to a dummy, providing different outputs.
* Spy – A spy captures and makes available parameter and state information, publishing accessors that give test code access to private information and allow more advanced state validation.
* [[Mock object|Mock]] – A mock is specified by an individual test case to validate test-specific behavior, checking parameter values and call sequencing.
* Simulator – A simulator is a comprehensive component providing a higher-fidelity approximation of the target capability (the thing being doubled). A simulator typically requires significant additional development effort.<ref name="Pathfinder Solutions" />
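As a rough illustration of two of these kinds, the following sketch (with hypothetical names) shows a stub that returns canned readings and a spy that records how it was called:

<syntaxhighlight lang="python">
class TemperatureSensorStub:
    """Stub: returns predetermined values instead of reading real hardware."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self):
        return next(self._readings)

class AlarmSpy:
    """Spy: records the calls it receives so a test can inspect them later."""
    def __init__(self):
        self.triggered_with = []

    def trigger(self, message):
        self.triggered_with.append(message)

# A test might wire both doubles into a hypothetical unit under test:
# monitor = OverheatMonitor(TemperatureSensorStub([20, 95]), AlarmSpy())
</syntaxhighlight>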
A corollary of such dependency injection is that the actual database or other external-access code is never tested by the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven code with the "real" implementations of the interfaces discussed above. These are [[Integration testing|integration tests]] and are quite separate from the TDD unit tests. There are fewer of them, and they must be run less often than the unit tests. They can nonetheless be implemented using the same testing framework.
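One way to keep such integration tests in the same framework while running them less often is to guard them behind an explicit switch; the sketch below assumes Python's standard <code>unittest</code> module and a hypothetical environment variable:

<syntaxhighlight lang="python">
import os
import unittest

@unittest.skipUnless(os.environ.get("RUN_INTEGRATION_TESTS"),
                     "integration tests run separately from the unit test suite")
class DatabasePersonRepositoryIntegrationTest(unittest.TestCase):
    def test_save_and_reload_person(self):
        # Exercise the real repository implementation here, e.g. save a
        # person to the actual database and read it back, instead of
        # injecting a fake or mock.
        pass

if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>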
Integration tests that alter any [[persistent storage|persistent store]] or database should be designed carefully with consideration for the initial and final state of the files or database, even when a test fails. This is often achieved using some combination of the following techniques (a sketch combining several of them follows the list):
* The <code>TearDown</code> method, which is integral to many test frameworks.
* <code>try...catch...finally</code> [[exception handling]] structures where available.
* [[Database transactions]], where a transaction [[atomicity (database systems)|atomically]] wraps, for example, a write, a read and a matching delete operation.
* Taking a "snapshot" of the database before running any tests and rolling back to the snapshot after each test run. This may be automated using a framework such as [[Apache Ant|Ant]] or [[NAnt]] or a [[continuous integration]] system such as [[CruiseControl]].
* Initialising the database to a clean state ''before'' tests, rather than cleaning up ''after'' them. This may be preferable when cleaning up would delete the final state of the database and so make it difficult to diagnose a test failure.
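A minimal sketch combining several of these techniques, assuming Python's standard <code>unittest</code> and <code>sqlite3</code> modules, initialises a clean schema before each test and rolls back whatever the test wrote afterwards:

<syntaxhighlight lang="python">
import sqlite3
import unittest

class PersonStoreIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Initialise the database to a known clean state *before* the test.
        self.conn = sqlite3.connect(":memory:")  # a real file or server in practice
        self.conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")

    def tearDown(self):
        # Roll back anything the test wrote, then close the connection,
        # regardless of whether the test passed or failed.
        self.conn.rollback()
        self.conn.close()

    def test_insert_and_read_back(self):
        self.conn.execute("INSERT INTO person (name) VALUES (?)", ("Alice",))
        row = self.conn.execute("SELECT name FROM person").fetchone()
        self.assertEqual(row[0], "Alice")

if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>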
===Keep the unit small===