Test-driven development: Difference between revisions

== Test-driven development cycle ==
[[File:TDD Global Lifecycle.png|thumb|A graphical representation of the test-driven development lifecycle]]
The following sequence is based on the book ''Test-Driven Development by Example'',<ref name=Beck>{{cite book |last=Beck |first=Kent |title=Test-Driven Development by Example |publisher=Addison-Wesley |___location=Boston |date=2002-11-08 |isbn=978-0-321-14653-3}}</ref> as amended by Kent Beck's [https://tidyfirst.substack.com/p/canon-tdd Canon TDD] article:
 
;1. Create a list of tests for the new feature
:The initial step in TDD, given a system and a desired change in behavior, is to list all the expected variants in the new behavior. "There's the basic case & then what if this service times out & what if the key isn't in the database yet &…" The developer can discover these specifications by asking about [[use case]]s and [[user story|user stories]]. A key benefit of test-driven development is that it makes the developer focus on requirements ''before'' writing code. This is in contrast with the usual practice, where unit tests are only written ''after'' code.
;2. Add one test from the list
:Write an automated test that passes if the variant in the new behavior is met.
;3. Run all tests. The new test ''should fail'' for expected reasons
:This shows that new code is actually needed for the desired feature. It validates that the [[test harness]] is working correctly. It rules out the possibility that the new test is flawed and will always pass.
;4. Write the simplest code that passes the new test
:Inelegant or [[hard code]] is acceptable, as long as it passes the test. The code will be honed anyway in Step 6. No code should be added beyond the tested functionality.
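For example (a hypothetical sketch; the `is_leap_year` feature is invented for illustration), the simplest code that passes a test asserting that 2024 is a leap year can simply hard-code the answer:

```python
import unittest

# Deliberately naive "simplest thing that could possibly work":
# only the behavior the current test demands is implemented.
# Generalisation is deferred to the refactoring step.
def is_leap_year(year):
    return True  # hard-coded; acceptable until more tests force a real rule

class TestLeapYear(unittest.TestCase):
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLeapYear)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests pass:", result.wasSuccessful())
```

The point of the hard-coded return is discipline: writing a general algorithm now would mean writing code that no test yet demands.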
;5. All tests should now pass
:If any fail, the new code must be revised until they pass. This ensures the new code meets the [[Software requirements|test requirements]] and does not break existing features.
;6. Refactor as needed, using tests after each refactor to ensure that functionality is preserved
:Code is [[Code refactoring|refactored]] for [[Code readability|readability]] and maintainability. In particular, hard-coded test data should be removed. Running the test suite after each refactor helps ensure that no existing functionality is broken.
:*Examples of refactoring:
Line 38 ⟶ 40:
:** splitting methods into smaller pieces
:** re-arranging [[Inheritance (object-oriented programming)|inheritance hierarchies]]
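A sketch of this step (hypothetical example; the leap-year feature and helper name are invented): the hard-coded stub is replaced by the real rule, a small helper is extracted, and the suite is re-run to confirm behavior is preserved:

```python
import unittest

# Extracted helper: splitting logic into smaller named pieces is a
# typical refactoring move.
def _divisible(n, d):
    return n % d == 0

def is_leap_year(year):
    # Gregorian rule: every 4th year, except centuries not divisible by 400.
    return _divisible(year, 4) and (not _divisible(year, 100) or _divisible(year, 400))

class TestLeapYear(unittest.TestCase):
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

# Re-running the suite after the refactor guards against regressions.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLeapYear)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("still green:", result.wasSuccessful())
```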
;7. Add the next test on the list
:Repeat the process with each test on the list until all tests are implemented and passing.
;Repeat
:The cycle above is repeated for each new piece of functionality. Tests should be small and incremental, and commits made often. That way, if new code fails some tests, the programmer can simply [[undo]] or revert rather than [[debug]] excessively. When using [[Library (computing)|external libraries]], it is important not to write tests that are so small as to effectively test merely the library itself,<ref name=Newkirk /> unless there is some reason to believe that the library is buggy or not feature-rich enough to serve all the needs of the software under development.
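The repeated cycle can be sketched as follows (hypothetical example; the leap-year feature and its four test cases are invented for illustration). Each remaining item on the original test list becomes one small new test, and the whole suite is re-run after every change so that a failure can be reverted immediately rather than debugged:

```python
import unittest

# Final state after several iterations of the cycle: one test per
# item on the original test list, all passing.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # Each test was added in its own red/green/refactor iteration.
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_not_divisible_by_4_is_not_leap(self):
        self.assertFalse(is_leap_year(2023))

    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_century_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLeapYear)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "green:", result.wasSuccessful())
```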