Fundamentals of Testing

  • Effect of an Error
    • Error --> Defect --> Failure (a person makes an error, which introduces a defect into the code; when that defect is executed it may cause a failure)

Consequences of Software Failure

  • Incorrect software can harm
    • People - aircraft crash, hospital life support system
    • Companies - incorrect billing, loss of money
    • Environment - releasing chemicals or radiation
  • It can lead to
    • Loss of money
    • Loss of time
    • Loss of business reputation
    • Injury / Death

Keeping Software Under Control

  • Exhaustive Testing is not possible
    • What we test and how much we test depends on the risk (greater risk = more testing)
  • Resources Triangle: Time, Quality, Money (testing has to be balanced against these three constraints)
  • Tests are sorted by their importance
    • The ones that test the most important functionality of the system
    • So that at any point you know the most important tests have already been run

What Testing Is and Does

  • Debugging is the process of finding the cause of bugs or defects in the code and correcting them (done by developers)
    • Doesn't check that other areas haven't been affected
  • Testing is an exploration of a component with the aim of finding and reporting defects
    • Does not include correcting the defects
    • Does check whether other behaviours of the system have been affected
  • Static Testing - Where the code is not exercised
    • Try to find errors as early as possible (cheaper than fixing defects or failures)
    • Removing ambiguities and errors from specification
  • Dynamic Testing - Exercises the program using some data
    • Test execution
  • The aim of testing is to find defects
    • A test that doesn't find defects is consuming resources but not adding anything

General Testing Principles

  • The seven general testing principles are:
    • Testing shows presence of defects
    • Exhaustive testing is impossible
    • Early testing
    • Defect clustering
    • Pesticide Paradox
    • Testing is context dependent
    • Absence of errors fallacy
  • Exhaustive Testing - a test approach in which all possible data combinations are used, including implicit data combinations present in the state of the software/data at the start of testing
  • Early Testing means starting test activities as early as possible, so that testing is not the activity that gets compromised when time runs short later
  • As soon as Work-Products are created throughout the SDLC, testing can be performed
  • Relative cost of fixing an error at the stage where it is found:
    • Requirements ($1)
    • Coding ($10)
    • Program Testing ($100)
    • UAT ($1,000)
    • Live Running ($10,000)

Defect Clustering

  • Defects tend to cluster in a small number of areas of the software
  • Reasons for this include:
    • System complexity
    • Volatile Code
    • Effect of change upon change
    • Development staff experience
    • Development staff inexperience
  • Pareto Principle - ~80% of problems are found in about 20% of the modules
  • Testing still has to cover all areas of the software, not just the defect clusters

Pesticide Paradox

  • Running the same test cases over and over again will not find new defects
  • Test cases should be regularly changed and new tests need to be added

Testing is Context Dependent

  • Testing depends on the functionalities of the system
  • Risk is a large factor for determining the type of testing needed

Absence of Errors Fallacy

  • Software with no known errors is not necessarily ready to be shipped - it may still fail to meet the users' needs

Fundamental Test Process

  • The steps in testing are
    • Test Planning and Control
    • Test Analysis and Design
    • Test Implementation and Execution
    • Evaluating Exit Criteria and Reporting
    • Test Closure Activities

Planning and Control

  • Determine what is going to be tested and how this will be achieved
  • Define test completion criteria to determine when testing is finished
  • Control is determining what to do when activities do not match up with plans
  • Performed throughout the 5 testing phases

Analysis and Design

  • Fine detail of what to test (test conditions)
  • How to combine test conditions into test cases so that a small number of test cases can cover many conditions
  • Bridge stage between planning and executing tests
  • Key parts include
    • Reviewing requirements, architecture design, interface specs
    • Analyse test items and determine test data required
    • Designing the tests incl priorities

Implementation and Execution

  • Running the actual tests (incl. set up / tear down activities)
  • Comparison between expected and actual outcomes are logged (any discrepancies need to be investigated)
  • Test incidents need to be raised
  • Retesting - tests need to be re-run to make sure the problem has been fixed
  • Regression testing - to make sure the fix didn't cause problems elsewhere
  • Key parts include
    • Developing and prioritising test cases, creating test data
    • Collecting test cases into test suites
    • Checking the test environment is set up correctly
    • Keeping a log of testing activities
    • Comparing actual results with expected results

Evaluating Exit Criteria and Reporting

  • Checking whether the previously determined exit criteria have been met
  • Determining if more tests are needed or if the exit criteria need amendments
  • Writing up the result of the testing activities for business sponsors and other stakeholders

Test Closure Activities

  • Ensuring that documentation is in order (what has been delivered is defined)
  • Closing down test infrastructure and testware
  • Passing over testware to maintenance team
  • Writing down lessons learned for the future

The Philosophy of Testing

  • Testing is generally more efficient if it is not done by the individual(s) who wrote the code
  • While developers' aim is to make something that works, testers' aim is to 'break it'
  • Testing should be undertaken by (in ascending order of preference):
    • Those who wrote the code
    • Members of the same development team
    • Members of a different group (test team)
    • Members of a different company (testing consultancy / outsourcing)
  • Communication is key to testing; it needs to be objective and impersonal:
    • Keep the focus on delivering a quality product (not work 'against' developers)
    • Address the product, not the person
    • Understand how others feel
    • Confirm both parties have understood and been understood

Code Of Ethics

Applies to the following areas:

  • Public - consider the wider public interest in their actions
  • Client and Employer - act in the best interest of their client/employer
  • Product - ensure that the product meets the highest professional standards possible
  • Judgment - maintain integrity and independence in their professional judgment
  • Management - subscribe to and promote an ethical approach to the management of software testing
  • Profession - advance the integrity and reputation of the profession consistent with the public interest
  • Colleagues - be fair to and supportive of their colleagues, and promote cooperation with developers
  • Self - participate in lifelong learning regarding the practice of their profession, and promote an ethical approach to the practice of the profession

Life Cycles

Software Development Models

  • In the waterfall model, testing is carried out once the code has been fully developed
  • In this case the testing acts as a quality check to accept or reject the overall product
  • Rejecting the product at this stage will mean a huge amount of resources and finances were wasted
  • Checks are made throughout the life cycle:
    • Verification - checks that the work-product meets the requirements set out for it (helps ensure the product is being built in the right way)
    • Validation - checks the product against user needs (helps ensure that we are building the right product)

V-Model for Software Development

  • Requirement Specification → Acceptance Test Planning → Acceptance Testing
  • Functional Specification → System Test Planning → System Testing
  • Technical Specification → Integration Test Planning → Integration Testing
  • Program Specification → Unit Test Planning → Unit Testing
  • Coding (at the base of the V)


  • Requirement Specification - captures user needs
  • Functional Specification - defines functions required to meet user needs
  • Technical Specification - Technical design of functions identified in Functional Specification
  • Program Specification - Detailed design of each module or unit to be built to meet required functionality


  • The specifications can be reviewed to check:
    • That it is testable - there is sufficient detail provided
    • That there is sufficient detail for the subsequent work-products
    • Conformance to the previous work-product

Iterative Development Model

  • Each iteration follows: Requirements → Design → Code → Test
  • Also known as 'Cyclical' development
  • User is involved in the testing --> Reduces the chance of delivering a product that does not satisfy the user
  • However, this approach can cause problems:
    • Lack of documentation makes it more difficult to test (can be countered with TDD)
    • Changes may not be formally recorded
    • Requires a lot of regression testing as each work product is created after the other

Test Levels

  • Characteristics of good testing across the SDLC include:
    • Early Test Design - start with the documents
    • Each Work-Product is tested - In the V-Model each document on the left is tested by an activity on the right
    • Testers are involved in reviewing the requirements before they are released - Testers should be invited to review the documents from a testing perspective

Unit Testing

  • Tests that the code written for a unit meets its specification, before that unit is integrated with other units
  • Also tests that all the code written can be executed
  • Test bases for unit testing include the component requirements, the detailed design, and the code itself
  • Performed by the developer who wrote the code (and specification)
  • e.g. TDD (a minimal sketch follows below)
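
A minimal sketch of a unit test, assuming a hypothetical discount() function developed test-first; the function name and business rule are illustrative, not taken from any specification above:

    import unittest

    def discount(order_total):
        # Hypothetical unit under test: 10% off orders of 100 or more.
        if order_total < 0:
            raise ValueError("order total cannot be negative")
        return order_total * 0.9 if order_total >= 100 else order_total

    class DiscountTest(unittest.TestCase):
        # Each test checks the unit against its (assumed) specification in isolation.
        def test_no_discount_below_threshold(self):
            self.assertEqual(discount(99), 99)

        def test_discount_applied_at_threshold(self):
            self.assertAlmostEqual(discount(100), 90.0)

        def test_negative_total_rejected(self):
            with self.assertRaises(ValueError):
                discount(-1)

    if __name__ == "__main__":
        unittest.main()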

Integration Testing

  • To expose defects in the interactions between integrated components or systems (once the units are put together)
  • Test bases for integration testing include the software and system design, diagrams of the system architecture, use cases, and workflows
  • There are 3 ways in which the system can be put together:
    • Big Bang Integration - all units are linked at once completing the system, generally regarded as poor practice
    • Top-Down Integration - system is built in stages, starting with components that call other components; stubs (like mock objects) are commonly used (see the sketch after this list)
    • Bottom-Up Integration - system is built in stages, starting with components that are called upon, requires drivers
  • There could be more than one level of integration testing
    • Component Integration Testing - focuses on interactions between software components, done after unit testing (by developers)
    • System Integration Testing - focuses on interactions between different systems, done after system testing (by testers)
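
A minimal sketch of top-down integration using a stub, with hypothetical component names: an invoice component that calls a not-yet-integrated price service is tested against a stub returning fixed values:

    # Higher-level component under test: depends on a lower-level price service.
    def build_invoice(item_names, price_service):
        return {name: price_service.get_price(name) for name in item_names}

    # Stub standing in for the real (not yet integrated) price service.
    class PriceServiceStub:
        def get_price(self, name):
            return 10.0  # fixed, predictable answer instead of real behaviour

    def test_invoice_asks_price_service_for_each_item():
        invoice = build_invoice(["apple", "pear"], PriceServiceStub())
        assert invoice == {"apple": 10.0, "pear": 10.0}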

System Testing

  • Tests the functionality from an end-to-end perspective
  • Functional Testing - tests the functions of a system
  • Non-Functional Testing - more generic requirements, examples include:
    • Installability - installation procedures
    • Maintainability - ability to introduce changes to the system
    • Performance - expected normal behaviour
    • Load Handling - behaviour of the system under increasing load
    • Stress Handling - behaviour at the upper limit of system capability
    • Portability - use on different operating platforms
    • Recovery - recovery procedures on failure
    • Reliability - ability of the software to perform its required functions over time
    • Usability - ease with which users can engage with the system

Acceptance Testing

  • allow the end user to test the system
  • based on user requirements
  • UAT
    • Testing by user representatives to check that the system meets their business needs
    • Usually done before it's moved to the user's site and then again at their site (site acceptance testing)
  • Operational Acceptance Testing
    • involves checking that the processes and procedures are in place to allow the system to be used and maintained
    • e.g. back-up facilities, disaster recovery, maintenance procedures, security procedures
  • Contract and Regulation Acceptance Testing
    • Testing against acceptance criteria set out in the contract, which must be met before the system is accepted
    • In some industries systems need to meet governmental, safety or legal standards (banks, defence, pharmaceutical)
  • Alpha and Beta Testing
    • Alpha takes place at developer's site, by internal staff before release to customer (still independent of development team)
    • Beta is at customer's site, by group of customers who provide feedback before system is released (aka field testing)

Test Types

Functional Testing

  • Looks at specific functionality of a system
  • Also includes interoperability testing - evaluates the capability of the system to interact with other specified components
  • Performed against the specification

Non-Functional Testing

  • Behavioural aspects of the system are tested (usability, portability, performance against load/stress)
  • Performed against a quality model such as ISO/IEC 9126 (Software Engineering - Product Quality)

Structural Testing

  • Measures how much testing has been carried out (in functional testing, this could be how many functional requirements are tested)
  • Measures how much of the actual code has been exercised by the tests
  • Can be carried out at any test level

Retesting and Regression Testing

  • After defects are fixed, retesting needs to be done to test that the defect has been fixed
  • Regression Testing is also required to see if it had an impact on other areas of the system
    • This should also be carried out if the environment changes

Maintenance Testing

  • If changes are made after the system is running live, regression testing needs to be done
  • Reasons for change may be:
    • Additional features being required
    • The system being migrated to a new operating platform
    • The system being retired - data may need to be migrated or archived
    • New faults being found requiring fixing (could be hotfixes)

Static Testing

  • Review is used to remove errors and ambiguities from the documents
  • Static Analysis is aimed at analysing code for structural defects or systematic programming weaknesses that may lead to defects

Reviews

  • Reviews help finding defects early in the SDLC, which can save cost and time as well as:
    • Development productivity can be improved - if defects are found early, there will be fewer errors to find and fix during test execution
    • Testing time and cost can be reduced as the tester won't need to wait for the defects to be fixed by the developer later on
    • Improved communication as authors and their peers discuss and refine ambiguous content
  • Most commonly found defects in reviews are
    • Deviations from standards (either internally defined or regulatory/legally defined)
    • Requirement defects - ambiguous requirements or missing elements
    • Design defects - e.g. design does not match the requirements
    • Insufficient maintainability - e.g. the code is too complex to maintain
    • Incorrect interface specifications - e.g. interface specification does not match the design or the receiving / sending interface
  • Review Objectives can usually be:
    • Finding defects
    • Gaining understanding
    • Generating discussion
    • Decision making by consensus
  • All reviews exhibit the following process:
    • The document under review is studied
    • Reviewers identify issues or problems and inform the author (annotating document or writing a review report)
    • The author decides on actions to take and updates the documents

Formal Review

  • These can be technical reviews or inspections
  • Consist of 11 steps:
    • Planning - selecting personnel, allocating roles, defining entry/exit criteria, selecting parts of document to be reviewed
    • Kick-Off - distributing documents, explaining objectives to participants, checking entry criteria
    • Checking Entry Criteria - check whether the entry criteria from Kick-Off have been met
    • Individual Preparation - work done by each participants individually before the review meeting, reading source documents, noting defects, questions, comments
    • Noting Incidents - the questions, defects, and comments found during Individual Preparation are logged
    • Review Meeting - discussions/logging of defects, depending on formality of review (inspection never has discussion)
    • Examine - recording of the physical meeting
    • Rework - the process of the author correcting the defects
    • Fixing defects - the author fixing the defects
    • Follow-Up - check that the agreed defects have been addressed
    • Checking Exit Criteria - exit criteria defined in Planning are checked to ensure they have been met

Roles and Responsibility

The following are the key roles and responsibilities of a review process:

  • Manager - decides on what is to be reviewed, ensures there is sufficient time, determine if review objectives have been met
  • Moderator (review leader) - leads the review of the document(s) including planning the review, running the meeting, and follow ups afterwards, makes final decision on whether to release review
  • Author - writer of the documents, also fixes the defects
  • Reviewers (checkers / inspectors) - individuals with specific business/technical background, identify and describe findings (defects), should be chosen to represent different perspectives
  • Scribe (recorder) - attends the review meetings, documents all issues and defects
  • Testers - analyse document to enable the development of tests

Types of Review

The following are the different types of reviews (from low formality to high formality) and their key characteristics:

  • Informal
    • no formal process
    • rarely documented
    • main purpose is to find defects
    • may be implemented by 'pair programming' (where one programmer reviews the code of the other 'pair programmer')
  • Walkthrough
    • meeting is led by the author
    • review sessions are open-ended
    • preparation before review meeting, review reports, and list of findings are optional
    • main purpose is to enable learning about the document contents
    • typically explore scenarios or dry run of code
  • Technical Review
    • always documented and use a well-defined defect detection process
    • led by trained moderator and performed as a peer review without management participation
    • reviewers prepare for the meeting and review report is produced with list of findings
    • main purpose can be discussion, decision making, evaluation of alternatives, finding defects, solving technical problems
  • Inspection
    • led by trained moderator (who is not the author)
    • very formal, based on rules and checklists, uses entry/exit criteria
    • pre-meeting preparation essential
    • inspection report with list of findings is produced
    • formal follow-up process is used after meeting
    • main purpose is to find defects, and process improvement

Success Factors For Reviews

  • Each review should have a clearly predefined and agreed objective, and involve the right people
  • Any defects found are welcomed and expressed objectively
  • Reviews should be conducted within an atmosphere of trust
  • Review techniques that are suitable to type and level of software work-products
  • Checklists or roles should be used to increase effectiveness
  • Management support is essential (e.g. having adequate time for the review process)
  • Emphasis on learning and process improvement
  • Quantitative measures of review success could include:
    • How many defects found
    • Time taken to review
    • Percentage of project budget used/saved

Static Analysis by Tools

  • look for defects once the code has been written (without executing the code)
  • most useful during integration and component testing
  • objective is to find defects in software models and source code
    • a software model can be something like a UML diagram


The value of static analysis is:

  • Early detection of defects prior to test execution
  • Early warning about suspicious aspects of the code or design (if code is too complex and more prone to error)
  • Identifying defects not easily found in dynamic testing (e.g. development standard breaches)
  • Improved maintainability of code and design (reduction in amount of maintenance after 'go-live')
  • Prevention of defects - it is easier to identify why a defect was introduced than it is during test execution

Typical defects discovered by static analysis include (a small illustrative fragment follows this list):

  • Variable with undefined value (using a variable in calculation before it has been given a value)
  • Inconsistent interface between modules and components (module X requests 3 values from Y, which has only 2 outputs)
  • Variables that are never used
  • Unreachable (Dead) Code (lines of code that cannot be executed because of the code logic)
  • Programming standards violation (if standard is to add comments at the end, but there are notes throughout the code)
  • Security vulnerabilities (insecure password structures)
  • Syntax violations of code and software models (incorrect use of programming or UML)
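
A small illustrative fragment (hypothetical, and the exact messages depend on the tool) showing the kind of code a static analysis tool would flag:

    def total_price(quantity, unit_price):
        discount = 0.1                  # flagged: variable assigned but never used
        if quantity < 0:
            return 0
            quantity = 0                # flagged: unreachable (dead) code after return
        return quantity * unit_price * (1 + tax_rate)   # flagged: 'tax_rate' never defined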

Test Design Techniques

The Test Development Process

The design of tests comprises three main steps (a small worked example follows the list):

  • Identify Test Conditions
    • an item or event of a component or system that could be verified by one or more test cases
    • something that can be tested with a test case
    • e.g. function, transaction, feature, quality attribute, or structural element
  • Specify Test Cases
    • a set of input values, execution preconditions, expected results, and postconditions for a particular test condition / objective
  • Specify Test Procedures
    • a sequence of actions for the execution of a test
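
A small worked example of the three steps for a hypothetical login feature (the feature, data values, and wording are illustrative only):

    Test condition : the system rejects a login attempt with an incorrect password
    Test case      : precondition - user 'alice' exists; inputs - username 'alice',
                     password 'wrong'; expected result - access denied;
                     postcondition - no session is created
    Test procedure : 1. open the login page  2. enter the username and password
                     3. press Login  4. check the error message and that no session exists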

The Idea of Test Coverage

  • Provides a quantitative assessment of the extent and quality of testing
  • Provides a way of estimating how much more testing needs to be done, allows us to set targets and measure progress against these
  • Coverage measures may be part of the completion criteria defined in the test plan

Categories of Test Case Design Techniques

There are 3 different categories of test case design techniques:

  • Those based on deriving test cases directly from a specification
    • aka specification-based or black-box techniques
    • can be functional and non-functional
    • don't test the internal structure of the system
  • Those based on deriving test cases from the structure of a component / system
    • aka structure-based or white-box techniques
    • concentrate on the code and its internal structure
  • Those based on deriving test cases from tester's experience
    • aka experience-based techniques
    • experience may come from similar systems or from testing in general

Specification-Based (Black Box) Techniques

  • Defined as a documented procedure to derive and select test cases based on an analysis of the specification
  • can be functional or non-functional
  • for a component/system without reference to its internal structure
  • aka Specification based Testing

There are 5 main specification based testing techniques.

Equivalence Partitioning

  • Grouping an application's inputs into partitions (groups of inputs that are expected to be treated in the same way)
  • reduces the number of test cases needed
  • e.g. test values from different ranges: 0-25, 26-49, 50+ (see the sketch after this list)
  • each can have valid and invalid data
  • involves input partitioning and output partitioning (valid/invalid input and output values)
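
A minimal sketch, assuming a hypothetical price function built around the 0-25 / 26-49 / 50+ ranges above; one representative value is tested from each valid partition, plus one from an invalid partition:

    def ticket_price(age):
        # Hypothetical unit under test, using the 0-25 / 26-49 / 50+ partitions above.
        if age < 0:
            raise ValueError("invalid age")
        if age <= 25:
            return 5
        if age <= 49:
            return 10
        return 7

    def test_valid_partitions():
        assert ticket_price(12) == 5     # representative of partition 0-25
        assert ticket_price(30) == 10    # representative of partition 26-49
        assert ticket_price(60) == 7     # representative of partition 50+

    def test_invalid_partition():
        try:
            ticket_price(-3)             # representative of the invalid (negative) partition
            assert False, "expected ValueError"
        except ValueError:
            pass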

Boundary Value Analysis

  • Related to equivalence partitioning
  • Looks at values on and around the boundaries of each partition
  • e.g. -1, 0, 1, 24, 25, 26 (see the sketch after this list)
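
A minimal sketch continuing the hypothetical ticket_price() example, exercising the values on and either side of the boundaries of the 0-25 partition:

    def ticket_price(age):
        # Same hypothetical function as in the equivalence partitioning sketch.
        if age < 0:
            raise ValueError("invalid age")
        return 5 if age <= 25 else (10 if age <= 49 else 7)

    def test_boundaries_of_first_partition():
        assert ticket_price(0) == 5      # lower boundary
        assert ticket_price(1) == 5      # just above the lower boundary
        assert ticket_price(24) == 5     # just below the upper boundary
        assert ticket_price(25) == 5     # upper boundary
        assert ticket_price(26) == 10    # just above the upper boundary (next partition)

    def test_just_below_lower_boundary():
        try:
            ticket_price(-1)             # just below the lower boundary: invalid
            assert False, "expected ValueError"
        except ValueError:
            pass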

Decision Table Testing

  • Used to test complex business rules
  • A list of all the input conditions that can occur
  • Lists all the actions that can arise from these inputs
  • Includes all possible combinations of inputs (a small sketch follows this list)
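
A small sketch of a decision table for a hypothetical loan approval rule (conditions, rules, and actions are illustrative only), with tests that cover each rule once:

    # Decision table (hypothetical):
    #   Conditions              Rule 1  Rule 2  Rule 3  Rule 4
    #   Good credit score?        Y       Y       N       N
    #   Income above limit?       Y       N       Y       N
    #   Action: approve loan?     Y       Y       Y       N
    def approve_loan(good_credit, high_income):
        return good_credit or high_income

    def test_each_rule_once():
        assert approve_loan(True, True) is True      # Rule 1
        assert approve_loan(True, False) is True     # Rule 2
        assert approve_loan(False, True) is True     # Rule 3
        assert approve_loan(False, False) is False   # Rule 4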

State Transition Testing

  • Tests the functionality of all possible states of the test object
  • e.g. ATM: card entered, waiting for pin, pin entered, waiting for validation, valid pin, waiting for transaction choice, transaction entered, ....
  • could be like the state of a TV changing from off to on when the power button is pressed
  • In a state transition diagram, a circle stands for a state and an arrow stands for a transition (a minimal sketch follows this list)
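
A minimal sketch using the TV example above, with the states and events kept deliberately small (names are illustrative):

    # Each valid (state, event) pair maps to the next state.
    TRANSITIONS = {
        ("off", "power"): "on",
        ("on", "power"): "off",
    }

    def next_state(state, event):
        return TRANSITIONS[(state, event)]

    def test_valid_transitions():
        assert next_state("off", "power") == "on"
        assert next_state("on", "power") == "off"

    def test_invalid_transition_is_rejected():
        try:
            next_state("off", "volume_up")   # no such transition from 'off'
            assert False, "expected KeyError"
        except KeyError:
            pass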

Use Case Testing

  • Describes interactions between users and the system
  • Generates acceptance tests
  • Describes the process flow through the system based on its likely use (a small sketch follows this list)
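
A small sketch of a use case and a test derived from it, reusing the ATM example from the state transition section (the actor, steps, and extension are illustrative):

    Use case: Withdraw cash
      Actor     : account holder
      Main flow : 1. insert card  2. enter PIN  3. choose 'withdraw'
                  4. enter amount  5. take cash, card, and receipt
      Extension : 2a. wrong PIN entered three times -> card is retained
    Derived acceptance test: walk the main flow end-to-end with a valid card, PIN,
    and amount, then check the account balance is reduced by that amount.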

White Box Testing

  • Documented procedure to derive and select test cases based on analysing the internal structure of a component/system
  • aka Structure-Based Testing

Statement Coverage

  • Tests aim to execute every statement in the program at least once
  • Classed as 'covering all the boxes' when the code is drawn as a flow chart (a minimal sketch follows this list)
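
A minimal sketch with a hypothetical function: a single test with a negative input executes every statement, giving 100% statement coverage on its own:

    def clamp_to_zero(x):
        result = x          # statement 1
        if x < 0:           # statement 2 (the decision)
            result = 0      # statement 3
        return result       # statement 4

    def test_negative_input():
        # x = -5 executes statements 1, 2, 3, and 4, so this one test case
        # achieves 100% statement coverage.
        assert clamp_to_zero(-5) == 0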

Decision Coverage

  • Tests that each decision (branch) in the code is exercised with both a true and a false outcome
  • Stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not the other way round (a minimal sketch follows below)
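
A minimal sketch reusing the hypothetical clamp_to_zero() function from the statement coverage sketch: the single negative-input test gives 100% statement coverage but only 50% decision coverage, because the decision never takes its false outcome; a second test with a positive input is needed for 100% decision coverage:

    def clamp_to_zero(x):
        result = x
        if x < 0:            # the decision: both True and False outcomes must be exercised
            result = 0
        return result

    def test_decision_true_outcome():
        assert clamp_to_zero(-5) == 0    # decision evaluates to True

    def test_decision_false_outcome():
        assert clamp_to_zero(7) == 7     # decision evaluates to False
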
Other Techniques

Condition Testing

Multi Condition Testing

Modified Condition Coverage