Sometimes, UAT is performed by the customer, in their environment and on their own hardware.
OAT is used to conduct operational readiness (pre-release) of a product, service or system as part of a [[quality management system]]. OAT is a common type of non-functional software testing, used mainly in [[software development]] and [[software maintenance]] projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or [[operations readiness and assurance]] (OR&A) testing. [[Functional testing]] within OAT is limited to those tests that are required to verify the ''non-functional'' aspects of the system.
In addition, software testing should ensure that the system is portable, works as expected, and does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.<ref>{{Cite web |last=Woods |first=Anthony J. |date=June 5, 2015 |title=Operational Acceptance – an application of the ISO 29119 Software Testing standard |url=https://www.scribd.com/document/257086897/Operational-Acceptance-Test-White-Paper-2015-Capgemini |access-date=January 9, 2018 |publisher=Capgemini Australia |type=Whitepaper}}</ref>
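As a minimal sketch of such an operational check (the `run_app` entry point below is hypothetical, standing in for the actual system under test), a test might snapshot the operating environment before and after a run and assert that nothing was damaged:

```python
import os
import tempfile

def snapshot(path):
    """Record the files under a directory and their sizes."""
    return {
        name: os.path.getsize(os.path.join(path, name))
        for name in sorted(os.listdir(path))
    }

def run_app(workdir):
    """Hypothetical system under test; a real operational acceptance
    run would invoke the actual product here."""
    pass  # the well-behaved case: touches nothing in workdir

def test_environment_unchanged():
    workdir = tempfile.mkdtemp()
    before = snapshot(workdir)
    run_app(workdir)
    after = snapshot(workdir)
    assert before == after, "system modified its operating environment"

test_environment_unchanged()
print("environment check passed")
```

A real OAT suite would extend the snapshot to registry keys, services, or shared libraries, but the pattern of comparing environment state before and after execution is the same.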
* Ability to operate the system using the keyboard in addition to the mouse.
==== Common standards for compliance ====
* [[Americans with Disabilities Act of 1990]]
* [[Section 508 Amendment to the Rehabilitation Act of 1973]]
{{Main|Development testing}}
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, development testing might include [[static code analysis]], data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, [[Requirements traceability|traceability]], and other software testing practices.
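One of the practices listed above, unit testing, can be sketched with Python's standard `unittest` module (the `divide` function is an illustrative example, not something from the source): the developer writes tests alongside the code so that construction errors are caught before the code is promoted to later testing stages.

```python
import unittest

def divide(a, b):
    """Function under development test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTest(unittest.TestCase):
    """Unit tests written by the developer during construction."""

    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_is_rejected(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DivideTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed" if result.wasSuccessful() else "failures found")
```

Code coverage tools can then report which lines of `divide` these tests actually exercise, tying several of the practices above together.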
=== A/B testing ===
The sample below is common for waterfall development. The same activities are commonly found in other development models, but might be described differently.
* [[Requirements analysis]]: testing should begin in the requirements phase of the [[software development life cycle]]. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
* Test planning: [[test strategy]], [[test plan]], [[testbed]] creation. Since many activities will be carried out during testing, a plan is needed.
* Test development: test procedures, [[Scenario test|test scenarios]], [[test case]]s, test datasets, test scripts to use in testing software.
* Test execution: testers execute the software based on the plans and test documents, then report any errors found to the development team. This part can be complex for testers who lack programming knowledge.
* Test reporting: once testing is completed, testers generate metrics and make final reports on their [[test effort]] and whether or not the software tested is ready for release.
* Test result analysis, or ''defect analysis'': done by the development team, usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e., the software is found to be working properly) or deferred to be dealt with later.
* Defect retesting: once a defect has been dealt with by the development team, it is retested by the testing team.
* [[Regression testing]]: it is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly.
* Test closure: once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.
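The regression testing step above can be sketched as a small smoke suite: a subset of the full tests, re-run after every new, modified, or fixed delivery. The test names below are illustrative placeholders, not from the source.

```python
# Hypothetical critical-path checks; in practice each would exercise
# a real scenario and return True only if it still works.
def test_login():
    return True

def test_checkout():
    return True

def test_search():
    return True

# The regression subset: a small program built from a handful of
# existing tests, re-run on every integration.
REGRESSION_SUITE = [test_login, test_checkout, test_search]

def run_regression(suite):
    """Return the names of failing tests; an empty list means the
    latest delivery has not broken the covered functionality."""
    return [t.__name__ for t in suite if not t()]

failures = run_regression(REGRESSION_SUITE)
print("regression clean" if not failures else f"failed: {failures}")
```

Keeping this subset small is a deliberate trade-off: it runs quickly enough to execute on every delivery, while the full suite is reserved for less frequent, deeper passes.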
== Quality ==