{{Short description|Process in software development}}
In [[software development]], '''effort estimation''' is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain [[software]] based on incomplete, uncertain and noisy input. Effort [[estimation|estimates]] may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds.<ref>{{cite web | url=http://www.infoq.com/articles/software-development-effort-estimation | title=What We do and Don't Know about Software Development Effort Estimation}}</ref><ref>{{cite web|title=Cost Estimating And Assessment Guide GAO-09-3SP Best Practices for developing and managing Capital Program Costs|date=2009|publisher=US Government Accountability Office|url=https://www.gao.gov/new.items/d093sp.pdf }}</ref>
| s2cid = 15471986
}}</ref> However, the measurement of estimation error is problematic; see [[#Assessing the accuracy of estimates|Assessing the accuracy of estimates]].
The strong overconfidence in the accuracy of the effort estimates is illustrated by the finding that, on average, if a software professional is 90% confident or "almost sure" that a minimum–maximum interval will include the actual effort, the observed frequency of the actual effort falling inside the interval is only 60–70%.<ref>{{cite journal
| author = Jørgensen, M., Teigen, K.H., Ribu, K.
| title = Better sure than safe? Over-confidence in judgement based software development effort prediction intervals
| pages=79–93}}</ref>
Currently the term "effort estimate" is used to denote concepts as different as the most likely use of effort (the modal value), the effort that corresponds to a 50% probability of not being exceeded (the median), the planned effort, the budgeted effort, and the effort used to propose a bid or price to the client.
==History==
}}</ref> and Nelson.<ref>Nelson, E. A. (1966). Management Handbook for the Estimation of Computer Programming Costs. AD-A648750, Systems Development Corp.</ref>
Most of the research has focused on the construction of formal software effort estimation models. The early models were typically based on [[regression analysis]] or mathematically derived from theories from other domains. Since then, a large number of model-building approaches have been evaluated, such as approaches founded on [[case-based reasoning]], classification and [[regression trees]], [[simulation]], [[neural networks]], [[Bayesian statistics]], [[lexical analysis]] of requirement specifications, [[genetic programming]], [[linear programming]], economic production models, [[soft computing]], [[fuzzy logic]] modeling, statistical [[bootstrapping]], and combinations of two or more of these models. Perhaps the most common estimation methods today are the parametric estimation models [[COCOMO]], [[SEER-SEM]] and SLIM. They have their basis in estimation research conducted in the 1970s and 1980s and have since been updated with new calibration data, with the last major release being COCOMO II in the year 2000. Estimation approaches based on functionality-based size measures, e.g., [[function points]], are also rooted in research conducted in the 1970s and 1980s, but have been re-calibrated with modified size measures and different counting approaches, such as the [[Use Case Points|use case points]]<ref>{{cite book
| author = Anda, B., Angelvik, E., Ribu, K.
| chapter = Improving Estimation Practices by Applying Use Case Models
| doi = 10.1007/3-540-36209-6_32
| year = 2002
| journal = Lecture Notes in Computer Science
| volume = 2559
| pages=383–397
| isbn = 978-3-540-00234-5
| citeseerx = 10.1.1.546.112
}}</ref> or [[object point]]s and [[COSMIC_functional_size_measurement|COSMIC Function Points]] in the 1990s.
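Parametric models of this family all reduce to a power-law relationship between estimated size and effort. As a minimal illustrative sketch (not drawn from this article's sources), the Basic COCOMO model of Boehm (1981) computes effort in person-months from size in thousands of delivered lines of code:

```python
# Basic COCOMO (Boehm, 1981): effort = a * KLOC^b person-months.
# (a, b) coefficients for the three classical project modes.
COEFFICIENTS = {
    "organic":      (2.4, 1.05),  # small teams, familiar problem domain
    "semidetached": (3.0, 1.12),  # intermediate
    "embedded":     (3.6, 1.20),  # tight hardware/operational constraints
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimated effort in person-months for a project of `kloc` KLOC."""
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

print(round(cocomo_effort(50, "organic"), 1))  # ≈ 145.9 person-months
```

Later models such as COCOMO II add cost drivers and scale factors to this basic shape; the coefficients above are the original 1981 values.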
==Estimation approaches==
There are many ways of categorizing estimation approaches; see, for example, Briand and Wieczorek<ref>Briand, L. C. and Wieczorek, I. (2002). "Resource estimation in software engineering". ''Encyclopedia of software engineering''. J. J. Marcinak. New York, John Wiley & Sons.</ref> and Jørgensen and Shepperd.<ref>{{cite web
| author = Jørgensen, M., Shepperd, M.
| title = A Systematic Review of Software Development Cost Estimation Studies
| url = http://simula.no/research/engineering/publications/Jorgensen.2007.1 }}</ref> The top-level categories are the following:
* Expert estimation: The quantification step, i.e., the step where the estimate is produced, is based on judgmental processes.<ref>{{cite web | url=http://www.oxagile.com/services/custom-software-design-and-development/ | title=Custom Software Development Services}}</ref>
* Formal estimation model: The quantification step is based on mechanical processes, e.g., the use of a formula derived from historical data.
* Combination-based estimation: The quantification step is based on a judgmental and mechanical combination of estimates from different sources.
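The difference between the categories lies entirely in how the quantification step is carried out. A minimal sketch of the mechanical part of a combination-based approach, assuming a simple unweighted average (the figures are illustrative, not from any cited study):

```python
# Combination-based estimation: mechanically combine effort estimates
# from independent sources, here by a simple unweighted average.
def combine_estimates(estimates):
    """Average a list of effort estimates (person-hours)."""
    return sum(estimates) / len(estimates)

expert_estimate = 320.0  # judgment-based (expert estimation)
model_estimate = 410.0   # mechanically derived (formal estimation model)

print(combine_estimates([expert_estimate, model_estimate]))  # 365.0
```

A judgmental combination would instead let an expert weigh the same inputs informally rather than apply a fixed formula.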
| [[COCOMO]], [[Putnam model|SLIM]], [[SEER-SEM]], [[TruePlanning for Software]]
|-
| Size-based estimation models<ref>Hill, Peter (ISBSG)</ref>
| Formal estimation model
| [[Function Point Analysis]],<ref>Morris Pam — Overview of Function Point Analysis [http://www.totalmetrics.com/function-point-resources/what-are-function-points Total Metrics - Function Point Resource Centre]</ref> [[Use Case]] Analysis, [[Use Case Points]], SSU (Software Size Unit), [[Story point]]s-based estimation in [[Agile software development]], [[Object point|Object Points]]
| Mechanical combination
| Combination-based estimation
| Average of an analogy-based and a [[Work breakdown structure]]-based effort estimate<ref>Srinivasa Gopal and Meenakshi D'Souza. 2012. Improving estimation accuracy by using case based reasoning and a combined estimation approach. In ''Proceedings of the 5th India Software Engineering Conference'' (ISEC '12). ACM, New York.</ref>
|-
| Judgmental combination
==Selection of estimation approaches==
The evidence on differences in estimation accuracy between estimation approaches and models suggests that there is no "best approach" and that the relative accuracy of one approach or model compared to another depends strongly on the context.<ref>{{cite journal
| author = Shepperd, M., Kadoda, G.
| doi = 10.1109/32.965341
| year = 2001
| bibcode = 2001ITSEn..27.1014S
| url = http://bura.brunel.ac.uk/handle/2438/1102 }}
</ref> This implies that different organizations benefit from different estimation approaches. Findings<ref name="Jørgensen, M">{{cite web
| url = http://simula.no/research/engineering/publications/Jorgensen.2007.2 }}</ref> that may support the selection of estimation approach based on the expected accuracy of an approach include:
* Expert estimation is on average at least as accurate as model-based effort estimation. In particular, situations with unstable relationships and information of high importance not included in the model may suggest use of expert estimation. This assumes, of course, that experts with relevant experience are available.
* Formal estimation models not tailored to a particular organization's own context may be very inaccurate. Use of the organization's own historical data is consequently crucial if one cannot be sure that the estimation model's core relationships (e.g., formula parameters) are based on similar project contexts.
* Formal estimation models may be particularly useful in situations where the model is tailored to the organization's context (through use of its own historical data or a model derived from similar projects and contexts), and it is likely that the experts' estimates would be subject to a strong degree of wishful thinking.
The most robust finding, in many forecasting domains, is that combining estimates from independent sources, preferably applying different approaches, will on average improve estimation accuracy.<ref name="Jørgensen, M"/><ref>{{cite journal
</ref>
<ref>{{cite web
| author = [[Barbara Kitchenham|Kitchenham, B.]], Pickard, L.M., MacDonell, S.G., Shepperd, M.
| title = What accuracy statistics really measure
| url = http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=IPSEFU000148000003000081000001&idtype=cvips&gifs=yes }}
</ref>
<ref>{{cite journal
| author = Foss, T., Stensrud, E., [[Barbara Kitchenham|Kitchenham, B.]], Myrtveit, I.
| title = A Simulation Study of the Model Evaluation Criterion MMRE
| journal = IEEE Transactions on Software Engineering
| url = http://portal.acm.org/citation.cfm?id=951936 | doi = 10.1109/TSE.2003.1245300
| year = 2003
| citeseerx = 10.1.1.101.5792
}}
</ref> and there are several alternative measures, such as more symmetric measures,<ref>{{cite journal
| volume = 145
| page = 29
| url = https://ieeexplore.ieee.org/document/689296 | archive-url = https://web.archive.org/web/20170920055746/http://ieeexplore.ieee.org/document/689296/ | url-status = dead | archive-date = September 20, 2017 | doi = 10.1049/ip-sen:19983370
| year = 1998
| doi-broken-date = 12 July 2025
}}</ref>
MRE is not reliable if the individual items are skewed. PRED(25) is therefore often preferred as a measure of estimation accuracy; it measures the percentage of predicted values that are within 25 percent of the actual value.
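Both measures are simple to compute. A minimal sketch with illustrative actual and estimated effort values (not from any cited dataset):

```python
# MRE, MMRE and PRED(25) for a set of completed projects.
def mre(actual, predicted):
    """Magnitude of relative error for a single project."""
    return abs(actual - predicted) / actual

def mmre(actuals, predictions):
    """Mean magnitude of relative error across all projects."""
    return sum(mre(a, p) for a, p in zip(actuals, predictions)) / len(actuals)

def pred(actuals, predictions, threshold=0.25):
    """PRED(25): share of estimates with MRE within `threshold` of actual."""
    hits = sum(mre(a, p) <= threshold for a, p in zip(actuals, predictions))
    return hits / len(actuals)

actuals = [100, 200, 400, 800]       # actual effort, person-hours
predictions = [90, 260, 380, 1000]   # corresponding estimates

print(mmre(actuals, predictions))    # mean relative error, here 0.175
print(pred(actuals, predictions))    # 0.75
```

Note how the single 30% overrun dominates MMRE, while PRED(25) simply counts it as one miss out of four; this difference is one reason the two measures can rank estimation models differently.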
==Psychological issues==
There are many psychological factors potentially explaining the strong tendency towards over-optimistic effort estimates. These factors are essential to consider even when using formal estimation models, because much of the input to these models is judgment-based. Factors that have been demonstrated to be important are wishful thinking, [[Anchoring (cognitive bias)|anchoring]], the [[planning fallacy]] and [[cognitive dissonance]].<ref>{{cite journal
| author = Jørgensen, M., Grimstad, S.
| title = How to Avoid Impact from Irrelevant and Misleading Information When Estimating Software Development Effort
| journal = IEEE Software
| date = 2008
| pages = 78–83
| url = https://www.simula.no/publications/avoiding-irrelevant-and-misleading-information-when-estimating-development-effort }}
</ref>
* It's easy to estimate what is known.
The chronic underestimation of development effort has led to the coinage and popularity of numerous humorous adages, such as ironically referring to a task as a "[[small matter of programming]]" (when much effort is likely required), and citing laws about underestimation:
* [[Ninety–ninety rule]]:
{{blockquote|text=The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.|author=[[Tom Cargill]], [[Bell Labs]]}}
* [[Hofstadter's law]]:
{{blockquote|text=Hofstadter's law: It always takes longer than you expect, even when you take into account Hofstadter's law.|author=[[Douglas Hofstadter]]<ref>
''Gödel, Escher, Bach: An Eternal Golden Braid''. 20th anniversary ed., 1999, p. 152. {{ISBN|0-465-02656-7}}.
</ref>
}}
* [[Brooks's law|Fred Brooks' law]]:
{{blockquote|text=Adding manpower to a late software project makes it later.|author=[[Fred Brooks]]}}