KPI-driven code analysis

{{Short description|Software code analysis guided by key performance indicators}}
{{unreferenced|date=January 2014}}
 
 
* [[Revision Control]], also known as [[version control]]. These systems track every change made by each developer over the entire life cycle of the software system, recording which developer changed what, and when. This data provides a basis for answering the question: “What effort or development cost has been invested in which areas of code?” (an illustrative sketch of such an analysis follows this list). Prominent revision control systems include [[Subversion]], [[Git (software)|Git]], [[Perforce]], [[Mercurial]], [[Synergy]], [[ClearCase]], …
 
* Software test systems. These report which parts of the source code have already been covered by tests. With this information it becomes apparent where gaps in testing remain, and possibly where such gaps were left intentionally (due to the significant cost and effort involved in setting up tests).
 
* [[Bug Tracking System]]s ([[Bug Tracker|bug trackers]]). The defect data they record can be used in combination with the information provided by the revision control system to help draw conclusions on the error rate of particular areas of code.
 
* [[Issue tracking system]]s. The information produced by these systems, in conjunction with the information from revision control, enables conclusions to be drawn regarding development activity related to specific technical requirements. In addition, precise data on time investment can be utilized for the analysis.
 
* Performance profilers ([[Profiling (computer programming)|profiling]]). The performance data they collect help to identify which areas of code consume the most CPU resources; a profiling sketch also follows this list.
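For illustration, the change history recorded by a revision control system can be aggregated per file to approximate where development effort has been invested. The following is a minimal sketch that assumes a Git repository and parses the output of <code>git log --numstat</code>; the sum of added and deleted lines per file is only a rough proxy for effort.

<syntaxhighlight lang="python">
import subprocess
from collections import Counter

def effort_per_file(repo_path="."):
    """Approximate development effort per file by summing the
    added and deleted lines over the entire Git history."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    effort = Counter()
    for line in log.splitlines():
        parts = line.split("\t")
        # numstat lines have the form "<added>\t<deleted>\t<path>";
        # binary files report "-" instead of line counts and are skipped.
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added, deleted, path = parts
            effort[path] += int(added) + int(deleted)
    return effort.most_common()

if __name__ == "__main__":
    for path, churn in effort_per_file()[:10]:
        print(f"{churn:8d}  {path}")
</syntaxhighlight>
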
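In the same spirit, profiler output can be summarized to show which functions consume the most CPU time. The sketch below uses Python's built-in <code>cProfile</code> and <code>pstats</code> modules; the function <code>main()</code> is a placeholder standing in for the application under analysis.

<syntaxhighlight lang="python">
import cProfile
import pstats

def main():
    # Placeholder workload representing the application under analysis.
    total = 0
    for i in range(10_000):
        total += sum(range(i % 100))
    return total

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Report the ten entries with the highest cumulative run time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
</syntaxhighlight>
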
 
** Huge, monolithic code units in which several concerns have been mixed together, so that changing one aspect requires modifications at several points in the code.
** Identification of unnecessary multi-threading. Multi-threading is a major source of errors: the run-time behavior of multi-threaded code is hard to comprehend, so the cost and effort required for its extension or maintenance are correspondingly high. As a general rule, unnecessary multi-threading should therefore be avoided.
 
* Identification of insufficient exception handling. If there are too few try-catch blocks in the code, or if catch blocks are left empty, the consequences of program errors can be serious. A sketch of such a check follows this list.
 
* Identification of which sections of source code have been altered since the last software test, i.e. where tests must be performed and where they need not be. This information enables software tests to be planned more intelligently: new functionality can be tested more intensively, or resources can be saved. A sketch of this analysis also follows this list.
 
* Knowledge of how much cost and effort will be required for the development or extension of a particular software module:
** When extending existing software modules, a recommendation for action could be to undertake code refactoring.
** For newly developed functionality, a target/actual comparison of the costs can be performed. If the causes of deviations from the plan are identified, measures can be implemented to increase accuracy in future planning.
 
* By tracing which developer (or team) produced which source code and examining the software created over a sustained period, deficiencies can be classified as one-off lapses in quality, as evidence of a need for better employee training, or as an indication that the software development process requires further optimization.
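As an illustration of the exception-handling check mentioned above, the following sketch uses Python's <code>ast</code> module to flag empty exception handlers (catch blocks that contain only <code>pass</code>); the same idea applies to try-catch blocks in other languages.

<syntaxhighlight lang="python">
import ast
import sys

def find_empty_handlers(path):
    """Return the line numbers of except handlers whose body is a lone 'pass'."""
    with open(path, encoding="utf-8") as source:
        tree = ast.parse(source.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler):
            if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        for lineno in find_empty_handlers(source_file):
            print(f"{source_file}:{lineno}: empty except block")
</syntaxhighlight>
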
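The set of files altered since the last test run can likewise be extracted from revision control. The sketch below assumes a Git repository in which the last tested state is marked with a tag; the tag name <code>last-test</code> is hypothetical.

<syntaxhighlight lang="python">
import subprocess

def files_changed_since(ref="last-test", repo_path="."):
    """List the files altered between the given reference and the current state."""
    diff = subprocess.run(
        ["git", "-C", repo_path, "diff", "--name-only", f"{ref}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in diff.splitlines() if line]

if __name__ == "__main__":
    for path in files_changed_since():
        print(path)
</syntaxhighlight>
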
 
 
==See also==
 
{{Portal|Software Testing}}
*[[Program analysis (computer science)]]
*[[Dynamic program analysis]]
* [http://www.hpi.uni-potsdam.de/doellner/publications/year/2009/825/BVD09.html Projecting Code Changes onto Execution Traces to Support Localization of Recently Introduced Bugs]
* [http://www.hpi.uni-potsdam.de/doellner/publications/year/2013/2284/KTD2013.html SyncTrace: Visual Thread-Interplay Analysis]
 
[[Category:Program analysis]]