==Levels of optimization==
Optimization can occur at a number of levels. Typically the higher levels have greater impact, and are harder to change later in a project, often requiring significant rework or a complete rewrite. Thus optimization can typically proceed via refinement from higher to lower, with initial gains being larger and achieved with less work, and later gains being smaller and requiring more work. However, in some cases overall performance depends on performance of very low-level portions of a program, and small changes at a late stage or early consideration of low-level details can have outsized impact. Typically some consideration is given to efficiency throughout a project{{snd}} though this varies significantly{{snd}} but major optimization is often considered a refinement to be done late, if ever. On longer-running projects there are typically cycles of optimization, where improving one area reveals limitations in another, and these are typically curtailed when performance is acceptable or gains become too small or costly. Best practices for optimization during iterative development cycles include continuous monitoring for performance issues coupled with regular performance testing.<ref>{{cite web |title= Performance Optimization in Software Development: Speeding Up Your Applications|url=https://senlainc.com/blog/performance-optimization-in-software-development/#best-practices-for-performance-optimization |access-date=12 July 2025}}</ref><ref>{{cite web |author=Agrawal, Amit |title= Maximizing Efficiency: Implementing a Performance Monitoring System |url=https://www.developers.dev/tech-talk/implement-a-system-for-monitoring-application.html |access-date=12 July 2025}}</ref>
As performance is part of the specification of a program{{snd}} a program that is unusably slow is not fit for purpose: a video game that runs at 60 frames per second (fps) is acceptable, but 6 fps is unacceptably choppy{{snd}} performance is a consideration from the start, to ensure that the system is able to deliver sufficient performance, and early prototypes need to have roughly acceptable performance for there to be confidence that the final system will (with optimization) achieve acceptable performance. This is sometimes omitted in the belief that optimization can always be done later, resulting in prototype systems that are far too slow{{snd}} often by an [[order of magnitude]] or more{{snd}} and systems that ultimately are failures because they architecturally cannot achieve their performance goals, such as the [[Intel 432]] (1981); or ones that take years of work to achieve acceptable performance, such as Java (1995), which only achieved acceptable performance with HotSpot (1999).
===Design level===
===Algorithms and data structures===
Given an overall design, a good choice of [[algorithmic efficiency|efficient algorithms]] and [[data structure]]s, and efficient implementation of these algorithms and data structures, come next. After design, the choice of [[algorithm]]s and data structures affects efficiency more than any other aspect of the program. Generally data structures are more difficult to change than algorithms, as a data structure and its performance assumptions are used throughout the program, though this can be minimized by the use of [[abstract data type]]s in function definitions, and keeping the concrete data structure definitions restricted to a few places. Changes in data structures mapped to a database may require schema migration and other complex software or infrastructure changes.<ref>{{cite web |author=Mullins, Craig S. |title=The Impact of Change on Database Structures |url=https://www.dbta.com/Columns/DBA-Corner/The-Impact-of-Change-on-Database-Structures-101931.aspx |access-date=12 July 2025}}</ref>
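The idea of restricting a concrete data structure to a few places can be sketched as follows (a minimal illustration in Python; the class and names are hypothetical, not from any library):

```python
# Hypothetical sketch: callers depend only on this small abstract
# interface, so the concrete representation can change in one place.

class SeenStore:
    """Tracks items already seen; callers never touch the representation."""

    def __init__(self):
        # Swapping this set for a list, trie, or Bloom filter
        # changes only this class, not the rest of the program.
        self._items = set()

    def add(self, item):
        self._items.add(item)

    def __contains__(self, item):
        return item in self._items

store = SeenStore()
store.add("a")
print("a" in store)   # True
print("b" in store)   # False
```

Because only `SeenStore` knows the representation, replacing the set with a structure that has different performance characteristics does not ripple through the program.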
For algorithms, this primarily consists of ensuring that algorithms are constant O(1), logarithmic O(log ''n''), linear O(''n''), or in some cases log-linear O(''n'' log ''n'') in the input (both in space and time). Algorithms with quadratic complexity O(''n''<sup>2</sup>) fail to scale, and even linear algorithms cause problems if repeatedly called, so they are typically replaced with constant-time or logarithmic alternatives where possible.
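The difference between a quadratic and a linear algorithm for the same task can be sketched as follows (an illustrative Python example, not from the article's sources):

```python
# Two implementations of the same task -- removing duplicates while
# preserving order -- with different asymptotic complexity.

def dedup_quadratic(items):
    out = []
    for x in items:          # n iterations...
        if x not in out:     # ...each scanning a list: O(n^2) overall
            out.append(x)
    return out

def dedup_linear(items):
    seen = set()
    out = []
    for x in items:          # n iterations...
        if x not in seen:    # ...each a constant-time set lookup: O(n)
            seen.add(x)
            out.append(x)
    return out

print(dedup_quadratic([3, 1, 3, 2, 1]))  # [3, 1, 2]
print(dedup_linear([3, 1, 3, 2, 1]))     # [3, 1, 2]
```

Both produce identical results; only the second scales to large inputs, because each membership test is O(1) rather than O(''n'').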
In computer science, resource consumption often follows a form of [[power law]] distribution, and the [[Pareto principle]] can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations.<ref>{{cite book | last = Wescott | first = Bob | title = The Every Computer Performance Book, Chapter 3: Useful laws | publisher = [[CreateSpace]] | date = 2013 | isbn = 978-1482657753}}</ref> In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).
More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data — the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit, and thus a [[hybrid algorithm]] or [[adaptive algorithm]] may be faster than any single algorithm. A performance profiler can be used to narrow down decisions about which functionality fits which conditions.<ref>{{cite web |url=http://www.developforperformance.com/PerformanceProfilingWithAFocus.html#FittingTheSituation |author=Krauss, Kirk J. |title=Performance Profiling with a Focus |access-date=15 August 2017}}</ref>
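A hybrid algorithm of this kind can be sketched as follows (an illustrative Python example; the cutoff value is an assumption that would in practice be tuned by profiling):

```python
# Illustrative hybrid algorithm: merge sort that switches to insertion
# sort below a small cutoff, where the simpler algorithm's low constant
# factors outweigh the asymptotic advantage of the complex one.

CUTOFF = 16  # assumed threshold; real values come from measurement

def insertion_sort(a):
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_sort(a):
    if len(a) <= CUTOFF:
        return insertion_sort(a)      # simple algorithm for small inputs
    mid = len(a) // 2
    left = hybrid_sort(a[:mid])
    right = hybrid_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(hybrid_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]
```

Production sort routines such as Timsort use the same pattern: a profiled cutoff decides which algorithm handles which input size.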
Performance profiling therefore provides not only bottleneck detection but also a variety of methods for guiding optimization. [[Empirical algorithmics]] is the practice of using empirical methods, typically performance profiling, to study the behavior of algorithms, for developer understanding that may lead to human-planned optimizations. [[Profile-guided optimization]] is the machine-driven use of profiling data as input to an optimizing compiler or interpreter. Some programming languages are associated with tools for profile-guided optimization.<ref>{{cite web |url=https://doc.rust-lang.org/beta/rustc/profile-guided-optimization.html |title=Profile-guided Optimization |access-date=12 July 2025}}</ref> Some performance profiling methods emphasize enhancements based on [[cache (computing)|cache]] utilization.<ref>{{Cite book |last=The Valgrind Developers |url=https://www.cs.cmu.edu/afs/cs.cmu.edu/project/cmt-40/Nice/RuleRefinement/bin/valgrind-3.2.0/docs/html/cl-manual.html#cl-manual.tools |title=Valgrind User Manual |section=5.2.2 |publisher=Network Theory Ltd. |year=2006 |language=en}}</ref> Other benefits of performance profiling may include improved resource management and an enhanced user experience.<ref>{{cite web |author= Kodlekere, Ranjana |title= Performance Profiling: Explained with Stages| url=https://testsigma.com/blog/performance-profiling/#benefits-of-performance-profiling |access-date=12 July 2025}}</ref>
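As one concrete instance of such empirical methods, a program can be profiled with Python's built-in <code>cProfile</code> module (a minimal sketch; the profiled function is a made-up example):

```python
# Minimal example of performance profiling with Python's standard
# cProfile and pstats modules.

import cProfile
import io
import pstats

def slow_sum(n):
    # deliberately naive workload to profile
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())   # report of where the time was actually spent
```

The resulting report attributes time to individual functions, which is the raw data both for human-planned optimization and, in compiled settings, for profile-guided optimization.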
In some cases, adding more [[main memory|memory]] can help to make a program run faster. For example, a filtering program will commonly read each line, then filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor, due to the latency of each disk read. Reading the input in large blocks reduces the number of latency-bound reads at the cost of more memory; caching results is similarly effective, though it also requires more memory.
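The memory-for-speed trade-off in such a filtering program can be sketched as follows (an illustrative Python example using a throwaway temporary file; the function name is hypothetical):

```python
# Sketch of trading memory for speed: instead of handling one line per
# read, pull the whole input into memory in a single large read, then
# filter. Memory use grows with file size, but latency-bound reads drop.

import os
import tempfile

def filter_lines_buffered(path, predicate):
    with open(path, "r") as f:
        data = f.read()        # one big read; memory ~ file size
    return [line for line in data.splitlines() if predicate(line)]

# demo with a throwaway file
fd, path = tempfile.mkstemp(text=True)
with os.fdopen(fd, "w") as f:
    f.write("keep 1\ndrop\nkeep 2\n")

print(filter_lines_buffered(path, lambda s: s.startswith("keep")))
# ['keep 1', 'keep 2']
os.remove(path)
```

In practice the buffered I/O layer of the language runtime or operating system often provides this batching automatically; the sketch only makes the trade-off explicit.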
In particular, for [[just-in-time compiler]]s the performance of the [[Run time environment|run time]] compilation component, which executes together with its target code, is key to improving overall execution speed.
==See also==
<!-- Please keep entries in alphabetical order & add a short description {{annotated link|WP:SEEALSO}} -->
{{div col|small=yes|colwidth=20em}}
* {{annotated link|Benchmark (computing)|Benchmark}}
* {{annotated link|Cache (computing)}}
* {{annotated link|Empirical algorithmics}}
* {{annotated link|Optimizing compiler}}
* {{annotated link|Performance engineering}}
* {{annotated link|Performance prediction}}
* {{annotated link|Performance tuning}}
* {{annotated link|Profile-guided optimization}}
* {{annotated link|Software development}}
* {{annotated link|Software performance testing}}
* {{annotated link|Static code analysis}}
{{div col end}}
<!-- please keep entries in alphabetical order -->
==References==