{{multiple issues|{{original research|date=September 2016}}
{{essay like|date=July 2017}}
{{Refimprove section|date=February 2018}}|collapsed=|section=}}
In [[computer science]], '''program optimization''', '''code optimization''', or '''software optimization''' is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources.
==General==
Although the term "optimization" is derived from "optimum",<ref>{{Cite book |last1=Antoniou |first1=Andreas |url=https://link.springer.com/content/pdf/10.1007/978-1-0716-0843-2.pdf |title=Practical Optimization |last2=Lu |first2=Wu-Sheng |series=Texts in Computer Science |publisher=[[Springer Publishing|Springer]] |year=2021 |edition=2nd |pages=1 |doi=10.1007/978-1-0716-0843-2 |isbn=978-1-0716-0841-8 |language=en}}</ref> achieving a truly optimal system is rare in practice; the process of producing truly optimal code for a given task is known as [[superoptimization]]. Optimization typically focuses on improving a system with respect to a specific quality metric rather than making it universally optimal. This often leads to trade-offs, where enhancing one metric may come at the expense of another. One frequently cited example is the [[space-time tradeoff]], where reducing a program’s execution time can increase its memory consumption. Conversely, in scenarios where memory is limited, engineers might prioritize a slower [[algorithm]] to conserve space. There is rarely a single design that can excel in all situations, requiring [[software engineers|programmers]] to prioritize attributes most relevant to the application at hand. Metrics for software include throughput, [[Latency (engineering)|latency]], [[RAM|volatile memory usage]], [[Disk storage|persistent storage]], [[internet usage]], [[energy consumption]], and hardware [[wear and tear]]. The most common metric is speed.
Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained. Consequently, optimization processes usually slow once sufficient improvements are achieved. Fortunately, significant gains often occur early in the optimization process, making it practical to stop before reaching [[diminishing returns]].
==Levels of optimization==
Optimization can occur at a number of levels. Typically the higher levels have greater impact but are harder to change later in a project, often requiring significant changes or a complete rewrite. Thus optimization can typically proceed via refinement from higher to lower levels, with initial gains being larger and achieved with less work, and later gains being smaller and requiring more work. However, in some cases overall performance depends on performance of very low-level portions of a program, and small changes at a late stage or early consideration of low-level details can have outsized impact. Typically some consideration is given to efficiency throughout a project{{snd}} though this varies significantly{{snd}} but major optimization is often considered a refinement to be done late, if ever. On longer-running projects there are typically cycles of optimization, where improving one area reveals limitations in another; these are typically curtailed when performance is acceptable or gains become too small or costly. Best practices for optimization during iterative development cycles include continuous monitoring for performance issues coupled with regular performance testing.<ref>{{cite web |title= Performance Optimization in Software Development: Speeding Up Your Applications|url=https://senlainc.com/blog/performance-optimization-in-software-development/#best-practices-for-performance-optimization |access-date=12 July 2025}}</ref><ref>{{cite web |author=Agrawal, Amit |title= Maximizing Efficiency: Implementing a Performance Monitoring System |url=https://www.developers.dev/tech-talk/implement-a-system-for-monitoring-application.html |access-date=12 July 2025}}</ref>
As performance is part of the specification of a program{{snd}} a program that is unusably slow is not fit for purpose: a video game running at 60 frames per second is acceptable, but 6 frames per second is unacceptably choppy{{snd}} performance is a consideration from the start, to ensure that the system is able to deliver sufficient performance, and early prototypes need to have roughly acceptable performance for there to be confidence that the final system will (with optimization) achieve acceptable performance. This is sometimes omitted in the belief that optimization can always be done later, resulting in prototype systems that are far too slow{{snd}} often by an [[order of magnitude]] or more{{snd}} and systems that ultimately are failures because they architecturally cannot achieve their performance goals, such as the [[Intel 432]] (1981); or ones that take years of work to achieve acceptable performance, such as Java (1995), which only achieved acceptable performance with [[HotSpot (virtual machine)|HotSpot]] (1999).
===Design level===
At the highest level, the design may be optimized to make best use of the available resources, given goals, constraints, and expected use/load. The architectural design of a system overwhelmingly affects its performance. For example, a system that is network latency-bound (where network latency is the main constraint on overall performance) would be optimized to minimize network trips, ideally making a single request (or no requests, as in a [[push protocol]]) rather than multiple roundtrips. Choice of design depends on the goals: when designing a [[compiler]], if fast compilation is the key priority, a [[one-pass compiler]] is faster than a [[multi-pass compiler]] (assuming same work), but if speed of output code is the goal, a slower multi-pass compiler fulfills the goal better, even though it takes longer itself. Choice of platform and programming language occurs at this level, and changing them frequently requires a complete rewrite, though a modular system may allow rewrite of only some component{{snd}} for example, for a Python program one may rewrite performance-critical sections in C. In a distributed system, choice of architecture ([[client-server]], [[peer-to-peer]], etc.) occurs at the design level, and may be difficult to change, particularly if all components cannot be replaced in sync (e.g., old clients).
===Algorithms and data structures===
Given an overall design, a good choice of [[algorithmic efficiency|efficient algorithms]] and [[data structure]]s, and efficient implementation of these algorithms and data structures comes next. After design, the choice of [[algorithm]]s and data structures affects efficiency more than any other aspect of the program. Generally data structures are more difficult to change than algorithms, as a data structure and its performance assumptions are used throughout the program, though this can be minimized by the use of [[abstract data type]]s in function definitions, and keeping the concrete data structure definitions restricted to a few places. Changes in data structures mapped to a database may require schema migration and other complex software or infrastructure changes.<ref>{{cite web |author=Mullins, Craig S. |title=The Impact of Change on Database Structures |url=https://www.dbta.com/Columns/DBA-Corner/The-Impact-of-Change-on-Database-Structures-101931.aspx |access-date=12 July 2025}}</ref>
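For illustration, one common way to keep the concrete data structure confined to a few places is an opaque type: callers depend only on an abstract interface, so the representation can be replaced later without touching them. The following C sketch uses illustrative names and omits error handling:
<syntaxhighlight lang="c">
/* queue.h (interface): callers see only an opaque handle,
   never the concrete representation. */
typedef struct queue queue;
queue *queue_create(void);
void   queue_push(queue *q, int value);
int    queue_pop(queue *q);            /* caller must ensure the queue is non-empty */

/* queue.c (implementation): the only place that knows the concrete data
   structure. Here it is a small ring buffer; it could become a linked list
   or a growable array without changing any caller. */
#include <stdlib.h>
struct queue { int items[64]; unsigned head, tail; };

queue *queue_create(void)          { return calloc(1, sizeof(queue)); }
void   queue_push(queue *q, int v) { q->items[q->tail++ % 64] = v; }  /* no overflow check in this sketch */
int    queue_pop(queue *q)         { return q->items[q->head++ % 64]; }
</syntaxhighlight>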
For algorithms, this primarily consists of ensuring that algorithms are constant O(1), logarithmic O(log ''n''), linear O(''n''), or in some cases log-linear O(''n'' log ''n'') in the input (both in space and time). Algorithms with quadratic complexity O(''n''<sup>2</sup>) fail to scale, and even linear algorithms cause problems if repeatedly called, and are typically replaced with constant or logarithmic if possible.
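As an illustration of replacing a quadratic algorithm with a log-linear one, the following C sketch detects duplicates in an array first by comparing every pair of elements, O(''n''<sup>2</sup>), and then by sorting once and scanning adjacent elements, O(''n'' log ''n''):
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>

/* Quadratic: compare every pair of elements. */
static int has_duplicate_quadratic(const int *a, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return 1;
    return 0;
}

static int cmp_int(const void *p, const void *q) {
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

/* Log-linear: sort once, then check adjacent elements. */
static int has_duplicate_sorted(int *a, size_t n) {
    qsort(a, n, sizeof a[0], cmp_int);
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] == a[i])
            return 1;
    return 0;
}

int main(void) {
    int data[] = {4, 7, 1, 9, 7, 3};
    size_t n = sizeof data / sizeof data[0];
    printf("quadratic:  %d\n", has_duplicate_quadratic(data, n));
    printf("log-linear: %d\n", has_duplicate_sorted(data, n));
    return 0;
}
</syntaxhighlight>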
===Compile level===
Use of an [[optimizing compiler]] with optimizations enabled tends to ensure that the [[executable program]] is optimized at least as much as the compiler can reasonably predict.
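For example (a minimal sketch; the exact transformation depends on the compiler, target, and flags such as <code>-O2</code>), the loop below is the kind of code an optimizing compiler can rewrite on its own, through strength reduction, vectorization, or replacement with a closed-form expression, without any change to the source:
<syntaxhighlight lang="c">
#include <stdio.h>

/* A straightforward summation loop. Built without optimization the loop runs
   as written; built with optimization (e.g. "cc -O2 sum.c") mainstream
   compilers typically vectorize it or reduce it to a closed-form value. */
static unsigned long sum_to_n(unsigned long n) {
    unsigned long total = 0;
    for (unsigned long i = 1; i <= n; i++)
        total += i;
    return total;
}

int main(void) {
    printf("%lu\n", sum_to_n(100000UL));
    return 0;
}
</syntaxhighlight>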
===Assembly level===
With more modern [[optimizing compiler]]s and the greater complexity of recent [[CPU]]s, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step.
Much of the code written today is intended to run on as many machines as possible. As a consequence, programmers and compilers do not always take advantage of the more efficient instructions provided by newer CPUs or of quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor that expects a different tuning of the code.
Typically today rather than writing in assembly language, programmers will use a [[disassembler]] to analyze the output of a compiler and change the high-level source code so that it can be compiled more efficiently, or understand why it is inefficient.
===Run time===
[[Just-in-time compilation|Just-in-time]] compilers can produce customized machine code based on run-time data, at the cost of compilation overhead. This technique dates to the earliest [[regular expression]] engines, and has become widespread with Java HotSpot and V8 for JavaScript. In some cases [[adaptive optimization]] may be able to perform run-time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors.
[[Profile-guided optimization]] is an ahead-of-time (AOT) compilation optimization technique based on run time profiles, and is similar to a static "average case" analog of the dynamic technique of adaptive optimization.
[[Self-modifying code]] can alter itself in response to run time conditions in order to optimize code; this was more common in assembly language programs.
Some [[CPU design]]s can perform some optimizations at run time. Some examples include [[out-of-order execution]], [[speculative execution]], [[instruction pipeline]]s, and [[branch predictor]]s. Compilers can help the program take advantage of these CPU features, for example through [[instruction scheduling]].
===Platform dependent and independent optimizations===
In computer science, resource consumption often follows a form of [[power law]] distribution, and the [[Pareto principle]] can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations.<ref>{{cite book | last = Wescott | first = Bob | title = The Every Computer Performance Book, Chapter 3: Useful laws | publisher = [[CreateSpace]] | date = 2013 | isbn = 978-1482657753}}</ref> In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).
More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data — the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit, and thus a [[hybrid algorithm]] or [[adaptive algorithm]] may be faster than any single algorithm. A performance profiler can be used to narrow down decisions about which functionality fits which conditions.<ref>{{cite web |url=http://www.developforperformance.com/PerformanceProfilingWithAFocus.html#FittingTheSituation |author=Krauss, Kirk J. |title=Performance Profiling with a Focus |access-date=15 August 2017}}</ref>
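For example, a hybrid sort might switch to a simple algorithm below a size threshold, since the low setup cost and constant factors of insertion sort beat more complex algorithms on small inputs. The following C sketch uses an illustrative cutoff that would in practice be chosen by measurement:
<syntaxhighlight lang="c">
#include <stdlib.h>

#define SMALL_THRESHOLD 32   /* illustrative cutoff, tuned by profiling */

static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

/* Simple algorithm: O(n^2) growth, but negligible setup cost. */
static void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

/* Hybrid: pick the algorithm based on the input size. */
void hybrid_sort(int *a, size_t n) {
    if (n <= SMALL_THRESHOLD)
        insertion_sort(a, n);
    else
        qsort(a, n, sizeof a[0], cmp_int);
}
</syntaxhighlight>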
Performance profiling provides not only bottleneck detection but also a variety of methods for optimization guidance. [[Empirical algorithmics]] is the practice of using empirical methods, typically performance profiling, to study the behavior of algorithms, for developer understanding that may lead to human-planned optimizations. [[Profile-guided optimization]] is the machine-driven use of profiling data as input to an optimizing compiler or interpreter. Some programming languages are associated with tools for profile-guided optimization.<ref>{{cite web |url=https://doc.rust-lang.org/beta/rustc/profile-guided-optimization.html |title=Profile-guided Optimization |access-date=12 July 2025}}</ref> Some performance profiling methods emphasize enhancements based on [[cache (computing)|cache]] utilization.<ref>{{Cite book |last=The Valgrind Developers |url=https://www.cs.cmu.edu/afs/cs.cmu.edu/project/cmt-40/Nice/RuleRefinement/bin/valgrind-3.2.0/docs/html/cl-manual.html#cl-manual.tools |title=Valgrind User Manual |section=5.2.2 |publisher=Network Theory Ltd. |year=2006 |language=en}}</ref> Other benefits of performance profiling may include improved resource management and an enhanced user experience.<ref>{{cite web |author= Kodlekere, Ranjana |title= Performance Profiling: Explained with Stages| url=https://testsigma.com/blog/performance-profiling/#benefits-of-performance-profiling |access-date=12 July 2025}}</ref>
In some cases, adding more [[main memory|memory]] can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor, due to the latency of each disk read. Performance can be greatly improved by reading the entire file and then writing the filtered result, though this uses much more memory. Caching the result is similarly effective, though also requiring larger memory use.
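A minimal C sketch of such a filter (the predicate <code>keep_line</code> is illustrative): enlarging the standard I/O buffers trades additional memory for fewer underlying disk operations, while the line-at-a-time structure of the program is unchanged.
<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

/* Illustrative predicate: keep lines containing "error". */
static int keep_line(const char *line) {
    return strstr(line, "error") != NULL;
}

int main(void) {
    /* 1 MiB buffers: more memory, fewer system calls and disk operations.
       setvbuf must be called before any other I/O on the streams. */
    static char inbuf[1 << 20], outbuf[1 << 20];
    setvbuf(stdin,  inbuf,  _IOFBF, sizeof inbuf);
    setvbuf(stdout, outbuf, _IOFBF, sizeof outbuf);

    char line[4096];
    while (fgets(line, sizeof line, stdin))
        if (keep_line(line))
            fputs(line, stdout);
    return 0;
}
</syntaxhighlight>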
==When to optimize==
<!-- This section is linked from [[Python (programming language)]] -->
Typically, optimization involves choosing the best overall algorithms and data structures.<ref>{{cite web|url=https://ubiquity.acm.org/article.cfm?id=1513451|title=The Fallacy of Premature Optimization}}</ref> Frequently, algorithmic improvements can provide performance gains of several orders of magnitude, whereas micro-optimizations rarely improve performance by more than a few percent.<ref>{{cite web|url=https://ubiquity.acm.org/article.cfm?id=1513451|title=The Fallacy of Premature Optimization}}</ref> If one waits to optimize until the end of the development cycle, then changing the algorithm requires a complete rewrite.
Frequently, micro-optimization can reduce [[readability]] and complicate programs or systems. That can make programs more difficult to maintain and debug.
[[Donald Knuth]] made the following two statements on optimization:
<blockquote>"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"<ref name="autogenerated268">{{cite journal | last = Knuth | first = Donald | citeseerx = 10.1.1.103.6084 | title = Structured Programming with go to Statements | journal = ACM Computing Surveys | volume = 6 | issue = 4 |date=December 1974 | page = 268 | doi = 10.1145/356635.356640 | s2cid = 207630080 }}</ref></blockquote>
<blockquote> "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"<ref name="autogenerated268"/></blockquote>
"Premature optimization" is often used as a rallying cry against all optimization in all situations for all purposes. <ref>{{cite web|url=https://ubiquity.acm.org/article.cfm?id=1513451|title=The Fallacy of Premature Optimization}}</ref><ref>{{cite web|url=https://www.javacodegeeks.com/2012/11/not-all-optimization-is-premature.html|title=Not All Optimization is Premature}}</ref><ref>{{cite web|url=https://www.infoworld.com/article/2165382/when-premature-optimization-isn-t.html|title=When Premature Optimization Is'nt}}</ref><ref>{{cite web|url=https://prog21.dadgum.com/106.html|title="Avoid Premature Optimization" Does Not Mean "Write Dump Code"}}</ref> Frequently, [[SOLID|Clean Code]] causes code to be more complicated than simpler more efficient code. <ref>{{cite web|url=https://devshift.substack.com/p/premature-abstractions|title=Premature Abstractions}}</ref>
When deciding whether to optimize a specific part of the program, [[Amdahl's law]] should always be considered: the impact on the overall program depends very much on how much time is actually spent in that specific part, which is not always clear from looking at the code without a performance analysis.
In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization.
Modern compilers are efficient enough that the intended performance increases sometimes fail to materialize: because compilers perform many optimizations automatically, some hand-applied optimizations may yield an executable identical to the one produced without them. Hardware can likewise reduce the impact of micro-optimization; for example, hardware may cache data that is also cached at a software level.
==Macros==
In some procedural languages, such as [[C (programming language)|C]] and [[C++]], macros are implemented using token substitution. Nowadays, [[inline function]]s can be used as a [[type safe]] alternative in many cases. In both cases, the inlined function body can then undergo further compile-time optimizations by the compiler, including [[constant folding]], which may move some computations to compile time.
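For illustration, the following C sketch contrasts a token-substitution macro with an inline function; with optimizations enabled, a typical compiler folds both constant calls to the value 144 at compile time, but only the function is type-checked and evaluates its argument exactly once:
<syntaxhighlight lang="c">
#include <stdio.h>

/* Token substitution: the argument expression is pasted verbatim,
   so it may be evaluated more than once and is not type-checked. */
#define SQUARE_MACRO(x) ((x) * (x))

/* Type-safe alternative: the compiler can inline the call and, for a
   constant argument, fold the result at compile time. */
static inline int square_inline(int x) { return x * x; }

int main(void) {
    printf("%d\n", SQUARE_MACRO(12));   /* typically folded to 144 */
    printf("%d\n", square_inline(12));  /* typically folded to 144 */
    return 0;
}
</syntaxhighlight>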
In many [[functional programming]] languages, macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which it is claimed makes them safer to use. Since in many cases interpretation is used, that is one way to ensure that such computations are only performed at parse-time, and sometimes the only way.
[[Lisp programming language|Lisp]] originated this style of macro,{{Citation needed|date=September 2008}} and such macros are often called "Lisp-like macros". A similar effect can be achieved by using [[template metaprogramming]] in [[C++]].
In both cases, work is moved to compile-time. The difference between [[C (programming language)|C]] macros on one side, and Lisp-like macros and [[C++]] [[template metaprogramming]] on the other side, is that the latter tools allow performing arbitrary computations at compile-time/parse-time, while expansion of [[C (programming language)|C]] macros does not perform any computation, and relies on the optimizer's ability to perform it. Additionally, [[C (programming language)|C]] macros do not directly support [[recursion (computer science)|recursion]] or [[iteration]], and so are not [[Turing complete]].
Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior [[algorithm]].
Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or [[system administrator]]s explicitly change code so that the overall system performs better.
Use a [[Profiler (computer science)|profiler]] (or [[Profiling (computer programming)|performance analyzer]]) to find the sections of the program that are taking the most resources{{snd}} the ''bottleneck''. Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong.{{citation needed|date=May 2012}} Optimizing an unimportant piece of code will typically do little to help the overall performance.
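Where a full profiler is unavailable, a crude first step is to time a suspected section directly, as in the following C sketch (the routine <code>work</code> stands in for the code under suspicion); a real profiler measures the whole program without such manual edits and is far less error-prone:
<syntaxhighlight lang="c">
#include <stdio.h>
#include <time.h>

/* Illustrative routine standing in for a suspected bottleneck. */
static long work(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i % 7;
    return sum;
}

int main(void) {
    clock_t start = clock();
    long result = work(50000000L);
    clock_t end = clock();
    /* CPU time spent in the candidate section, in seconds. */
    printf("result=%ld cpu=%.3f s\n",
           result, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}
</syntaxhighlight>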
In particular, for [[just-in-time compiler]]s the performance of the [[Run time environment|run time]] compile component, executing together with its target code, is the key to improving overall execution speed.
==False optimization==
Sometimes, "optimizations" may actually hurt performance. Parallelism and concurrency cause significant overhead, especially in energy usage; for instance, C code rarely uses explicit multiprocessing, yet it typically runs faster than code in most other programming languages. Disk caching, paging, and swapping often cause significant increases in energy usage and hardware wear and tear. Running processes in the background to improve startup time slows down all other processes.
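As an illustration of parallelism overhead (a minimal sketch assuming a POSIX system, compiled with <code>-pthread</code>), creating and joining a thread for each tiny task costs far more in scheduling, synchronization, and energy than simply doing the work in a loop:
<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdio.h>

static long total;   /* each thread is joined before the next starts, so there is no race */

static void *add_one(void *arg) {
    (void)arg;
    total += 1;      /* trivial work, far cheaper than creating the thread */
    return NULL;
}

int main(void) {
    /* One thread per tiny task: the creation and join overhead dwarfs the work. */
    for (int i = 0; i < 10000; i++) {
        pthread_t t;
        if (pthread_create(&t, NULL, add_one, NULL) != 0)
            return 1;
        pthread_join(t, NULL);
    }
    printf("%ld\n", total);   /* a plain loop would compute this far faster */
    return 0;
}
</syntaxhighlight>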
==See also==
<!-- Please keep entries in alphabetical order & add a short description {{annotated link|WP:SEEALSO}} -->
{{div col|small=yes|colwidth=20em}}
* {{annotated link|Benchmark (computing)|Benchmark}}
* {{annotated link|Cache (computing)}}
* {{annotated link|Empirical algorithmics}}
* {{annotated link|Optimizing compiler}}
* {{annotated link|Performance engineering}}
* {{annotated link|Performance prediction}}
* {{annotated link|Performance tuning}}
* {{annotated link|Profile-guided optimization}}
* {{annotated link|Software development}}
* {{annotated link|Software performance testing}}
* {{annotated link|Static code analysis}}
{{div col end}}
<!-- please keep entries in alphabetical order -->
==References==
{{Reflist}}
==Further reading==
{{wikibooks|Optimizing Code for Speed}}
* [[Jon Bentley (computer scientist)|Jon Bentley]]: ''Writing Efficient Programs'', {{ISBN|0-13-970251-2}}.
* [[Donald Knuth]]: ''[[The Art of Computer Programming]]''
* [http://www.ece.cmu.edu/~franzf/papers/gttse07.pdf How To Write Fast Numerical Code: A Small Introduction]
* [http://people.redhat.com/drepper/cpumemory.pdf "What Every Programmer Should Know About Memory"] by Ulrich Drepper{{snd}} explains the structure of modern memory subsystems and suggests how to utilize them efficiently
==External links==
* [http://www.new-npac.org/projects/cdroms/cewes-1999-06-vol1/nhse/hpccsurvey/orgs/sgi/bentley.html Writing efficient programs ("Bentley's Rules")] by [[Jon Bentley (computer scientist)|Jon Bentley]]
* [http://queue.acm.org/detail.cfm?id=1117403 "Performance Anti-Patterns"] by Bart Smaalders
{{Compiler optimizations}}
{{DEFAULTSORT:Program Optimization}}
[[Category:Software optimization|*]]