Program optimization
 
==Overview==
Although the term "optimization" is derived from "optimum",<ref>{{Cite book |last1=Antoniou |first1=Andreas |url=https://link.springer.com/content/pdf/10.1007/978-1-0716-0843-2.pdf |title=Practical Optimization |last2=Lu |first2=Wu-Sheng |series=Texts in Computer Science |publisher=[[Springer Publishing|Springer]] |year=2021 |edition=2nd |pages=1 |doi=10.1007/978-1-0716-0843-2 |isbn=978-1-0716-0841-8 |language=en}}</ref> achieving a truly optimal system (the goal of [[superoptimization]]) is rare in practice. Optimization typically focuses on improving a system with respect to a specific quality metric rather than making it universally optimal. This often leads to trade-offs, where enhancing one metric may come at the expense of another. One frequently cited example is the [[space-time tradeoff]], where reducing a program's execution time can increase its memory consumption. Conversely, in scenarios where memory is limited, engineers might prioritize a slower [[algorithm]] to conserve space. There is rarely a single design that can excel in all situations, requiring [[software engineers|programmers]] to prioritize the attributes most relevant to the application at hand. Metrics for software include throughput, [[Frames per second|latency]], [[RAM|volatile memory usage]], [[Disk storage|persistent storage]], [[internet usage]], [[energy consumption]], and hardware [[wear and tear]]. The most common metric is speed.
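
A minimal sketch of this tradeoff in C (the bit-counting example below is constructed for illustration and is not drawn from a cited source): the first version spends 256 bytes on a precomputed table so that each query costs a single memory access, while the second recomputes the answer in a loop and uses no extra memory.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Time-efficient but space-hungry: a precomputed table of bit counts. */
static uint8_t popcount_table[256];

static void init_table(void) {
    for (int i = 0; i < 256; i++)
        popcount_table[i] = (uint8_t)((i & 1) + popcount_table[i / 2]);
}

static int popcount_fast(uint8_t x) {
    return popcount_table[x];            /* one table lookup per call */
}

/* Space-efficient but slower: recompute the count on every call. */
static int popcount_small(uint8_t x) {
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }   /* up to 8 iterations per call */
    return n;
}

int main(void) {
    init_table();
    printf("%d %d\n", popcount_fast(0xF5), popcount_small(0xF5)); /* prints "6 6" */
    return 0;
}
</syntaxhighlight>

Which version is preferable depends on whether the metric being optimized is execution time or memory footprint.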
 
Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained. Consequently, optimization processes usually slow once sufficient improvements are achieved, without striving for perfection. Fortunately, significant gains often occur early in the optimization process, making it practical to stop before reaching [[diminishing returns]].
 
==Levels of optimization==
 
===Compile level===
Use of an [[optimizing compiler]] with optimizations enabled tends to ensure that the [[executable program]] is optimized at least as much as the compiler can reasonably perform. See [[Optimizing compiler]] for more details.
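
For example, with [[GNU Compiler Collection|GCC]] the degree of optimization is selected with <code>-O</code> flags (typical invocations, shown here for illustration; <code>program.c</code> is a placeholder file name):

<syntaxhighlight lang="console">
$ gcc -O0 -o program program.c                # little optimization; fast compiles, easiest debugging
$ gcc -O2 -o program program.c                # the common, well-tested optimization level
$ gcc -O3 -march=native -o program program.c  # more aggressive, tuned to the build machine's CPU
</syntaxhighlight>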
 
===Assembly level===
==When to optimize==
<!-- This section is linked from [[Python (programming language)]] -->
 
Typically, optimization involves choosing the best overall algorithms and data structures.<ref>{{cite web|url=https://ubiquity.acm.org/article.cfm?id=1513451|title=The Fallacy of Premature Optimization}}</ref> Algorithmic improvements frequently yield performance gains of several orders of magnitude, whereas micro-optimizations rarely improve performance by more than a few percent.<ref>{{cite web|url=https://ubiquity.acm.org/article.cfm?id=1513451|title=The Fallacy of Premature Optimization}}</ref> If optimization is postponed until the end of the development cycle, changing the algorithm requires a complete rewrite.
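
As an illustration (the functions below are constructed for this example, not drawn from a cited source), replacing a linear scan with a binary search on sorted data is an algorithmic improvement whose payoff grows with input size, dwarfing anything a micro-optimization of the linear loop could achieve:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdlib.h>

/* O(n): examines every element in the worst case. */
static int linear_search(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return (int)i;
    return -1;
}

/* O(log n): requires sorted input; for a million elements it makes
   about 20 comparisons where the linear scan may make a million. */
static int binary_search(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else if (a[mid] > key)
            hi = mid;
        else
            return (int)mid;
    }
    return -1;
}

int main(void) {
    int a[] = {2, 3, 5, 7, 11, 13};      /* sorted input */
    size_t n = sizeof a / sizeof a[0];
    printf("%d %d\n", linear_search(a, n, 7), binary_search(a, n, 7)); /* prints "3 3" */
    return 0;
}
</syntaxhighlight>
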
Optimization can reduce [[readability]] and add code that is used only to improve [[Computer performance|performance]]. This may complicate programs or systems, making them harder to maintain and debug; micro-optimization in particular tends to carry this cost. As a result, optimization or performance tuning is often performed at the end of the [[development stage]].
 
[[Donald Knuth]] made the following two statements on optimization:
<blockquote>"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"<ref name="autogenerated268">{{cite journal |last=Knuth |first=Donald |author-link=Donald Knuth |date=December 1974 |title=Structured Programming with go to Statements |journal=ACM Computing Surveys |volume=6 |issue=4 |page=268 |doi=10.1145/356635.356640}}</ref></blockquote>
<blockquote> "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"<ref name="autogenerated268"/></blockquote>
 
"Premature optimization" is often used as a rallying cry against all optimization in all situations for all purposes. <ref>{{cite web|url=https://ubiquity.acm.org/article.cfm?id=1513451|title=The Fallacy of Premature Optimization}}</ref><ref>{{cite web|url=https://www.javacodegeeks.com/2012/11/not-all-optimization-is-premature.html|title=Not All Optimization is Premature}}</ref><ref>{{cite web|url=https://www.infoworld.com/article/2165382/when-premature-optimization-isn-t.html|title=When Premature Optimization Is'nt}}</ref><ref>{{cite web|url=https://prog21.dadgum.com/106.html|title="Avoid Premature Optimization" Does Not Mean "Write Dump Code"}}</ref> Frequently, [[SOLID|Clean Code]] causes code to be more complicated than simpler more efficient code. <ref>{{cite web|url=https://devshift.substack.com/p/premature-abstractions|title=Premature Abstractions}}</ref>
"Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.
 
When deciding what to optimize, [[Amdahl's Law]] should be used to prioritize parts based on the actual time spent in a certain part, which is not always clear from looking at the code without a [[Profiling (computer programming)|performance analysis]].
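
In its usual form (a standard statement of the law, with notation chosen here), if the part being optimized accounts for a fraction <math>p</math> of total execution time and is made <math>s</math> times faster, the overall speedup is

:<math>S = \frac{1}{(1 - p) + \frac{p}{s}}</math>

so a part consuming 10% of the running time (<math>p = 0.1</math>) can yield at most about an 11% overall improvement, no matter how large <math>s</math> becomes.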
 
In practice, it is often necessary to keep performance goals in mind when first designing software, yet programmers must balance various tradeoffs. Development cost is significant, and hardware is fast.
A better approach is therefore to design first, code from the design, and then [[profiling (computer programming)|profile]]/[[Benchmark (computing)|benchmark]] the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization.
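
For example, with the GNU toolchain one typical profile-then-optimize workflow looks like the following (shown for illustration; <code>program.c</code> is a placeholder):

<syntaxhighlight lang="console">
$ gcc -pg -O2 -o program program.c   # build with profiling instrumentation
$ ./program                          # run the program; it writes gmon.out
$ gprof program gmon.out             # report where execution time was actually spent
</syntaxhighlight>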
 
Modern compilers are efficient enough that the intended performance increases sometimes fail to materialize. Since compilers perform many automatic optimizations, some hand optimizations may yield an identical executable. Hardware can likewise reduce the impact of micro-optimization; for example, caching data at the application level that is again cached at the operating system level does not yield improvements in execution. Even so, it is a rare case when the programmer will remove failed optimizations from production code, and advances in hardware will more often than not obviate any potential improvements, yet the obscuring code will persist into the future long after its purpose has been negated.
 
==Macros==
==Time taken for optimization==
 
In particular, for [[just-in-time compiler]]s the performance of the [[Run time environment|run time]] compile component, executing together with its target code, is the key to improving overall execution speed.
 
==False optimization==
 
Sometimes, "optimizations" may hurt performance. Parallelism and concurrency causes a significant overhead performance cost, especially energy usage. Keep in mind that C code rarely uses explicit multiprocessing, yet it typically runs faster than any other programming language. Disk caching, paging, and swapping often cause significant increases to energy usage and hardware wear and tear. Running processes in the background to improve startup time slows down all other processes.
 
==See also==