Program optimization: Difference between revisions

==Overview==
Although the term "optimization" is derived from "optimum",<ref>{{Cite book |last1=Antoniou |first1=Andreas |url=https://link.springer.com/content/pdf/10.1007/978-1-0716-0843-2.pdf |title=Practical Optimization |last2=Lu |first2=Wu-Sheng |series=Texts in Computer Science |publisher=[[Springer Publishing|Springer]] |year=2021 |edition=2nd |pages=1 |doi=10.1007/978-1-0716-0843-2 |isbn=978-1-0716-0841-8 |language=en}}</ref> achieving a truly optimal system is rare in practice; producing one is referred to as [[superoptimization]]. Optimization typically focuses on improving a system with respect to a specific quality metric rather than making it universally optimal. This often leads to trade-offs, where enhancing one metric may come at the expense of another. One frequently cited example is the [[space-time tradeoff]], where reducing a program’s execution time can increase its memory consumption. Conversely, in scenarios where memory is limited, engineers might prioritize a slower [[algorithm]] to conserve space. There is rarely a single design that excels in all situations, so [[software engineers|programmers]] must prioritize the attributes most relevant to the application at hand. Metrics for software include throughput, [[Frames per second|latency]], [[RAM|volatile memory usage]], [[Disk storage|persistent storage]], [[internet usage]], [[energy consumption]], and hardware [[wear and tear]]. The most commonly optimized metric is speed.
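
As a minimal illustrative sketch of the space-time tradeoff (hypothetical C code; the function names are invented for this example), the following fragment counts the set bits of a byte either by looping over its bits, using no extra memory, or by consulting a precomputed 256-byte lookup table that trades memory for a single array access:

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Slow path: no extra memory, one loop iteration per bit. */
static int popcount_loop(uint8_t x) {
    int n = 0;
    for (; x; x >>= 1)
        n += x & 1;
    return n;
}

/* Fast path: a 256-byte table trades memory for a single lookup. */
static uint8_t table[256];

static void build_table(void) {
    for (int i = 0; i < 256; i++)
        table[i] = (uint8_t)popcount_loop((uint8_t)i);
}

int main(void) {
    build_table();
    /* Same result either way; the table answers in one memory access. */
    printf("%d %d\n", popcount_loop(0xF5), table[0xF5]);
    return 0;
}
</syntaxhighlight>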
 
Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained. Consequently, optimization processes usually slow once sufficient improvements are achieved. Fortunately, significant gains often occur early in the optimization process, making it practical to stop before reaching [[diminishing returns]].
In some cases, adding more [[main memory|memory]] can make a program run faster. For example, a filtering program commonly reads each line, filters it, and outputs that line immediately. This uses only enough memory for one line, but performance is typically poor due to the latency of each disk read. Performance can be greatly improved by reading the entire file and then writing the filtered result, though this uses much more memory. Caching the result is similarly effective, though it also requires more memory.
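
A minimal sketch of this idea in C (illustrative code; the "ERROR" filter criterion is an arbitrary choice for this example): the program still filters line by line, but supplying a large standard I/O buffer trades roughly a megabyte of memory for far fewer disk reads:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Trade memory for speed: a 1 MiB stdio buffer means far fewer
       disk reads than the small default buffer. Must be set before
       the first read from the stream. */
    static char buf[1 << 20];
    setvbuf(stdin, buf, _IOFBF, sizeof buf);

    char line[4096];
    while (fgets(line, sizeof line, stdin)) {
        /* Filter: keep only lines containing "ERROR". */
        if (strstr(line, "ERROR"))
            fputs(line, stdout);
    }
    return 0;
}
</syntaxhighlight>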
 
==When to optimize==
<!-- This section is linked from [[Python (programming language)]] -->
 
In practice, it is often necessary to keep performance goals in mind when first designing software, yet programmers must balance various tradeoffs: development cost is significant, and modern hardware is often fast enough that extensive optimization does not pay off.
 
Modern compilers are effective enough that the intended performance increases sometimes fail to materialize. Because compilers perform many optimizations automatically, some hand optimizations may yield an executable identical to the unoptimized code. Hardware can likewise reduce the impact of micro-optimization; for example, hardware may already cache data that is also cached at the software level.
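
For instance, replacing a multiplication by two with a bit shift is a classic hand micro-optimization, yet modern optimizing compilers perform this strength reduction automatically, so the two C functions below typically compile to identical machine code (an illustrative sketch; the exact output depends on the compiler and optimization level):

<syntaxhighlight lang="c">
/* Hand "optimized": shift instead of multiply. */
unsigned twice_shift(unsigned x) { return x << 1; }

/* Straightforward version. */
unsigned twice_mul(unsigned x) { return x * 2; }

/* With optimization enabled (e.g., gcc -O2), both functions
   typically compile to the same instructions, so the hand
   micro-optimization gains nothing. */
</syntaxhighlight>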
 
==Macros==
In particular, for [[just-in-time compiler]]s, the performance of the [[Run time environment|run time]] compile component, which executes together with its target code, is the key to improving overall execution speed.
 
==False optimization==
 
Sometimes, "optimizations" may hurt performance. Parallelism and concurrency introduce significant overhead, particularly in energy usage; C code rarely uses explicit multiprocessing, yet it typically runs faster than code written in most other programming languages. Disk caching, paging, and swapping often significantly increase energy usage and hardware wear and tear. Running processes in the background to improve startup time slows down all other processes.
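
As a minimal sketch of parallelization overhead (hypothetical C code assuming POSIX threads; compile with <code>-lpthread</code>), creating a thread to sum eight numbers costs far more than simply summing them serially:

<syntaxhighlight lang="c">
#include <pthread.h>
#include <stdio.h>

/* A task far too small to parallelize profitably. */
static long data[8] = {1, 2, 3, 4, 5, 6, 7, 8};

static void *sum_worker(void *arg) {
    long *out = arg, s = 0;
    for (int i = 0; i < 8; i++)
        s += data[i];
    *out = s;
    return NULL;
}

int main(void) {
    long serial = 0, parallel = 0;

    /* Serial: a handful of additions. */
    for (int i = 0; i < 8; i++)
        serial += data[i];

    /* "Parallel": thread creation and joining alone typically cost
       orders of magnitude more than the work being offloaded. */
    pthread_t t;
    pthread_create(&t, NULL, sum_worker, &parallel);
    pthread_join(t, NULL);

    printf("%ld %ld\n", serial, parallel);
    return 0;
}
</syntaxhighlight>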
 
==See also==