== Abstraction penalty ==
High-level languages intend to provide features that standardize common tasks, permit rich debugging, and maintain architectural agnosticism, while low-level languages often produce more efficient code through [[program optimization|optimization]] for a specific [[Computer architecture|system architecture]]. ''Abstraction penalty'' is the cost that high-level programming techniques pay for being unable to optimize performance or use certain hardware because they do not take advantage of low-level architectural resources. High-level programming exhibits features such as more generic data structures and operations, run-time interpretation, and intermediate code files, which often result in the execution of far more operations than necessary, higher memory consumption, and a larger binary program size.<ref>{{cite journal
|author=Surana P
|title=Meta-Compilation of Language Abstractions.
|year=2006
|url=http://lispnyc.org/meeting-assets/2007-02-13_pinku/SuranaThesis.pdf
|access-date=2008-03-17
|url-status=live
|archive-url=https://web.archive.org/web/20150217154926/http://lispnyc.org/meeting-assets/2007-02-13_pinku/SuranaThesis.pdf
|archive-date=2015-02-17
}}</ref><ref>{{cite web
| last = Kuketayev
| title = The Data Abstraction Penalty (DAP) Benchmark for Small Objects in Java.
| url = http://www.adtmag.com/joop/article.aspx?id=4597
| access-date = 2008-03-17
| archive-url = https://web.archive.org/web/20090111091710/http://www.adtmag.com/joop/article.aspx?id=4597
| archive-date = 2009-01-11
| url-status = dead
}}</ref><ref>{{Cite book
| last1 = Chatzigeorgiou
| last2 = Stephanides
| editor-last = Blieberger
| editor2-last = Strohmeier
| contribution = Evaluating Performance and Power Of Object-Oriented Vs. Procedural Programming Languages
| title = Proceedings - 7th International Conference on Reliable Software Technologies - Ada-Europe'2002
| year = 2002
| pages = 367
| publisher = Springer
}}</ref> For this reason, code which needs to run particularly quickly and efficiently may require the use of a lower-level language, even if a higher-level language would make the coding easier. In many cases, critical portions of a program mostly written in a high-level language can be hand-coded in [[assembly language]], leading to a much faster, more efficient, or simply more reliably functioning [[Program optimisation|optimised program]].
However, with the growing complexity of modern [[microprocessor]] architectures, well-designed compilers for high-level languages frequently produce code comparable in efficiency to what most low-level programmers can produce by hand, and the higher abstraction may allow for more powerful techniques providing better overall results than their low-level counterparts in particular settings.<ref>
{{Cite journal
|author1=Manuel Carro |author2=José F. Morales |author3=Henk L. Muller |author4=G. Puebla |author5=M. Hermenegildo | journal = Proceedings of the 2006 International Conference on Compilers, Architecture and Synthesis for Embedded Systems
| title = High-level languages for small devices: a case study
| url = http://www.clip.dia.fi.upm.es/papers/carro06:stream_interpreter_cases.pdf
| year = 2006
| publisher = ACM
}}</ref>
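As a simplified illustration (a sketch written for this section, not drawn from the cited benchmarks; the object and variable names are chosen for the example), the following Scala fragment sums the same numbers twice: once with a generic, boxed collection pipeline and once with a primitive array and an explicit loop. The generic version boxes each element and builds intermediate collections, a typical source of abstraction penalty, while the explicit loop trades brevity for efficiency:
<syntaxhighlight lang="scala">
// Illustrative sketch only; names are chosen for this example.
object AbstractionPenaltyExample {
  def main(args: Array[String]): Unit = {
    val n = 1000000

    // High-level style: a generic List[Int] boxes every element and builds
    // intermediate collections, favouring brevity and generality over efficiency.
    val boxedSum: Long = (1 to n).toList.map(_.toLong).sum

    // Lower-level style: a primitive Int array and an explicit index loop
    // avoid boxing and intermediate allocations, at the cost of more code.
    val data = Array.tabulate(n)(i => i + 1)
    var i = 0
    var rawSum = 0L
    while (i < data.length) {
      rawSum += data(i)
      i += 1
    }

    println(s"boxed: $boxedSum  raw: $rawSum") // both are 500000500000
  }
}
</syntaxhighlight>
Both versions compute the same result; the difference lies in the extra allocations and indirections introduced by the more abstract formulation, which an optimizing compiler may or may not be able to remove.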
High-level languages are designed to be independent of a specific computing [[Computer architecture|system architecture]]. This facilitates executing a program written in such a language on any computing system with a compatible interpreter or [[Just-in-time compilation|JIT]] compiler. High-level languages can be improved as their designers develop enhancements. In other cases, new high-level languages evolve from one or more others with the goal of aggregating the most popular constructs with new or improved features. An example of this is [[Scala (programming language)|Scala]], which maintains backward compatibility with [[Java (programming language)|Java]], meaning that programs and libraries written in Java remain usable even if a programming shop switches to Scala; this makes the transition easier and extends the lifespan of such high-level code indefinitely. In contrast, low-level programs rarely survive beyond the [[Computer architecture|system architecture]] for which they were written without major revision. This is the engineering trade-off for the abstraction penalty.
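For illustration (a minimal sketch; the object name and strings are invented for this example), the following Scala program uses Java standard-library classes directly, without any wrapper or binding code:
<syntaxhighlight lang="scala">
// Illustrative sketch only: existing Java classes used unchanged from Scala.
import java.time.LocalDate
import java.util.ArrayList

object JavaInteropExample {
  def main(args: Array[String]): Unit = {
    // A Java collection class created and populated from Scala.
    val notes = new ArrayList[String]()
    notes.add("Written against the Java standard library")
    notes.add("Compiled and run as ordinary Scala code")

    // A Java date/time class called directly.
    val today = LocalDate.now()

    for (i <- 0 until notes.size()) {
      println(s"${notes.get(i)} ($today)")
    }
  }
}
</syntaxhighlight>
Because Scala compiles to Java bytecode, such code interoperates with existing Java libraries without modification.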
== Relative meaning ==