Self-modifying code: Difference between revisions

Regardless, at a [[meta-level]], programs can still modify their own behavior by changing data stored elsewhere (see [[metaprogramming]]) or via use of [[type polymorphism|polymorphism]].
 
===Interaction of cache and self-modifying code===
On architectures whose data and instruction caches are not kept coherent by the hardware (for example, some ARM and MIPS cores), cache synchronization must be performed explicitly by the modifying code: flush the data cache and invalidate the instruction cache for the modified memory area.
 
Most modern processors fetch machine code into a pipeline or prefetch queue before executing it. If an instruction close to the [[instruction pointer]] is modified, the processor may not notice the change and instead execute the instruction as it was ''before'' the modification. See [[prefetch input queue]] (PIQ). x86 processors must handle self-modifying code correctly for backward-compatibility reasons, but they do so at a considerable performance cost.{{Citation needed|date=March 2008}}
 
==={{anchor|Synthesis}}Massalin's Synthesis kernel===
The Synthesis [[kernel (computer science)|kernel]] presented in [[Alexia Massalin]]'s [[Doctor of Philosophy|Ph.D.]] thesis<ref name="Massalin_1992_Synthesis"/><ref name="Henson_2008"/> is a tiny [[Unix]] kernel that takes a [[structured programming|structured]], or even [[object-oriented programming|object-oriented]], approach to self-modifying code: code is generated at run time for individual [[quaject]]s, such as file handles. Generating code for specific tasks allows the Synthesis kernel to apply a number of [[Compiler optimization|optimization]]s, such as [[constant folding]] and [[common subexpression elimination]], much as a JIT compiler would.<!-- Anyone want to go read the thesis and see what other optimizations Massalin lists? -->
 
The Synthesis kernel was very fast, but it was written entirely in assembly. The resulting lack of portability has prevented Massalin's optimization ideas from being adopted by any production kernel. However, the structure of the techniques suggests that they could be captured by a higher-level [[programming language|language]], albeit one more complex than existing mid-level languages. Such a language and compiler could allow development of faster operating systems and applications.
 
[[Paul Haeberli]] and Bruce Karsh have objected to the "marginalization" of self-modifying code, and optimization in general, in favor of reduced development costs.<ref name="Haeberli_1994_GraficaObscura"/>
 
==Advantages==