==Assembly-style self-modifying code==
Self-modifying code in assembly language can serve various purposes:
# Optimisation of a state-dependent loop.
# Runtime code generation, or specialisation of an algorithm at runtime or load time (popular, for example, in the ___domain of real-time graphics).
# Altering the inlined state of an [[object]], simulating the high-level construction of [[closures]].
Types 2 and 3 are probably also the kinds most used in high-level languages, such as [[LISP]].
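Type 2 can be illustrated even in a high-level language. The sketch below (Python; all names are illustrative, not from the article) builds the source text of a specialised function at run time and compiles it, so that the coefficients of a polynomial become constants baked into the generated code.

```python
# Sketch of run-time code generation (type 2): specialise a
# polynomial-evaluation routine for a fixed coefficient list, so the
# generated function contains no loop and no coefficient lookups.
# Function and variable names are illustrative.

def specialise_poly(coeffs):
    """Generate and compile a function evaluating the polynomial
    with the given (now constant) coefficients via Horner's rule."""
    body = "0"
    for c in coeffs:                 # unroll Horner's rule into one expression
        body = f"({body}) * x + {c}"
    src = f"def poly(x):\n    return {body}\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["poly"]

poly = specialise_poly([2, 0, 1])    # 2*x^2 + 0*x + 1
print(poly(3))                       # 19
```

The same idea, done at the machine-code level, is what a run-time specialiser or a graphics inner-loop generator performs.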
The cache-invalidation issue on modern processors usually means that self-modifying code is still faster only when the modification occurs seldom, such as in the case of a state switching inside an inner loop. Consider the following pseudo-code example of type 1:
repeat N times {
Choosing this solution will, of course, depend on the value of 'N' and the frequency of state changes.
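The trade-off can be sketched with a high-level analogue (Python; names are illustrative). Instead of testing the state on every iteration, the "instruction" executed by the loop body is replaced only when the state actually changes, mimicking how assembly code would overwrite the instruction in place:

```python
# High-level analogue of type 1: rather than branching on the state
# every iteration, the operation is "patched" (rebound) only when the
# state changes.  In assembly the patch would overwrite the instruction
# itself; here a function reference stands in for it.

def increase(a):
    return a + 1

def decrease(a):
    return a - 1

operation = increase          # current "instruction" in the loop body
results = []
for i in range(6):
    if i == 3:                # seldom-occurring state switch
        operation = decrease  # patch the loop body once, not per iteration
    results.append(operation(10))

print(results)                # [11, 11, 11, 9, 9, 9]
```

If the state changed on nearly every iteration, the cost of re-patching would outweigh the saved branch, which is the point made above about 'N' and the change frequency.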
This consideration is not unique to processors with a code cache: on any processor, rewriting code never comes for free.
Some claim that the use of self-modifying code is not recommended when a viable alternative exists, because such code can be difficult to understand and maintain.
Others simply view self-modifying code as something one would do while editing code (in the above example, replacing a line or keyword), only done at run time.
In some cases self-modifying code executes slower on modern processors. This is because a modern processor will usually try to keep blocks of code in its [[cache]] memory. Each time the program rewrites a part of itself, the rewritten part must be loaded into the cache again, which results in a slight delay.
Another kind of self-modifying code is runtime code generation, or specialisation of an algorithm at runtime or load time (which is popular in the ___domain of real-time graphics), or the patching of subroutine call addresses, as is usually done at load time of [[dynamic libraries]]. Whether this is regarded as 'self-modifying code' is a matter of terminology.
Self-modifying code was used in the early days of computers in order to save memory space, which was limited. It was also used to implement [[subroutine]] calls and returns when the instruction set only provided simple branching or skipping instructions to vary the flow of control (this is still relevant in certain ultra-[[RISC]] architectures, at least theoretically, e.g. one such system has a sole branching instruction with three operands: subtract-and-branch-if-negative).