Virtual memory compression

===Price/performance issues===
In hardware implementations, the technology also relies on price differentials between the various components of the system, for example the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance of these components tends to vary over time: for example, the addition of a compression co-processor may have only minimal impact on the overall cost of a CPU.
 
===Prioritization===
In a typical virtual memory implementation, paging occurs on a [[least recently used]] (LRU) basis, so the compression algorithm spends CPU cycles on the lowest-priority data. Furthermore, program code is usually read-only and is therefore never paged out; instead, the code is simply discarded and re-loaded from the program's file in auxiliary storage if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.
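The distinction above can be sketched in code. The following is a minimal, hypothetical illustration (the class and method names are invented for this example, not taken from any real operating system): an LRU pager that compresses evicted dirty data pages into a RAM cache, but simply discards clean read-only code pages because they can be re-read from the program file later.

```python
from collections import OrderedDict
import zlib

class Page:
    def __init__(self, data, read_only):
        self.data = data
        self.read_only = read_only  # clean code pages can be discarded

class Pager:
    """Hypothetical sketch of LRU eviction with compression of dirty pages."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> Page, in LRU order
        self.compressed = {}         # page_id -> compressed bytes

    def touch(self, page_id, page):
        # Accessing a page makes it the most recently used.
        self.pages.pop(page_id, None)
        self.pages[page_id] = page
        while len(self.pages) > self.capacity:
            self._evict()

    def _evict(self):
        # Evict the least recently used page (front of the OrderedDict).
        victim_id, victim = self.pages.popitem(last=False)
        if victim.read_only:
            # Clean code page: drop it; it can be reloaded from storage.
            return
        # Dirty data page: keep a compressed copy in RAM instead of
        # writing it out to auxiliary storage.
        self.compressed[victim_id] = zlib.compress(victim.data)

pager = Pager(capacity=2)
pager.touch(0, Page(b"data" * 1024, read_only=False))
pager.touch(1, Page(b"\x90" * 4096, read_only=True))
pager.touch(2, Page(b"more" * 1024, read_only=False))  # evicts page 0: compressed
pager.touch(3, Page(b"text" * 1024, read_only=False))  # evicts page 1: discarded
```

In this toy model, evicting the read-only page costs almost nothing, while evicting a dirty page still pays for compression; this is the sense in which LRU eviction can spend compression cycles on the lowest-priority data.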
 
==History==