Virtual memory compression
===Prioritization===
In a typical virtual memory implementation, paging happens on a [[least recently used]] basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest-priority data. Furthermore, program code is usually read-only and is therefore never paged out. Instead, code is simply discarded and re-loaded from the program's file on auxiliary storage if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.
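The distinction above can be sketched in a few lines of Python. This is a minimal illustration, not code from any real kernel; the `Page` class, `evict` function, and the "discard vs. compress" return values are hypothetical names chosen for clarity. It shows an LRU queue from which the least recently used page is evicted: a read-only (code) page is simply discarded, while a writable (data) page is handed to compression.

```python
from collections import OrderedDict


class Page:
    """Hypothetical page descriptor for illustration only."""
    def __init__(self, pid, read_only):
        self.pid = pid
        self.read_only = read_only  # code pages are typically read-only


def evict(lru):
    """Evict the least recently used page from an OrderedDict.

    Read-only (code) pages are discarded outright, since they can be
    re-read from the program's file on auxiliary storage.  Writable
    (data) pages are compressed instead of being written out.
    """
    pid, page = lru.popitem(last=False)  # front of queue = least recently used
    if page.read_only:
        return ("discard", pid)          # re-load from storage if needed later
    return ("compress", pid)             # hand to the compression pool


# Usage: insertion/touch order determines eviction order.
lru = OrderedDict()
lru[1] = Page(1, read_only=True)    # code page, least recently used
lru[2] = Page(2, read_only=False)   # data page
action, pid = evict(lru)            # evicts page 1, a code page, so: discard
```

Because the discarded code page costs only a re-read when it is next needed, compressing it would have to beat a single short I/O cycle, which is the "higher bar" the paragraph describes.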
 
===Better alternatives===
Other performance enhancements, such as optimizations within the operating system, larger on-CPU caches, and improvements in the speed of the I/O channel, can reduce the amount of paging or increase the speed at which paging happens, potentially reducing or eliminating any advantage provided by compression technology.
 
==History==