Virtual memory compression
 
==Shortcomings==
 
===Low compression ratios===
A primary issue is the degree to which the contents of primary storage can be compressed under real-world loads. Program code and much of the data held in primary storage are often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets.
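The effect can be illustrated with a general-purpose compressor (a minimal sketch using Python's zlib module; the two example pages are hypothetical stand-ins for dense, low-redundancy data versus highly redundant memory contents):

```python
import random
import zlib

random.seed(0)
PAGE_SIZE = 4096

# Hypothetical low-redundancy page: random bytes give the compressor
# almost nothing to exploit, so the "compressed" form is no smaller.
incompressible = bytes(random.randrange(256) for _ in range(PAGE_SIZE))

# Hypothetical highly redundant page, e.g. zero-filled heap memory.
compressible = b"\x00" * PAGE_SIZE

for name, page in (("random", incompressible), ("zero-filled", compressible)):
    packed = zlib.compress(page)
    print(f"{name}: {len(page)} -> {len(packed)} bytes "
          f"(ratio {len(page) / len(packed):.1f}:1)")
```

The random page typically compresses to slightly *more* than its original size, while the zero-filled page shrinks dramatically, which is why the achievable ratio depends so heavily on the workload.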
 
===Background I/O===
In order for virtual memory compression to provide any performance improvement, the throughput of the virtual memory system must be improved compared to the uncompressed equivalent. Thus, the overhead of compression must be lower than the cost of the paging it replaces under the same load. This can be difficult to achieve given that, in most instances, much of the paging I/O happens as a non-blocking background process, which allows other processes to continue. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be substantial.
 
===Increased thrashing===
Less obvious is the fact that the memory used by the compression system reduces the available system memory, causing a corresponding increase in overall paging activity. As more primary storage is used to store compressed data, less primary storage is available to programs, so the level of paging activity increases and the effectiveness of the compression system is reduced.
 
Since the relationship between paging activity and available memory is exponential, any gains in available memory tend to be offset by significant increases in [[thrashing]].<ref name="DENNING"/> Again, the more compressible the data, the more pronounced the performance improvement, because less primary storage is needed to hold the compressed data.
 
For example, in order to maximize the use of the primary storage cache of compressed pages, [[Helix Software Company]]’s Hurricane 2.0 provided a user-settable threshold which allowed adjustment of a rejection level for compression. The program would compress the first 256 to 512 bytes of a 4K page and, if that small region achieved the designated level of compression, the rest of the page would be compressed and retained in a primary storage buffer, while pages failing the test would be sent to secondary storage through the normal paging system. The default setting for this threshold was a compression ratio of 8:1.<ref name="PCMAG-HURR-2"/>
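The scheme described above can be sketched as follows (an illustrative Python sketch; the function names, data structures, and use of zlib are assumptions for the example, not Hurricane's actual implementation):

```python
import zlib

PAGE_SIZE = 4096
SAMPLE_SIZE = 256        # leading region of the page to test-compress
THRESHOLD_RATIO = 8.0    # rejection level; 8:1 was the described default

def should_keep_compressed(page: bytes) -> bool:
    """Test-compress only the first SAMPLE_SIZE bytes; the whole page is
    compressed and cached only if the sample meets the threshold ratio."""
    sample = page[:SAMPLE_SIZE]
    packed = zlib.compress(sample)
    return len(sample) / len(packed) >= THRESHOLD_RATIO

def page_out(page: bytes, ram_cache: list, disk: list) -> None:
    """Route a page either to the compressed in-memory cache or to the
    normal paging path, based on the sampled compression ratio."""
    if should_keep_compressed(page):
        ram_cache.append(zlib.compress(page))  # retain in primary storage
    else:
        disk.append(page)                      # send to secondary storage
```

Sampling only the start of the page keeps the rejection test cheap: pages that fail it cost one small compression attempt rather than a full-page compression that would then be discarded.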
 
===Price/performance of hardware===
In hardware implementations the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance differences of the various components tend to vary over time. For example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.
 
===Prioritization===
 
In a typical virtual memory implementation, paging happens on a [[least recently used]] basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest-priority data. Furthermore, program code is usually read-only and is therefore never paged out; instead, code is simply discarded and reloaded from the program’s file in secondary storage if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.
 
===Better alternatives===
Other performance enhancements, such as operating-system optimizations, on-CPU caching, and improvements in the speed of the I/O channel, reduce the amount of paging and improve the speed at which paging happens, potentially reducing or eliminating any advantage provided by compression technology.
 
==History==