==Shortcomings==
{{More references|section|date=January 2015}}
 
One of the primary issues is the degree to which the contents of primary storage can be compressed under real-world loads. Program code and much of the data held in primary storage are often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets. For virtual memory compression to provide measurable performance improvements, the throughput of the virtual memory system must exceed that of the uncompressed equivalent; thus, the processing overhead introduced by compression and decompression must not outweigh the I/O it avoids. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be substantial.
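The break-even point described above can be illustrated with a back-of-the-envelope model. The timings, and the fraction of evicted pages later found in the compressed cache (<code>hit_rate</code>), are hypothetical parameters, not figures from the sources:

```python
PAGE_SIZE = 4096  # bytes in a typical page

def effective_page_cost(disk_io_ms, compress_ms, decompress_ms, hit_rate):
    """Average cost (ms) to evict one page and later restore it.

    hit_rate: assumed fraction of evicted pages that are found in the
    compressed RAM cache rather than having to come back from disk.
    """
    compressed_path = compress_ms + decompress_ms   # stays in RAM
    disk_path = 2 * disk_io_ms                      # write out, read back
    # Pages that miss the cache still paid the compression attempt.
    return hit_rate * compressed_path + (1 - hit_rate) * (compress_ms + disk_path)

# With slow disk I/O and cheap compression, the compressed path wins...
assert effective_page_cost(8.0, 0.05, 0.03, 0.9) < 2 * 8.0
# ...but when compression overhead rivals the I/O itself, it can lose.
assert effective_page_cost(0.1, 0.2, 0.1, 0.9) > 2 * 0.1
```

The model makes the text's point concrete: compression only pays when the avoided disk traffic costs more than the extra processing.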
 
Less obviously, the memory consumed by the compression system itself reduces the primary storage available to programs: as more primary storage is used to hold compressed data, less is available to running programs, the level of paging activity rises, and the effectiveness of the compression system is reduced.
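This trade-off can be sketched numerically. The sizes and the 2:1 average compression ratio below are assumed values for illustration only:

```python
# Dedicating RAM to a compressed-page cache shrinks the memory that
# programs can use directly (all figures are hypothetical).
total_ram = 256 * 1024 * 1024          # 256 MiB of primary storage
compressed_cache = 64 * 1024 * 1024    # RAM reserved for compressed pages
ratio = 2.0                            # assumed average compression ratio

usable_direct = total_ram - compressed_cache          # left for programs
effective_capacity = usable_direct + compressed_cache * ratio

# The cache raises total effective capacity only because ratio > 1; a
# working set that already fits in usable_direct gains nothing and, with
# less directly usable memory, may page more than before.
assert effective_capacity > total_ram
```

Whether the larger effective capacity outweighs the smaller directly usable memory depends entirely on the workload's working-set size, which is the tension the paragraph above describes.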
 
For example, in order to maximize the use of the primary-storage cache of compressed pages, [[Helix Software Company]]’s Hurricane 2.0 provided a user-configurable threshold that allowed the rejection level for compression to be adjusted. The program would compress the first 256 to 512 bytes of a 4&nbsp;KiB page; if that small region achieved the designated level of compression, the rest of the page would be compressed and retained in a primary storage buffer, while all other pages would be sent to auxiliary storage through the normal paging system. The default setting for this threshold was an 8:1 compression ratio.<ref name="PCMAG-HURR-2"/>
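The rejection test described above can be sketched as follows. This is a minimal illustration, with <code>zlib</code> standing in for Hurricane's actual (unspecified) compressor; the probe size and 8:1 threshold follow the figures in the text:

```python
import os
import zlib

PAGE_SIZE = 4096
PROBE_SIZE = 256        # sample the first 256 bytes of the page
THRESHOLD = 8.0         # default 8:1 rejection ratio cited in the text

def triage_page(page: bytes) -> str:
    """Compress a small prefix of the page; only if the probe meets the
    threshold is the whole page worth compressing and caching in RAM."""
    probe = zlib.compress(page[:PROBE_SIZE])
    if PROBE_SIZE / len(probe) >= THRESHOLD:
        return "compress-and-cache"   # keep compressed copy in primary storage
    return "page-out"                 # send through the normal paging path

zero_page = bytes(PAGE_SIZE)          # highly redundant: passes the probe
random_page = os.urandom(PAGE_SIZE)   # incompressible: rejected early
```

Probing a small prefix keeps the cost of rejecting an incompressible page low, which is the point of the threshold: full-page compression is attempted only when the sample suggests it will pay off.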
 
In hardware implementations, the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance differences of the various components tend to vary over time; for example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.