Virtual memory compression: Difference between revisions

adding quantization
Line 21:
 
===Low compression ratios===
One of the primary issues is the degree to which the contents of physical memory can be compressed under real-world loads. Program code and much of the data held in physical memory are often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets. Various studies show typical [[data compression ratio]]s ranging from 2:1 to 2.5:1 for program data,<ref name="SIMPSON"/><ref name="RIZZO"/> similar to the compression ratios typically achievable with [[disk compression]].<ref name="Paul_1997_NWDOSTIP"/>
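A minimal sketch of how such a ratio is measured, assuming Python's standard <code>zlib</code> module and a synthetic page-sized buffer (deliberately repetitive, so it compresses far better than typical real workload data):

<syntaxhighlight lang="python">
import zlib

# Synthetic buffer standing in for "program data"; real in-memory
# contents are much less redundant and compress less predictably.
page = b"struct record { int id; float value; };" * 100   # ~4 KB

compressed = zlib.compress(page, 6)

# Compression ratio = uncompressed size / compressed size.
ratio = len(page) / len(compressed)
print(f"{len(page)} -> {len(compressed)} bytes, ratio {ratio:.1f}:1")
</syntaxhighlight>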
 
===Background I/O===
Line 38:
===Prioritization===
In a typical virtual memory implementation, paging happens on a [[least recently used]] basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest-priority data. Furthermore, program code is usually read-only and is therefore never paged out. Instead, code is simply discarded and reloaded from the program's auxiliary storage file if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.
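The distinction can be illustrated with the following sketch, which uses a hypothetical <code>Page</code> descriptor (the names are illustrative and not drawn from any particular operating system): clean, read-only code pages are discarded on eviction, while only writable data pages are compressed into an in-memory pool.

<syntaxhighlight lang="python">
import zlib
from collections import OrderedDict

# Hypothetical page descriptor; the field names are illustrative only.
class Page:
    def __init__(self, data: bytes, read_only: bool):
        self.data = data            # page contents
        self.read_only = read_only  # True for code backed by a program file

compressed_pool = {}  # page number -> compressed bytes kept in RAM

def evict(lru_pages: OrderedDict):
    """Evict the least recently used page.

    Read-only code is discarded outright, since it can be reloaded from
    the program file; only writable data is worth compressing.
    """
    page_no, page = lru_pages.popitem(last=False)  # oldest entry first
    if page.read_only:
        return  # discard; reload from auxiliary storage on the next fault
    compressed_pool[page_no] = zlib.compress(page.data)

# Example: two resident pages, the older one being read-only code.
resident = OrderedDict()
resident[0] = Page(b"\x90" * 4096, read_only=True)       # code page
resident[1] = Page(b"user data " * 400, read_only=False)  # data page
evict(resident)  # code page is dropped, nothing is compressed
evict(resident)  # data page is compressed into the pool
</syntaxhighlight>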
 
==Compression using quantization==
Accelerator designers exploit quantization to reduce the bitwidth of data values, trading numerical precision for a smaller memory footprint.
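A minimal sketch of the idea, assuming plain uniform quantization from 32-bit floating point to 8-bit integer codes (actual accelerator schemes vary and are often more elaborate):

<syntaxhighlight lang="python">
def quantize_uniform(values, bits=8):
    """Uniformly quantize floats to unsigned integer codes of `bits` bits.

    Storage per value drops from 32 bits to `bits` bits, at the cost of
    rounding error; the scale and offset are kept so approximate values
    can be reconstructed later.
    """
    lo, hi = min(values), max(values)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((v - lo) / scale) for v in values]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return [c * scale + lo for c in codes]

activations = [0.1, -0.8, 0.33, 0.9]           # hypothetical 32-bit values
codes, scale, lo = quantize_uniform(activations)
approx = dequantize(codes, scale, lo)          # lossy reconstruction
print(f"{len(activations) * 4} bytes as float32 -> {len(codes)} bytes as int8")
</syntaxhighlight>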