Virtual memory compression: Difference between revisions

There are two general types of virtual memory compression: (1) sending compressed pages to a swap file in main memory, possibly with a backing store in auxiliary storage,<ref name="CaseForCompressedCaching"/><ref name="zram_kernel_org">{{cite web |url=https://www.kernel.org/doc/html/next/admin-guide/blockdev/zram.html |title=zram: Compressed RAM-based block devices |last=Gupta |first=Nitin |website=docs.kernel.org |publisher=The kernel development community |access-date=2023-12-29 }}</ref><ref name="zswap_kernel_org">{{cite web |url=https://www.kernel.org/doc/html/v4.18/vm/zswap.html |title=zswap |website=www.kernel.org |publisher=The kernel development community |access-date=2023-12-29 }}</ref> and (2) storing compressed pages side-by-side with uncompressed pages.<ref name="CaseForCompressedCaching"/>
 
The first type (1) usually uses some sort of [[LZ77_and_LZ78|LZ]] class dictionary compression algorithm combined with [[entropy coding]], such as [[Lempel–Ziv–Oberhumer|LZO]] or [[LZ4_(compression_algorithm)|LZ4]],<ref name="zswap_kernel_org" /><ref name="zram_kernel_org" /> to compress the pages being swapped out. Once compressed, they are either stored in a swap file in main memory, or written to auxiliary storage, such as a hard disk.<ref name="zswap_kernel_org" /><ref name="zram_kernel_org" /> A two-stage process can be used instead, wherein there exists both a backing store in auxiliary storage and a swap file in main memory, and pages that are evicted from the in-memory swap file are written to the backing store with a much increased write bandwidth (e.g. pages/sec) so that writing to the backing store takes less time. This last scheme leverages the benefits of both previous methods: fast in-memory data access, a large increase in the total amount of data that can be swapped out, and an increased bandwidth in writing pages (pages/sec) to auxiliary storage.<ref name="zswap_kernel_org" /><ref name="zram_kernel_org" /><ref name="CaseForCompressedCaching"/>
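The two-stage scheme described above can be sketched as follows. This is a minimal illustrative model, not a kernel implementation: Python's zlib stands in for LZO/LZ4, plain dictionaries stand in for the in-memory swap area and the auxiliary backing store, and the page size and in-memory swap budget are arbitrary.

```python
import zlib
from collections import OrderedDict

PAGE_SIZE = 4096
SWAP_BUDGET = 2 * PAGE_SIZE       # illustrative in-memory compressed-swap capacity

compressed_swap = OrderedDict()   # in-memory swap area: page_id -> compressed bytes
backing_store = {}                # stand-in for auxiliary storage (e.g. a hard disk)

def swap_out(page_id: int, page: bytes) -> None:
    """Compress a page into the in-memory swap area, evicting the oldest
    compressed pages to the backing store once the budget is exceeded.
    Evicted pages stay compressed, so fewer bytes are written per page."""
    compressed_swap[page_id] = zlib.compress(page)  # zlib stands in for LZO/LZ4
    while sum(len(c) for c in compressed_swap.values()) > SWAP_BUDGET:
        old_id, old_data = compressed_swap.popitem(last=False)
        backing_store[old_id] = old_data

def swap_in(page_id: int) -> bytes:
    """Fetch a page from the in-memory swap area if present,
    otherwise from the backing store, and decompress it."""
    data = compressed_swap.pop(page_id, None)
    if data is None:
        data = backing_store.pop(page_id)
    return zlib.decompress(data)

# A page of mostly zeroes compresses well, so many such pages
# fit in the in-memory budget before anything reaches the backing store.
page = bytes(PAGE_SIZE)
swap_out(1, page)
assert swap_in(1) == page
```

Because pages are compressed once, at swap-out time, eviction from the in-memory area to auxiliary storage moves already-compressed data, which is what reduces the time spent writing to the backing store.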
 
One example of a class of algorithms for type (2) virtual memory compression is the WK (Wilson–Kaplan et al.) class of compression algorithms. These take advantage of regularities present in in-memory data such as pointers and integers.<ref name="CaseForCompressedCaching"/><ref name="SIMPSON" /> Specifically, in the data segment of target code generated by most high-level programming languages (the WK algorithms are not suitable for instruction compression<ref name="CaseForCompressedCaching"/>), both integers and pointers are often present in records whose elements are word-aligned. Furthermore, the values stored in integers are usually small, and pointers that are close to each other in memory tend to point to locations that are themselves nearby in memory. Additionally, common data patterns, such as a word of all zeroes, can be encoded in the compressed output by a very small code (two bits in the case of WKdm). Using these data regularities, the WK class of algorithms uses a very small dictionary (16 entries in the case of [[WKdm]]) to achieve up to a 2:1 compression ratio while achieving much greater speeds and having less overhead than LZ class dictionary compression schemes.<ref name="CaseForCompressedCaching"/><ref name="SIMPSON" />
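The effect of these regularities can be modeled with a simplified WK-style pass that only counts output bits. The tag assignments, partial-match width, and the hash used to index the 16-entry dictionary below are illustrative assumptions, not the exact WKdm encoding.

```python
import struct

DICT_SIZE = 16   # WKdm-style 16-entry dictionary

def wk_compressed_bits(page: bytes) -> int:
    """Model the output size (in bits) of a simplified WK-style compressor.
    Tags (2 bits each): all-zero word; exact dictionary match;
    partial match on the upper 22 bits; miss (full 32-bit literal)."""
    dictionary = [0] * DICT_SIZE
    bits = 0
    for (word,) in struct.iter_unpack("<I", page):
        index = (word >> 10) % DICT_SIZE   # hypothetical hash on the upper bits
        if word == 0:
            bits += 2                      # zero word: tag only
        elif dictionary[index] == word:
            bits += 2 + 4                  # tag + 4-bit dictionary index
        elif dictionary[index] >> 10 == word >> 10:
            bits += 2 + 4 + 10             # tag + index + low 10 literal bits
            dictionary[index] = word
        else:
            bits += 2 + 32                 # miss: tag + full literal word
            dictionary[index] = word
    return bits

# A page of word-aligned zeroes and nearby "pointer" values compresses well:
page = struct.pack("<1024I", *([0] * 512 + list(range(0x1000, 0x1000 + 512))))
ratio = (len(page) * 8) / wk_compressed_bits(page)
```

On data with many zero words and clustered pointer values, this model easily exceeds a 2:1 ratio, while a page of unrelated word values degenerates to mostly 34-bit miss encodings.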
 
==Benefits==