{{Short description|Algorithms for compressing in-memory data}}
{{Use dmy dates|date=May 2019|cs1-dates=y}}
{{Use list-defined references|date=December 2021}}
'''Virtual memory compression''' (also referred to as '''RAM compression''' and '''memory compression''') is a [[memory management]] technique that utilizes [[data compression]] to reduce the size or number of [[paging]] requests to and from [[auxiliary storage]].<ref name="CaseForCompressedCaching"/> In a virtual memory compression system, pages to be paged out of virtual memory are compressed and stored in [[physical memory]], which is usually [[random-access memory]] (RAM), or sent in compressed form to auxiliary storage such as a [[hard disk drive]] (HDD) or [[solid-state drive]] (SSD). In both cases the [[virtual memory]] range whose contents have been compressed is marked inaccessible, so that attempts to access compressed pages trigger [[page fault]]s and a reversal of the process (retrieval from auxiliary storage and decompression). The footprint of the data being paged is reduced by the compression process; in the first case, the freed RAM is returned to the available physical memory pool, while the compressed portion is kept in RAM. In the second case, the compressed data is sent to auxiliary storage, but the resulting I/O operation is smaller and therefore takes less time.<ref name="PAT-5559978"/><ref name="PAT-5785474"/>
In some implementations, including [[zswap]], [[zram]] and [[Helix Software Company]]’s [[Helix Hurricane|Hurricane]], the entire process is implemented in software. In other systems, such as IBM's MXT, the compression process occurs in a dedicated processor that handles transfers between a local [[Cache (computing)|cache]] and RAM.
Virtual memory compression is distinct from [[garbage collection (computer science)|garbage collection]] (GC) systems, which remove unused memory blocks and in some cases consolidate used memory regions, reducing fragmentation and improving efficiency. Virtual memory compression is also distinct from [[context switching]] systems, such as [[Connectix]]'s [[RAM Doubler]] (though it also did online compression) and Apple OS 7.1, in which inactive processes are suspended and then compressed as a whole.<ref name="CWORLD-RD2"/>
==Types==
There are two general types of virtual memory compression: (1) sending compressed pages to a swap file in main memory, possibly with a backing store in auxiliary storage,<ref name="CaseForCompressedCaching"/><ref name="zram_kernel_org">{{cite web |url=https://www.kernel.org/doc/html/next/admin-guide/blockdev/zram.html |title=zram: Compressed RAM-based block devices |last=Gupta |first=Nitin |website=docs.kernel.org |publisher=The kernel development community |access-date=2023-12-29}}</ref><ref name="zswap_kernel_org">{{cite web |url=https://www.kernel.org/doc/html/v4.18/vm/zswap.html |title=zswap |website=www.kernel.org |publisher=The kernel development community |access-date=2023-12-29}}</ref> and (2) storing compressed pages side-by-side with uncompressed pages.<ref name="CaseForCompressedCaching"/>
The first type (1) usually uses an [[LZ77_and_LZ78|LZ]]-class dictionary compression algorithm combined with [[entropy coding]], such as [[Lempel–Ziv–Oberhumer|LZO]] or [[LZ4_(compression_algorithm)|LZ4]],<ref name="zswap_kernel_org" /><ref name="zram_kernel_org" /> to compress the pages being swapped out. Once compressed, the pages are either stored in a swap file in main memory or written to auxiliary storage, such as a hard disk.<ref name="zswap_kernel_org" /><ref name="zram_kernel_org" /> A two-stage scheme can be used instead, in which there is both a swap file in main memory and a backing store in auxiliary storage: pages evicted from the in-memory swap file are written to the backing store as compressed data, so each write is smaller and takes less time. This scheme combines the benefits of the two previous methods: fast in-memory access to swapped-out data, a large increase in the total amount of data that can be swapped out, and a higher effective bandwidth (pages per second) when writing pages to auxiliary storage.<ref name="zswap_kernel_org" /><ref name="zram_kernel_org" /><ref name="CaseForCompressedCaching"/>
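A minimal C sketch of such a type (1) scheme is shown below; the pool size, the function and variable names, and the use of [[LZ4 (compression algorithm)|LZ4]] as the compressor are illustrative assumptions rather than details of any of the cited implementations:

<syntaxhighlight lang="c">
/* Illustrative sketch only (not taken from any cited implementation): a page
 * being swapped out is LZ4-compressed into a small in-memory pool; when the
 * pool is full or the page is incompressible, the raw page falls back to the
 * backing store. All names and sizes here are hypothetical. */
#include <lz4.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE  4096
#define POOL_SLOTS 1024                 /* capacity of the in-memory swap pool */

static struct { char *data; int len; } pool[POOL_SLOTS];
static int pool_used;

static int backing_store_write(const char *page) {
    /* stand-in for writing the raw page to disk swap */
    (void)page;
    return 0;
}

/* Returns 0 if the page was kept compressed in RAM, 1 if it went to disk. */
int swap_out_page(const char *page) {
    char buf[LZ4_COMPRESSBOUND(PAGE_SIZE)];
    int clen = LZ4_compress_default(page, buf, PAGE_SIZE, (int)sizeof buf);

    if (clen > 0 && clen < PAGE_SIZE && pool_used < POOL_SLOTS) {
        pool[pool_used].data = malloc(clen);
        memcpy(pool[pool_used].data, buf, clen);
        pool[pool_used].len = clen;     /* compressed copy stays in RAM */
        pool_used++;
        return 0;
    }
    backing_store_write(page);          /* incompressible page or pool full */
    return 1;
}
</syntaxhighlight>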
One example of a class of algorithms for type (2) virtual memory compression is the WK (Wilson–Kaplan et al.) class of compression algorithms. These take advantage of regularities that in-memory data exhibits in pointers and integers.<ref name="CaseForCompressedCaching"/><ref name="SIMPSON" /> Specifically, in the data segment of target code generated by most high-level programming languages (the WK algorithms are not suitable for compressing instructions<ref name="CaseForCompressedCaching"/>), both integers and pointers are often present in records whose elements are word-aligned. Furthermore, the values stored in integers are usually small, and pointers that are close to each other in memory tend to point to locations that are themselves nearby in memory. Additionally, common data patterns such as an all-zero word can be encoded in the compressed output by a very small code (two bits in the case of WKdm). Exploiting these regularities, the WK class of algorithms uses a very small dictionary (16 entries in the case of [[WKdm]]) to achieve up to a 2:1 compression ratio while achieving much greater speed and lower overhead than LZ-class dictionary compression schemes.<ref name="CaseForCompressedCaching"/><ref name="SIMPSON" />
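The following simplified C sketch illustrates the word-level classification on which WK-class algorithms are based; the hash function, dictionary update policy and the omission of output packing are illustrative assumptions, and the actual WKdm implementation differs in detail:

<syntaxhighlight lang="c">
/* Illustrative sketch of WK-style word classification (not the actual WKdm
 * code). Each 32-bit word is tagged against a 16-entry direct-mapped
 * dictionary; a real compressor would also pack the tags, dictionary indexes,
 * low bits of partial matches, and missed words into the output. */
#include <stdint.h>
#include <stddef.h>

enum wk_tag { WK_ZERO, WK_EXACT, WK_PARTIAL, WK_MISS };  /* 2-bit tags */

#define DICT_SIZE 16
#define HIGH_BITS(w) ((w) & 0xFFFFFC00u)   /* upper 22 bits of a 32-bit word */

void wk_classify(const uint32_t *page, size_t nwords, enum wk_tag *tags)
{
    uint32_t dict[DICT_SIZE] = {0};

    for (size_t i = 0; i < nwords; i++) {
        uint32_t w = page[i];
        unsigned slot = (w >> 10) % DICT_SIZE;   /* simple hash on high bits */

        if (w == 0)
            tags[i] = WK_ZERO;                   /* all-zero word: 2-bit code */
        else if (dict[slot] == w)
            tags[i] = WK_EXACT;                  /* full match: tag + index */
        else if (HIGH_BITS(dict[slot]) == HIGH_BITS(w))
            tags[i] = WK_PARTIAL;                /* high bits match: emit low bits */
        else
            tags[i] = WK_MISS;                   /* no match: emit whole word */
        dict[slot] = w;                          /* update dictionary entry */
    }
}
</syntaxhighlight>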
==Benefits==
By reducing the I/O activity caused by paging requests, virtual memory compression can produce overall performance improvements. The degree of performance improvement depends on a variety of factors, including the availability of any compression co-processors, spare bandwidth on the CPU, speed of the I/O channel, speed of the physical memory, and the compressibility of the physical memory contents.
On multi-core, multithreaded CPUs, some benchmarks show performance improvements of over 50%.<ref name="zswap-bench"/><ref name="ZRAM-BENCH"/>
In some situations, such as in [[embedded device]]s, auxiliary storage is limited or non-existent. In these cases, virtual memory compression can allow a virtual memory system to operate where it would otherwise have to be disabled.
==Shortcomings==
{{More citations needed section}}
===Low compression ratios===
One of the primary issues is the degree to which the contents of physical memory can be compressed under real-world loads. If the achievable compression ratio is low, the memory saved may not outweigh the processing and memory overhead of the compression system.
===Background I/O===
In order for virtual memory compression to provide measurable performance improvements, the throughput of the virtual memory system must be higher than that of the uncompressed equivalent.
===Increased thrashing===
The physical memory used by a compression system reduces the amount of physical memory available to [[Process (computing)|processes]] that a system runs, which may result in increased paging activity and reduced overall effectiveness of virtual memory compression. This relationship between the paging activity and available physical memory is roughly exponential, meaning that reducing the amount of physical memory available to system processes results in an exponential increase of paging activity.<ref name="DENNING"/>
In circumstances where the amount of free physical memory is low and paging is fairly prevalent, any performance gains provided by the compression system (compared to paging directly to and from auxiliary storage) may be offset by an increased [[page fault]] rate that leads to [[thrashing (computer science)|thrashing]] and degraded system performance.
For example, in order to maximize the use of a compressed pages cache, [[Helix Software Company]]'s Hurricane 2.0 provided a configurable compression-rejection threshold: by test-compressing only the first portion of a page, it estimated whether the page was compressible enough to be worth keeping in the compressed cache; pages below the threshold were sent to auxiliary storage through the normal paging system instead.<ref name="PCMAG-HURR-2"/>
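The test-compression idea can be sketched as follows in C; the sample size, the threshold handling and the use of LZ4 (rather than Hurricane's Lempel–Ziv–Stac) are illustrative assumptions, not details of the Hurricane product:

<syntaxhighlight lang="c">
/* Illustrative sketch of test-compression (hypothetical, not the Hurricane
 * code): compress only a small prefix of the page to decide whether
 * compressing and caching the whole page is worthwhile. */
#include <lz4.h>

#define PAGE_SIZE   4096
#define SAMPLE_SIZE 512            /* prefix used for the trial compression */

/* Returns nonzero if the page looks compressible enough to keep in the
 * compressed cache, given a required ratio such as 2 (i.e. 2:1). */
int worth_compressing(const char *page, int required_ratio)
{
    char buf[LZ4_COMPRESSBOUND(SAMPLE_SIZE)];
    int clen = LZ4_compress_default(page, buf, SAMPLE_SIZE, (int)sizeof buf);

    return clen > 0 && clen * required_ratio <= SAMPLE_SIZE;
}
</syntaxhighlight>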
===CPU utilization overhead===
In software implementations, compression and decompression consume CPU cycles that would otherwise be available to applications. In hardware implementations, the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression.
===Prioritization===
In a typical virtual memory implementation, paging happens on a [[least recently used]] basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest priority data.
==History==
Virtual memory compression has gone in and out of favor as a technology.
===Origins===
[[Acorn Computers]]' Unix variant, [[RISC iX]], was supplied as the primary operating system for its R140 workstation released in 1989.<ref name="acornuser198912">{{cite magazine | url=https://archive.org/details/AcornUser089-Dec89/page/n67/mode/2up | magazine=Acorn User | title=Power to the People | last1=Cox | first1=James | date=December 1989 | access-date=6 September 2020 | pages=66-67,69,71}}</ref> RISC iX provided support for demand paging of compressed executable files. However, the principal motivation for providing compressed executable files was to accommodate a complete Unix system in a hard disk of relatively modest size. Compressed data was not paged out to disk under this scheme.<ref name="taunton1991">{{cite book |chapter-url=https://archive.org/details/1991-proceedings-tech-conference-nashville/page/385/mode/1up | title=Proceedings of the Summer 1991 USENIX Conference, Nashville, TN, USA, June 1991 | publisher=USENIX Association | year=1991 | last1=Taunton | first1=Mark | chapter=Compressed Executables: An Exercise in Thinking Small | pages=385–404}}</ref><ref name="taunton1991_unix_internals">{{cite newsgroup | title=Compressed executables | date=22 January 1991 | access-date=10 October 2020 | last1=Taunton | first1=Mark | newsgroup=comp.unix.internals | message-id=4743@acorn.co.uk | url=https://groups.google.com/d/msg/comp.unix.internals/mGP6CTNdfDI/4NKaA4_rIxgJ}}</ref>
Paul R. Wilson proposed compressed caching of virtual memory pages in 1990, in a paper circulated at the ACM OOPSLA/ECOOP '90 Workshop on Garbage Collection ("Some Issues and Strategies in Heap Management and Memory Hierarchies"), and appearing in ACM SIGPLAN Notices in January 1991.<ref name ="WilsonIssuesStrategies"/>
[[Helix Software Company]] pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year.<ref name="PAT-5559978"/> In 1994 and 1995, Helix refined the process using test-compression and secondary memory caches on video cards and other devices.<ref name="PAT-5785474"/> However, Helix did not release a product incorporating virtual memory compression until July 1996 and the release of Hurricane 2.0, which used the [[Stac Electronics]] [[Lempel–Ziv–Stac]] compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.<ref name="PCMAG-HURR-2"/>
In 1995, RAM cost nearly $50 per [[megabyte]], and [[Microsoft]]'s [[Windows 95]] listed a minimum requirement of 4&nbsp;MB of RAM.<ref name="WIN95-REQ"/> Due to the high cost of RAM, several programs were released that claimed to use compression technology to gain "memory". Most notorious was the SoftRAM program from Syncronys Softcorp, which was revealed to be "placebo software" that included no compression technology at all.<ref name="SoftRAM"/> Other products, including Hurricane and MagnaRAM, included virtual memory compression but implemented only [[run-length encoding]], with poor results, giving the technology a negative reputation.<ref name="PCMAG-PERF"/>
In its 8 April 1997 issue, ''[[PC Magazine]]'' published a comprehensive test of the performance-enhancement claims of several software virtual memory compression tools. In its testing, ''PC Magazine'' found only a minimal performance improvement from the use of Hurricane, and none at all from the other packages.<ref name="PCMAG-PERF"/>
In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT).<ref name="IBM-MXT-NEWS"/><ref name="IBM-MXT-PAPERS"/>
===Recent developments===
* In early 2008, a [[Linux]] project named [[zram]] (originally called compcache) was released; in a 2013 update, it was incorporated into [[ChromeOS]] and [[Android (operating system)|Android]] 4.4.<ref name="zram-google-page"/>
* In 2010, IBM released Active Memory Expansion (AME) for [[AIX]] 6.1, which implements virtual memory compression.<ref name="IBM-AIX-AME"/>
* In 2012, some versions of the [[POWER7]]+ chip included hardware accelerators for AME memory compression, for use with AIX.<ref name="IBM-POWER7+"/>
* In December 2012, the [[zswap]] project was announced; it was merged into the [[Linux kernel mainline]] in September 2013.
* In June 2013, Apple announced that it would include virtual memory compression in [[OS X Mavericks]], using the WKdm algorithm.<ref name="Arstechnica"/><ref name="Willson_Usenix"/>
* In August 2015, [[Microsoft]] announced that [[Windows 10]] (Insider Preview Build 10525) would include memory compression in its memory manager.<ref name="Aul_2015"/>
==See also==
* [[Disk compression]]
* [[Swap partitions on SSDs#Swap partitions|Swap partitions on SSDs]]
==References==
{{Reflist|refs=
<ref name="WilsonIssuesStrategies">
{{cite journal |author-last=Wilson |author-first=Paul R. |title=Some Issues and Strategies in Heap Management and Memory Hierarchies |journal=ACM SIGPLAN Notices |date=1991 |volume=26 |issue=3 |pages=45–52 |doi=10.1145/122167.122173|s2cid=15404854 }}</ref>
<ref name="PAT-5559978">{{cite patent|country=US|number=5559978|pubdate=1996-09-24|title=Method for increasing the efficiency of a virtual memory system by selective compression of RAM memory contents|assign1=[[Helix_Software_Company|Helix Software Co., Inc.]]|inventor1-last=Spilo|inventor1-first=Michael L.}}</ref>
<ref name="PAT-5785474">{{cite patent|country=US|number=5875474|pubdate=1999-02-23|title=Method for caching virtual memory paging and disk input/output requests using off screen video memory|assign1=[[Helix_Software_Company|Helix Software Co., Inc.]]|inventor1-last=Fabrizio|inventor1-first=Daniel|inventor2-last=Spilo|inventor2-first=Michael L.}}</ref>
<ref name="CaseForCompressedCaching">{{cite conference |url=https://www.usenix.org/legacy/event/usenix99/full_papers/wilson/wilson.pdf |title=The Case for Compressed Caching in Virtual Memory Systems |author-last1=Wilson |author-first1=Paul R. |author-last2=Kaplan |author-first2=Scott F. |author-last3=Smaragdakis |author-first3=Yannis |date= 1999-06-06 |conference=USENIX Annual Technical Conference |___location=Monterey, California, USA |pages=101–116}}</ref>
<ref name="SIMPSON">{{cite web |author-last=Simpson |author-first=Matthew |title=Analysis of Compression Algorithms for Program Data |date=2014 |url=http://www.ece.umd.edu/~barua/matt-compress-tr.pdf |access-date=2015-01-09 |pages=4-14}}</ref>
<ref name="RIZZO">{{cite journal |author-last=Rizzo |author-first=Luigi |title=A very fast algorithm for RAM compression |journal=ACM SIGOPS Operating Systems Review |date=1996 |volume=31 |issue=2 |url=http://dl.acm.org/citation.cfm?id=250012 |access-date=2015-01-09 |page=8|doi=10.1145/250007.250012 |s2cid=18563587 |url-access=subscription }}</ref>
<ref name="DENNING">{{cite journal |author-last=Denning |author-first=Peter J. |title=Thrashing: Its causes and prevention |journal=Proceedings AFIPS, Fall Joint Computer Conference |date=1968 |url=http://www.cs.uwaterloo.ca/~brecht/courses/702/Possible-Readings/vm-and-gc/thrashing-denning-afips-1968.pdf |access-date=2015-01-05 |page=918 |volume=33}}</ref>
<ref name="FREEDMAN">{{cite web |author-last=Freedman |author-first=Michael J. |title=The Compression Cache: Virtual Memory Compression for Handheld Computers |url=http://www.cs.princeton.edu/~mfreed//docs/6.033/compression.pdf |date=2000-03-16 |access-date=2015-01-09}}</ref>
<ref name="CWORLD-RD2">{{cite journal |url=https://books.google.com/books?id=BUaIcc6lsdwC&pg=PA56 |title=Mac Memory Booster Gets an Upgrade |journal=[[Computerworld]] |publisher=IDG Enterprise |date=9 September 1996 |issn=0010-4841 |volume=30 |number =37 |page=56 |access-date=2015-01-12}}</ref>
<ref name="PCMAG-HURR-2">{{cite journal |url=https://books.google.com/books?id=7WGv1D0tOVYC&pg=PA48 |title=Hurricane 2.0 Squeezes the Most Memory from Your System |journal=[[PC Magazine]] |date=1996-10-08 |access-date=2015-01-01}}</ref>
<ref name="PCMAG-PERF">{{cite journal |url=https://books.google.com/books?id=8RSHdk84u50C&pg=RA1-PA165 |title=Performance Enhancers |journal=[[PC Magazine]] |date=1997-04-08 |access-date=2015-01-01}}</ref>
<ref name="SoftRAM">{{cite journal |url=https://books.google.com/books?id=XcEKP0ml18EC&pg=PA34 |title=SoftRAM Under Scruitny |journal=[[PC Magazine]] |date=1996-01-23 |access-date=2015-01-01}}</ref>
<ref name="IBM-MXT-PERF">{{cite web |url=http://www.kkant.net/papers/caecw.doc |title=An Evaluation of Memory Compression Alternatives |author-first=Krishna |author-last=Kant |publisher=[[Intel Corporation]] |date=2003-02-01 |access-date=2015-01-01}}</ref>
<ref name="IBM-MXT-NEWS">{{cite web |url=http://www-03.ibm.com/press/us/en/pressrelease/1653.wss |archive-url=https://web.archive.org/web/20130622050529/http://www-03.ibm.com/press/us/en/pressrelease/1653.wss |url-status=dead |archive-date=22 June 2013 |title=IBM Research Breakthrough Doubles Computer Memory Capacity |publisher=[[IBM]] |date=2000-06-26 |access-date=2015-01-01}}</ref>
<ref name="IBM-MXT-PAPERS">{{cite web |url=http://researcher.watson.ibm.com/researcher/view_group_pubs.php?grp=2917 |title=Memory eXpansion Technologies |publisher=[[IBM]] |access-date=2015-01-01}}</ref>
<ref name="zswap-bench">{{cite web |url=https://events.linuxfoundation.org/sites/events/files/slides/tmc_sjennings_linuxcon2013.pdf |title=Transparent Memory Compression in Linux |author-first=Seth |author-last=Jennings |website=linuxfoundation.org |access-date=2015-01-01 |archive-date=2015-01-04 |archive-url=https://web.archive.org/web/20150104214723/https://events.linuxfoundation.org/sites/events/files/slides/tmc_sjennings_linuxcon2013.pdf |url-status=dead }}</ref>
<ref name="zram-google-page">{{cite web |url=https://code.google.com/p/compcache/ |title=CompCache |publisher=Google code |access-date=2015-01-01}}</ref>
<ref name="IBM-AIX-AME">{{cite web |url=https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101633 |title=AIX 6.1 Active Memory Expansion |publisher=[[IBM]] |access-date=2015-01-01 |archive-date=2015-01-04 |archive-url=https://web.archive.org/web/20150104210445/https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101633 |url-status=dead }}</ref>
<ref name="IBM-POWER7+">{{cite web |url=http://www-05.ibm.com/cz/events/febannouncement2012/pdf/power_architecture.pdf |title=IBM Power Systems Hardware Deep Dive |publisher=[[IBM]] |access-date=2015-01-01 |archive-date=2015-01-04 |archive-url=https://web.archive.org/web/20150104205645/http://www-05.ibm.com/cz/events/febannouncement2012/pdf/power_architecture.pdf |url-status=dead }}</ref>
<ref name="ZRAM-BENCH">{{cite web |url=https://code.google.com/p/compcache/wiki/Performance |title=Performance numbers for compcache |access-date=2015-01-01}}</ref>
<ref name="WIN95-REQ">{{cite web |url=http://support.microsoft.com/kb/138349/en-us |title=Windows 95 Installation Requirements |publisher=[[Microsoft]] |access-date=2015-01-01}}</ref>
<ref name="Arstechnica">{{Cite web|url=https://arstechnica.com/apple/2013/10/os-x-10-9/17/#compressed-memory|title=OS X 10.9 Mavericks: The Ars Technica Review|date=22 October 2013}}</ref>
<ref name="Willson_Usenix">{{Cite web|url=https://www.usenix.org/legacy/publications/library/proceedings/usenix01/cfp/wilson/wilson_html/acc.html|title = The Case for Compressed Caching in Virtual Memory Systems}}</ref>
<ref name="Aul_2015">{{cite web |author-last=Aul |author-first=Gabe |url=https://blogs.windows.com/windows-insider/2015/08/18/announcing-windows-10-insider-preview-build-10525/ |title=Announcing Windows 10 Insider Preview Build 10525 |work=Windows Insider Blog |publisher=[[Microsoft]] |date=2015-08-18 |access-date=2024-08-03}}</ref>
<ref name="Paul_1997_NWDOSTIP">{{cite book |title=NWDOS-TIPs — Tips & Tricks rund um Novell DOS 7, mit Blick auf undokumentierte Details, Bugs und Workarounds |chapter=Kapitel II.18. Mit STACKER Hauptspeicher 'virtuell' verdoppeln… |language=de |trans-title=NWDOS-TIPs — Tips & tricks for Novell DOS 7, with a focus on undocumented details, bugs and workarounds |trans-chapter=Utilizing STACKER to 'virtually' double main memory… |author-first=Matthias R. |author-last=Paul |date=1997-07-30 |orig-year=1996-04-14 |edition=3 |version=Release 157 |url=http://www.antonis.de/dos/dos-tuts/mpdostip/html/nwdostip.htm |access-date=2012-01-11 |url-status=live |archive-url=https://web.archive.org/web/20161105172944/http://www.antonis.de/dos/dos-tuts/mpdostip/html/nwdostip.htm |archive-date=2016-11-05}}</ref>
}}
{{Memory management navbox}}
{{Operating system}}