{{Use dmy dates|date=May 2019|cs1-dates=y}}
'''Virtual memory compression''' (also referred to as '''RAM compression''' and '''memory compression''') is a [[memory management]] technique that utilizes [[data compression]] to reduce the size or number of [[paging]] requests to and from the [[auxiliary storage]].<ref name="CaseForCompressedCaching"/> In a virtual memory compression system, paging requests are compressed and stored in [[physical memory]], which is usually [[random-access memory]] (RAM), or sent compressed to auxiliary storage such as a [[hard disk drive]] (HDD) or [[solid-state drive]] (SSD). In both cases the [[virtual memory]] range whose contents have been compressed during the paging request is marked inaccessible, so that attempts to access compressed pages trigger [[page fault]]s and reversal of the process (retrieval from auxiliary storage and decompression). The footprint of the data being paged is reduced by the compression process; in the first instance, the freed RAM is returned to the available physical memory pool, while the compressed portion is kept in RAM. In the second instance, the compressed data is sent to auxiliary storage, but the resulting I/O operation is smaller and therefore takes less time.<ref name="PAT-5559978"/><ref name="PAT-5785474"/>
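
The sketch below illustrates, in simplified form, the first of the two approaches just described: a page being evicted is compressed into a RAM-resident store, and a later page fault decompresses it back into a page frame. It is written in C against the general-purpose [[zlib]] library purely for illustration; real implementations run inside the operating system's paging path and use specialized, much faster compressors, and all names and structures shown here are hypothetical.

<syntaxhighlight lang="c">
#include <stdlib.h>
#include <zlib.h>          /* general-purpose compressor, used here only for illustration */

#define PAGE_SIZE 4096

/* One entry in a hypothetical RAM-resident store of compressed pages. */
struct compressed_page {
    unsigned char *data;   /* compressed bytes (heap-allocated) */
    uLongf         size;   /* compressed length in bytes */
};

/* Page-out path: compress the page and keep only the smaller result in RAM.
   Returns 0 on success, -1 if the page is not worth keeping compressed
   (the caller would then page it out to auxiliary storage as usual). */
static int page_out_compress(const unsigned char page[PAGE_SIZE],
                             struct compressed_page *slot)
{
    uLongf out_len = compressBound(PAGE_SIZE);
    unsigned char *buf = malloc(out_len);
    if (buf == NULL)
        return -1;

    if (compress2(buf, &out_len, page, PAGE_SIZE, Z_BEST_SPEED) != Z_OK
        || out_len >= PAGE_SIZE) {       /* incompressible page */
        free(buf);
        return -1;
    }
    slot->data = buf;
    slot->size = out_len;
    return 0;
}

/* Page-fault path: decompress the stored copy back into a page frame. */
static int page_in_decompress(const struct compressed_page *slot,
                              unsigned char page[PAGE_SIZE])
{
    uLongf out_len = PAGE_SIZE;
    if (uncompress(page, &out_len, slot->data, slot->size) != Z_OK
        || out_len != PAGE_SIZE)
        return -1;
    return 0;
}
</syntaxhighlight>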
 
In some implementations, including [[zswap]], [[zram]] and [[Helix Software Company]]'s Hurricane, the entire process is implemented in software. In other systems, such as IBM's MXT, the compression process occurs in a dedicated processor that handles transfers between a local [[Cache (computing)|cache]] and RAM.
 
Virtual memory compression is distinct from [[garbage collection (computer science)|garbage collection]] (GC) systems, which remove unused memory blocks and in some cases consolidate used memory regions, reducing fragmentation and improving efficiency. Virtual memory compression is also distinct from [[context switching]] systems, such as [[Connectix]]'s [[RAM Doubler]] (though it also did online compression) and Apple OS 7.1, in which inactive processes are suspended and then compressed as a whole.<ref name="CWORLD-RD2"/>
 
==Benefits==
{{More references|section|date=May 2019}}
 
By reducing the I/O activity caused by paging requests, virtual memory compression can produce overall performance improvements. The degree of performance improvement depends on a variety of factors, including the availability of any compression co-processors, spare bandwidth on the CPU, speed of the I/O channel, speed of the physical memory, and the compressibility of the physical memory contents.
 
On multi-core, multithreaded CPUs, some benchmarks show performance improvements of over 50%.<ref name="zswap-bench"/><ref name="ZRAM-BENCH"/>
 
In some situations, such as in [[embedded device]]s, auxiliary storage is limited or non-existent. In these cases, virtual memory compression can allow a virtual memory system to operate, where otherwise virtual memory would have to be disabled. This allows the system to run certain software which would otherwise be unable to operate in an environment with no virtual memory.<ref name="Paul_1997_NWDOSTIP"/>
 
[[Flash memory]] has certain endurance limitations on the maximum number of erase cycles it can undergo, which can be as low as 100 erase cycles. In systems where flash memory is used as the only auxiliary storage system, implementing virtual memory compression can reduce the total quantity of data being written to auxiliary storage, improving system reliability.
 
==Shortcomings==
 
===Low compression ratios===
One of the primary issues is the degree to which the contents of physical memory can be compressed under real-world loads. Program code and much of the data held in physical memory is often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets. Various studies show typical [[data compression ratio]]s ranging from 2:1 to 2.5:1 for program data,<ref name="SIMPSON"/><ref name="RIZZO"/> similar to the compression ratios typically achievable with [[disk compression]].<ref name="Paul_1997_NWDOSTIP"/>
 
===Background I/O===
In order for virtual memory compression to provide measurable performance improvements, the throughput of the virtual memory system must be improved when compared to the uncompressed equivalent. Thus, the additional amount of processing introduced by the compression must not increase the overall latency. However, in [[I/O-bound]] systems or applications with highly compressible data sets, the gains can be substantial.<ref name="Paul_1997_NWDOSTIP"/>
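
As a rough, illustrative model (not taken from the cited sources), let <math>S</math> be the amount of data to be paged, <math>B</math> the bandwidth of the auxiliary storage, <math>r</math> the achieved compression ratio, and <math>t_c</math> the processor time spent compressing and decompressing. Paging compressed data to auxiliary storage is then only a net win when
:<math>t_c + \frac{S}{rB} < \frac{S}{B},</math>
that is, when the I/O time saved by transferring <math>S/r</math> bytes instead of <math>S</math> exceeds the compression overhead; for a compressed in-RAM cache, the right-hand side is instead the cost of the auxiliary-storage access that the cache avoids.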
 
===Increased thrashing===
The physical memory used by a compression system reduces the amount of physical memory available to [[Process (computing)|processes]] that a system runs, which may result in increased paging activity and reduced overall effectiveness of virtual memory compression. This relationship between the paging activity and available physical memory is roughly exponential, meaning that reducing the amount of physical memory available to system processes results in an exponential increase of paging activity.<ref name="DENNING" /><ref name="FREEDMAN" />
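
Purely as a schematic illustration of this "roughly exponential" relationship (the constants <math>a</math> and <math>k</math> are workload-dependent and are not given by the cited sources), the paging rate <math>P</math> can be written as a function of the physical memory <math>m</math> left available to processes:
:<math>P(m) \approx a e^{-km}.</math>
Under such a model, reserving part of RAM to hold compressed pages (reducing <math>m</math> by <math>\Delta</math>) multiplies the paging rate by roughly <math>e^{k\Delta}</math> rather than adding a fixed amount to it.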
 
In circumstances where the amount of free physical memory is low and paging is fairly prevalent, any performance gains provided by the compression system (compared to paging directly to and from auxiliary storage) may be offset by an increased [[page fault]] rate that leads to [[thrashing (computer science)|thrashing]] and degraded system performance. In the opposite state, where enough physical memory is available and paging activity is low, compression may not impact performance enough to be noticeable. The middle ground between these two circumstances (low RAM with high paging activity, versus plenty of RAM with low paging activity) is where virtual memory compression may be most useful. However, the more compressible the program data is, the more pronounced the performance improvements, since less physical memory is needed to hold the compressed data.
 
For example, in order to maximize the use of a compressed pages cache, [[Helix Software Company]]'s Hurricane&nbsp;2.0 provides a user-configurable compression rejection threshold. By compressing the first 256 to 512 bytes of a 4&nbsp;KiB page, this virtual memory compression system determines whether the configured compression level threshold can be achieved for a particular page; if it can, the rest of the page is compressed and retained in a compressed cache, and otherwise the page is sent to auxiliary storage through the normal paging system. The default setting for this threshold is an 8:1 compression ratio.<ref name="PCMAG-HURR-2" /><ref name="CWORLD-RD2"/>
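
The following sketch shows what such a rejection test might look like. It is only an approximation of the behaviour described above, written in C with the general-purpose zlib library rather than the [[Lempel–Ziv–Stac]] algorithm Hurricane actually used; the 512-byte sample size and the 8:1 default threshold come from the description above, while the function and parameter names are hypothetical.

<syntaxhighlight lang="c">
#include <zlib.h>

#define PAGE_SIZE   4096
#define SAMPLE_SIZE 512                      /* prefix used for the trial compression */

/* Trial-compress the first SAMPLE_SIZE bytes of a page and report whether the
   configured ratio (8:1 by default) looks achievable.  If it does, the caller
   compresses the whole page into the compressed cache; otherwise the page is
   sent to auxiliary storage through the normal paging path. */
static int page_seems_compressible(const unsigned char page[PAGE_SIZE],
                                   unsigned int required_ratio)
{
    unsigned char trial[SAMPLE_SIZE + 64];   /* > compressBound(512), so always large enough */
    uLongf out_len = sizeof(trial);

    if (compress2(trial, &out_len, page, SAMPLE_SIZE, Z_BEST_SPEED) != Z_OK)
        return 0;                            /* treat any error as "reject" */

    /* Accept only if the sample shrank by at least the required factor,
       e.g. to 64 bytes or fewer for an 8:1 threshold. */
    return out_len * (uLongf)required_ratio <= SAMPLE_SIZE;
}
</syntaxhighlight>

The sketch reproduces only the accept/reject decision; management of the compressed cache itself is omitted.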
 
===Price/performance issues===
 
===Prioritization===
In a typical virtual memory implementation, paging happens on a [[least recently used]] basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest-priority data. Furthermore, program code is usually read-only, and is therefore never paged out. Instead, code is simply discarded and re-loaded from the program's auxiliary storage file if needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.
 
==History==
Virtual memory compression has gone in and out of favor as a technology. The price of RAM and external storage has plummeted and their speed has increased due to [[Moore's Law]] and improved RAM interfaces such as [[DDR3]], reducing the need for virtual memory compression, while multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, make virtual memory compression more attractive.
 
===Origins===
Paul R. Wilson proposed compressed caching of virtual memory pages in 1990, in a paper circulated at the ACM OOPSLA/ECOOP '90 Workshop on Garbage Collection ("Some Issues and Strategies in Heap Management and Memory Hierarchies"), which appeared in ACM SIGPLAN Notices in January 1991.<ref name="WilsonIssuesStrategies"/>
 
[[Helix Software Company]] pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year.<ref name="PAT-5559978"/> In 1994 and 1995, Helix refined the process using test-compression and secondary memory caches on video cards and other devices.<ref name="PAT-5785474"/> However, Helix did not release a product incorporating virtual memory compression until July 1996 and the release of Hurricane&nbsp;2.0, which used the [[Stac Electronics]] [[Lempel–Ziv–Stac]] compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.<ref name="PCMAG-HURR-2"/>
 
In 1995, RAM cost nearly $50 per [[megabyte]], and [[Microsoft]]'s [[Windows&nbsp;95]] listed a minimum requirement of 4&nbsp;MB of RAM.<ref name="WIN95-REQ"/> Due to the high RAM requirement, several programs were released which claimed to use compression technology to gain "memory". Most notorious was the [[SoftRAM]] program from Syncronys Softcorp. SoftRAM was revealed to be "placebo software" which did not include any compression technology at all.<ref name="SoftRAM"/><ref name="Paul_1997_NWDOSTIP"/> Other products, including Hurricane and [[MagnaRAM]], included virtual memory compression, but implemented only [[run-length encoding]], with poor results, giving the technology a negative reputation.<ref name="PCMAG-PERF"/>
 
In its 8 April 1997 issue, ''PC Magazine'' published a comprehensive test of the performance enhancement claims of several software virtual memory compression tools. In its testing, PC Magazine found a minimal (5% overall) performance improvement from the use of Hurricane, and none at all from any of the other packages.<ref name="PCMAG-PERF"/> However, the tests were run on Intel [[Pentium]] systems which had a single core and were single-threaded, and thus compression directly impacted all system activity.
 
In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT).<ref name="IBM-MXT-NEWS"/><ref name="IBM-MXT-PAPERS"/> MXT was a stand-alone chip which acted as a [[CPU cache]] between the CPU and memory controller. MXT had an integrated compression engine which compressed all data heading to/from physical memory. Subsequent testing of the technology by Intel showed 5&ndash;20% overall system performance improvement, similar to the results obtained by PC Magazine with Hurricane.<ref name="IBM-MXT-PERF"/>
 
===Recent developments===
* In early 2008, a [[Linux]] project named [[zram]] (originally called compcache) was released; in a 2013 update, it was incorporated into [[Chrome&nbsp;OS]]<ref name="zram-google-page" /> and [[Android (operating system)|Android]]&nbsp;4.4.
* In 2010, IBM released Active Memory Expansion (AME) for [[AIX]] 6.1 which implements virtual memory compression.<ref name="IBM-AIX-AME" />
* In 2012, some versions of the [[POWER7]]+ chip included the AME hardware accelerator for data compression support, used on AIX, for virtual memory compression.<ref name="IBM-POWER7+" />
* In December 2012, the [[zswap]] project was announced; it was merged into the [[Linux kernel mainline]] in September 2013.
* In June 2013, Apple announced that it would include virtual memory compression in [[OS&nbsp;X Mavericks]], using the Wilson–Kaplan WKdm algorithm.<ref name="Arstechnica"/><ref name="Willson_Usenix"/>
* A 10 August 2015 "[[Windows Insider]] Preview" update for [[Windows&nbsp;10]] (build 10525) added support for RAM compression.<ref name="Aul_2015"/>
 
==See also==
 
==References==
{{Reflist|30em|refs=
 
<ref name="WilsonIssuesStrategies">
{{cite journal |author-last=Wilson |author-first=Paul R. |title=Some Issues and Strategies in Heap Management and Memory Hierarchies |journal=ACM SIGPLAN Notices |date=1991 |volume=26 |issue=3 |pages=45–52 |doi=10.1145/122167.122173}}</ref>
{{cite journal |
<ref name="PAT-5559978">{{cite patent |country=US |number=5559978 |status=patent}}</ref>
last = Wilson |
<ref name="PAT-5785474">{{cite patent |country=US |number=5875474 |status=patent}}</ref>
first = Paul R. |
<ref name="CaseForCompressedCaching">{{cite conference |url=https://www.usenix.org/legacy/event/usenix99/full_papers/wilson/wilson.pdf |title=The Case for Compressed Caching in Virtual Memory Systems |author-last1=Wilson |author-first1=Paul R. |author-last2=Kaplan |author-first2=Scott F. |author-last3=Smaragdakis |author-first3=Yannis |date= 1999-06-06 |conference=USENIX Annual Technical Conference |___location=Monterey, California, USA |pages=101–116}}</ref>
title = Some Issues and Strategies in Heap Management and Memory Hierarchies |
<ref name="SIMPSON">{{cite web |author-last=Simpson |author-first=Matthew |title=Analysis of Compression Algorithms for Program Data |date=2014 |url=http://www.ece.umd.edu/~barua/matt-compress-tr.pdf |access-date=2015-01-09 |pages=6}}</ref>
journal = ACM SIGPLAN Notices |
<ref name="RIZZO">{{cite journal |author-last=Rizzo |author-first=Luigi |title=A very fast algorithm for RAM compression |journal=ACM SIGOPS Operating Systems Review |date=1996 |url=http://dl.acm.org/citation.cfm?id=250012 |access-date=2015-01-09 |page=8}}</ref>
year = 1991 |
<ref name="DENNING">{{cite journal |author-last=Denning |author-first=Peter J. |title=Thrashing: Its causes and prevention |journal=Proceedings AFIPS, Fall Joint Computer Conference |date=1968 |url=http://www.cs.uwaterloo.ca/~brecht/courses/702/Possible-Readings/vm-and-gc/thrashing-denning-afips-1968.pdf |access-date=2015-01-05 |page=918 |volume=33}}</ref>
volume = 26 |
<ref name="FREEDMAN">{{cite web |author-last=Freedman |author-first=Michael J. |title=The Compression Cache: Virtual Memory Compression for Handheld Computers |url=http://www.cs.princeton.edu/~mfreed//docs/6.033/compression.pdf |date=2000-03-16 |access-date=2015-01-09}}</ref>
issue = 3 |
<ref name="CWORLD-RD2">{{cite book |url=https://books.google.com/books?id=BUaIcc6lsdwC&lpg=PA56 |title=Mac Memory Booster Gets an Upgrade |publisher=ComputerWorld Magazine |date=1996-09-09 |access-date=2015-01-12}}</ref>
pages = 45–52 |
<ref name="PCMAG-HURR-2">{{cite journal |url=https://books.google.com/?id=7WGv1D0tOVYC&lpg=PA48 |title=Hurricane 2.0 Squeezes the Most Memory from Your System |journal=[[PC Magazine]] |date=1996-10-08 |access-date=2015-01-01}}</ref>
doi = 10.1145/122167.122173
<ref name="PCMAG-PERF">{{cite journal |url=https://books.google.com/?id=8RSHdk84u50C&lpg=RA1-PA165 |title=Performance Enhancers |journal=[[PC Magazine]] |date=1997-04-08 |access-date=2015-01-01}}</ref>
}}</ref>
<ref name="SoftRAM">{{cite journal |url=https://books.google.com/?id=XcEKP0ml18EC&lpg=PA34 |title=SoftRAM Under Scruitny |journal=[[PC Magazine]] |date=1996-01-23 |access-date=2015-01-01}}</ref>
 
<ref name="IBM-MXT-PERF">{{cite web |url=http://www.kkant.net/papers/caecw.doc |title=An Evaluation of Memory Compression Alternatives |author-first=Krishna |author-last=Kant |publisher=[[Intel Corporation]] |date=2003-02-01 |access-date=2015-01-01}}</ref>
<ref name="PAT-5559978">
<ref name="IBM-MXT-NEWS">{{cite web |url=http://www-03.ibm.com/press/us/en/pressrelease/1653.wss |title=IBM Research Breakthrough Doubles Computer Memory Capacity |publisher=[[IBM]] |date=2000-06-26 |access-date=2015-01-01}}</ref>
{{ cite patent |
<ref name="IBM-MXT-PAPERS">{{cite web |url=http://researcher.watson.ibm.com/researcher/view_group_pubs.php?grp=2917 |title=Memory eXpansion Technologies |publisher=[[IBM]] |access-date=2015-01-01}}</ref>
country = US |
<ref name="zswap-bench">{{cite web |url = https://events.linuxfoundation.org/sites/events/files/slides/tmc_sjennings_linuxcon2013.pdf |title=Transparent Memory Compression in Linux |author-first=Seth |author-last=Jennings |website=linuxfoundation.org |access-date=2015-01-01}}</ref>
number = 5559978 |
<ref name="zram-google-page">{{cite web |url=https://code.google.com/p/compcache/ |title=CompCache |publisher=Google code |access-date=2015-01-01}}</ref>
status = patent }}</ref>
<ref name="IBM-AIX-AME">{{cite web |url=https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101633 |title=AIX 6.1 Active Memory Expansion |publisher=[[IBM]] |access-date=2015-01-01}}</ref>
 
<ref name="IBM-POWER7+">{{cite web |url=http://www-05.ibm.com/cz/events/febannouncement2012/pdf/power_architecture.pdf |title=IBM Power Systems Hardware Deep Dive |publisher=[[IBM]] |access-date=2015-01-01}}</ref>
<ref name="PAT-5785474">
<ref name="ZRAM-BENCH">{{cite web |url=https://code.google.com/p/compcache/wiki/Performance |title=Performance numbers for compcache |access-date=2015-01-01}}</ref>
{{ cite patent |
<ref name="WIN95-REQ">{{cite web |url=http://support.microsoft.com/kb/138349/en-us |title=Windows 95 Installation Requirements |publisher=[[Microsoft]] |access-date=2015-01-01}}</ref>
country = US |
<ref name="Arstechnica">https://arstechnica.com/apple/2013/10/os-x-10-9/17/#compressed-memory</ref>
number = 5875474 |
<ref name="Willson_Usenix">https://www.usenix.org/legacy/publications/library/proceedings/usenix01/cfp/wilson/wilson_html/acc.html</ref>
status = patent}}</ref>
<ref name="Aul_2015">{{cite web |author-last=Aul |author-first=Gabe |url=http://blogs.windows.com/bloggingwindows/2015/08/18/announcing-windows-10-insider-preview-build-10525/ |title=Announcing Windows 10 Insider Preview Build 10525 |work=Blogging Windows |publisher=[[Microsoft]] |date=2015-08-18 |access-date=2015-08-19}}</ref>
 
<ref name="Paul_1997_NWDOSTIP">{{cite book |title=NWDOS-TIPs &mdash; Tips &amp; Tricks rund um Novell DOS 7, mit Blick auf undokumentierte Details, Bugs und Workarounds |chapter=Kapitel II.18. Mit STACKER Hauptspeicher 'virtuell' verdoppeln... |language=de |trans-title=Tips &amp; tricks for Novell DOS 7, with a focus on undocumented details, bugs and workarounds |work=MPDOSTIP |author-first=Matthias |author-last=Paul |date=1997-07-30 |orig-year=1996-04-14 |edition=3 |version=Release 157 |url=http://www.antonis.de/dos/dos-tuts/mpdostip/html/nwdostip.htm |access-date=2012-01-11 |dead-url=no |archive-url=https://web.archive.org/web/20161105172944/http://www.antonis.de/dos/dos-tuts/mpdostip/html/nwdostip.htm |archive-date=2016-11-05}}</ref>
<ref name="CaseForCompressedCaching">
{{cite conference
| url = https://www.usenix.org/legacy/event/usenix99/full_papers/wilson/wilson.pdf
| title = The Case for Compressed Caching in Virtual Memory Systems
| last = Wilson
| first = Paul R.
| last2 = Kaplan
| first2 = Scott F.
| last3 = Smaragdakis
| first3 = Yannis
| date = June 6-11, 1999
| conference = USENIX Annual Technical Conference
| ___location = Monterey, California, USA
| pages = 101–116
}}
</ref>
 
<ref name="SIMPSON">
{{cite web|
last = Simpson |
first = Matthew |
title = Analysis of Compression Algorithms for Program Data|
year = 2014 |
url = http://www.ece.umd.edu/~barua/matt-compress-tr.pdf |
accessdate = {{date|2015-01-09|mdy}} |
pages = 6 }}</ref>
 
<ref name="RIZZO">
{{cite journal|
last = Rizzo |
first = Luigi |
title = A very fast algorithm for RAM compression |
journal= ACM SIGOPS Operating Systems Review |
year = 1996 |
url = http://dl.acm.org/citation.cfm?id=250012 |
accessdate = {{date|2015-01-09|mdy}} |
page = 8 }}</ref>
 
<ref name="DENNING">
{{cite journal|
last = Denning |
first = Peter J. |
title = Thrashing: Its causes and prevention |
journal= Proceedings AFIPS, Fall Joint Computer Conference |
year = 1968 |
url = http://www.cs.uwaterloo.ca/~brecht/courses/702/Possible-Readings/vm-and-gc/thrashing-denning-afips-1968.pdf |
accessdate = {{date|2015-01-05|mdy}} |
page = 918 |
volume = 33}}</ref>
 
<ref name="FREEDMAN">
{{ cite web|
last = Freedman |
first = Michael J. |
title = The Compression Cache: Virtual Memory Compression for Handheld Computers |
url = http://www.cs.princeton.edu/~mfreed//docs/6.033/compression.pdf |
date = {{date|2000-03-16|mdy}} |
accessdate = {{date|2015-01-09|mdy}}}}</ref>
 
<ref name="CWORLD-RD2">
{{cite book |
url = https://books.google.com/books?id=BUaIcc6lsdwC&lpg=PA56&dq=ram%20doubler%20for%20mac&pg=PA56#v=onepage&q=ram%20doubler%20for%20mac&f=false |
title = Mac Memory Booster Gets an Upgrade |
publisher = ComputerWorld Magazine |
date = {{date|1996-09-09|mdy}} |
accessdate = {{date|2015-01-12|mdy}}}}</ref>
 
<ref name="PCMAG-HURR-2">
{{cite book |
url = https://books.google.com/?id=7WGv1D0tOVYC&lpg=PA48&dq=helix%20software%20RAM%20compression%20license&pg=PA48#v=onepage&q=helix%20software%20RAM%20compression%20license&f=false |
title = Hurricane 2.0 Squeezes the Most Memory from Your System |
publisher = PC Magazine |
date = {{date|1996-10-08|mdy}} |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="PCMAG-PERF">
{{cite book |
url = https://books.google.com/?id=8RSHdk84u50C&lpg=RA1-PA165&dq=hurricane%20softram%20pc%20magazine&pg=RA1-PA165#v=onepage&q=hurricane%20softram%20pc%20magazine&f=false |
title = Performance Enhancers |
publisher = PC Magazine |
date = {{date|1997-04-08|mdy}} |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="SoftRAM">
{{cite book |
url = https://books.google.com/?id=XcEKP0ml18EC&lpg=PA34&dq=hurricane%20softram&pg=PA34#v=onepage&q=hurricane%20softram&f=false |
title = SoftRAM Under Scruitny |
publisher = PC Magazine |
date = {{date|1996-01-23|mdy}} |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="IBM-MXT-PERF">
{{cite web |
url = http://www.kkant.net/papers/caecw.doc |
title = An Evaluation of Memory Compression Alternatives |
publisher = Krishna Kant, Intel Corporation |
date = {{date|2003-02-01|mdy}} |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="IBM-MXT-NEWS">
{{cite web |
url = http://www-03.ibm.com/press/us/en/pressrelease/1653.wss |
title = IBM Research Breakthrough Doubles Computer Memory Capacity |
publisher = IBM |
date = {{date|2000-06-26|mdy}} |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="IBM-MXT-PAPERS">
{{cite web |
url = http://researcher.watson.ibm.com/researcher/view_group_pubs.php?grp=2917 |
title = Memory eXpansion Technologies |
publisher = IBM |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="zswap-bench">
{{cite web |
url = https://events.linuxfoundation.org/sites/events/files/slides/tmc_sjennings_linuxcon2013.pdf |
title = Transparent Memory Compression in Linux |
author = Seth Jennings |
website = linuxfoundation.org |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="zram-google-page">
{{cite web |
url = https://code.google.com/p/compcache/ |
title = CompCache |
publisher = Google code |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="IBM-AIX-AME">
{{cite web |
url = https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101633 |
title = AIX 6.1 Active Memory Expansion |
publisher = IBM |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="IBM-POWER7+">
{{cite web |
url = http://www-05.ibm.com/cz/events/febannouncement2012/pdf/power_architecture.pdf |
title = IBM Power Systems Hardware Deep Dive |
publisher = IBM |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="ZRAM-BENCH">
{{cite web |
url = https://code.google.com/p/compcache/wiki/Performance |
title = Performance numbers for compcache |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
 
<ref name="WIN95-REQ">
{{cite web |
url = http://support.microsoft.com/kb/138349/en-us |
title = Windows 95 Installation Requirements |
publisher = Microsoft |
accessdate = {{date|2015-01-01|mdy}}}}</ref>
}}
 
{{-}}
 
{{Memory management navbox}}