Virtual memory compression

This is an old revision of this page, as edited by ScholarWarrior (talk | contribs) at 00:53, 5 January 2015 (add osx reference and additional info to shortcomings). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Virtual memory compression is a memory management technique that uses data compression to reduce the size or number of paging requests to and from auxiliary storage. It is distinct from garbage collection systems, which remove unused memory blocks and in some cases consolidate used memory regions, reducing fragmentation. It is also distinct from context switching systems, such as Connectix's RAM Doubler and Apple OS 7.1, in which inactive processes are suspended and then compressed.[1][2]

In a virtual memory compression system, paged data is compressed and either stored in primary storage (usually RAM) or sent in compressed form to auxiliary storage. In both cases the original memory is marked inaccessible, and the memory footprint of the paged data is reduced by compression. In the first case, the freed memory is returned to the general memory pool while the compressed portion is kept in RAM; in the second, the compressed data is written to auxiliary storage, but the resulting I/O operation is smaller and thus takes less time. An attempt to access a compressed page reverses the process: the compressed data is retrieved from auxiliary storage if necessary, and then decompressed.
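The first variant, keeping compressed pages in primary storage, can be sketched in a few lines. This is a minimal illustrative model, not the implementation of any real system such as zram or zswap; the class, method names, and page size are assumptions for the example.

```python
import zlib

PAGE_SIZE = 4096  # assumed page size for the sketch


class CompressedPageStore:
    """Toy in-RAM compressed page cache (illustrative only)."""

    def __init__(self):
        self.store = {}  # page number -> compressed bytes

    def page_out(self, page_no, data):
        # Compress the evicted page and keep it in primary storage;
        # the original page frame can then return to the free pool.
        assert len(data) == PAGE_SIZE
        self.store[page_no] = zlib.compress(data)

    def page_in(self, page_no):
        # Accessing a compressed page reverses the process:
        # decompress and hand the page back to the faulting process.
        return zlib.decompress(self.store.pop(page_no))


store = CompressedPageStore()
page = bytes(PAGE_SIZE)  # an all-zero page compresses very well
store.page_out(7, page)
saved = PAGE_SIZE - len(store.store[7])  # RAM freed by compression
restored = store.page_in(7)
```

The amount of RAM actually freed depends entirely on how compressible the page contents are, which is the central trade-off discussed under Shortcomings below.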

In some implementations, such as zswap, zram and Helix Software Company's Hurricane, the entire process is managed by the operating system. In other systems, such as IBM's MXT, the compression process occurs in a dedicated processor that handles transfers between a local cache and primary storage.

Benefits

Virtual memory compression can provide several benefits: improved performance, an increase in available virtual memory, and improved system lifespan.

Performance improvement

By reducing the I/O activity caused by paging requests, virtual memory compression can improve performance. The degree of improvement depends on a variety of factors, including the availability of compression co-processors, spare bandwidth on the CPU, the speed of the I/O channel, the speed of primary storage, and the compressibility of its contents.

Available virtual memory

In some situations, such as in embedded devices, secondary storage is limited or non-existent. In these cases, virtual memory compression can allow a virtual memory system to operate where virtual memory would otherwise have to be disabled, letting the system run software that could not operate in an environment with no virtual memory.

Enhanced lifespan

Flash memory can endure only a limited number of erase cycles, in some cases as few as 100. In systems where flash memory is the only secondary storage, virtual memory compression can reduce the total quantity of data written to secondary storage, improving system reliability.

Shortcomings

A primary issue is the degree to which the contents of primary storage can be compressed under real-world loads. Program code and much of the data held in primary storage are often not highly compressible, since efficient programming techniques and data architectures are designed to eliminate redundancy in data sets.

For virtual memory compression to provide any performance improvement, the throughput of the virtual memory system must exceed that of the uncompressed equivalent: the time spent compressing and decompressing must be less than the I/O time it saves under the same load. This can be difficult to achieve, given that in most instances much of the paging I/O happens as a background, non-blocking process. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be substantial.
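The break-even condition can be made concrete with a toy model. All throughput figures below are illustrative assumptions, not measurements of any real system:

```python
# Toy break-even model for compressed paging (all numbers assumed).
PAGE_SIZE = 4096                  # bytes per page

disk_bandwidth = 100 * 2**20      # assume a 100 MiB/s paging device
compress_rate = 400 * 2**20       # assume CPU compresses at 400 MiB/s
ratio = 2.0                       # assume 2:1 compression

# Time to page out one page without compression: write the full page.
t_uncompressed = PAGE_SIZE / disk_bandwidth

# With compression: pay the CPU cost, then write a smaller page.
t_compressed = PAGE_SIZE / compress_rate + (PAGE_SIZE / ratio) / disk_bandwidth

# Compression wins only if the CPU overhead is outweighed by the I/O saved.
wins = t_compressed < t_uncompressed
```

Under these assumed numbers compression comes out ahead; with a faster paging device, a slower CPU, or a worse compression ratio, the inequality can easily flip, which is why the benefit is so workload-dependent.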

Less obvious is the fact that the memory used by the compression system reduces available system memory, causing a corresponding increase in overall paging activity. As more primary storage is used to hold compressed data, less is available to programs, so paging activity increases and the effectiveness of the system falls. Again, the more compressible the data, the more pronounced the performance improvement, because less primary storage is needed to hold the compressed data.

For example, to maximize the use of its primary storage cache of compressed pages, Helix Software Company's Hurricane 2.0 provided a user-settable rejection threshold for compression. The program would compress the first 256 to 512 bytes of a 4K page; if that small region achieved the designated level of compression, the rest of the page would be compressed and retained in a primary storage buffer, while pages that failed the test would be sent to secondary storage through the normal paging system. The default setting for this threshold was a compression ratio of 8:1.[3]
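The test-compression heuristic can be sketched as follows. The function name and probe size are assumptions for illustration, and zlib stands in for the compressor (Hurricane itself used the Lempel–Ziv–Stac algorithm):

```python
import zlib

PAGE_SIZE = 4096
PROBE_SIZE = 512   # Hurricane probed the first 256-512 bytes of a page
THRESHOLD = 8.0    # default rejection level: 8:1 on the probe

def should_keep_in_ram(page):
    """Hypothetical sketch of a test-compression heuristic: compress a
    small prefix and only compress (and cache in RAM) the full page if
    the prefix reaches the threshold ratio."""
    probe = page[:PROBE_SIZE]
    ratio = len(probe) / len(zlib.compress(probe))
    return ratio >= THRESHOLD

# An all-zero page easily clears an 8:1 probe ratio...
zero_page = bytes(PAGE_SIZE)
# ...while a page of non-repeating byte values does not.
patterned_page = bytes(range(256)) * (PAGE_SIZE // 256)
```

Probing a prefix trades a little accuracy for speed: the heuristic avoids spending CPU time fully compressing pages that would not have been worth caching anyway.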

In hardware implementations, the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance differences of the various components tend to vary over time; for example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.

Other performance enhancements, such as operating system optimizations, on-CPU caching, and improvements in the speed of the I/O channel, reduce the amount of paging and speed up the paging that remains, potentially reducing or eliminating any advantage provided by compression technology.

In typical virtual memory implementations, paging happens on a least-recently-used basis, potentially causing the compression algorithm to spend CPU cycles on the lowest-priority data. Furthermore, program code is usually read-only and is therefore never paged out; instead it is simply discarded and re-loaded from the program's file in secondary storage when needed. In this case the bar for compression is higher, since the I/O cycle it is attempting to eliminate is much shorter, particularly on flash memory devices.

History

Virtual memory compression has gone in and out of favor as a technology. The price and speed of RAM and external storage have plummeted due to Moore's law and improved RAM interfaces such as DDR3, reducing the need for virtual memory compression, while multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, have made it more attractive.

Origin

Helix Software Company pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year.[1] In 1994 and 1995, Helix refined the process using test compression and secondary memory caches on video cards and other devices.[2] However, Helix did not release a product incorporating virtual memory compression until July 1996, with Hurricane 2.0, which used the Stac Electronics Lempel–Ziv–Stac compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.[3]

In 1995, RAM cost nearly $50 per megabyte, and Microsoft's Windows 95 listed a minimum requirement of 4 megabytes.[4] Due to the high memory requirement, several programs were released claiming to use compression technology to gain "memory". Most notorious was SoftRAM from Syncronys Softcorp, which was revealed to be "placebo software" that included no compression technology at all.[5] Other products, including RAM Doubler and MagnaRAM, did include virtual memory compression, but implemented only run-length encoding, with poor results, giving the technology a negative reputation.[6]
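The weakness of run-length encoding on typical memory contents is easy to demonstrate. The sketch below, using illustrative data, compares a naive byte-level RLE against an LZ-class compressor (zlib stands in here; it is not the algorithm those products used):

```python
import zlib

def rle_encode(data):
    """Naive byte-level run-length encoding: (count, value) pairs,
    with runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

# A page of structured but non-repeating bytes (no adjacent duplicates),
# illustrative of typical memory contents with no long runs.
page = bytes((i * 7) & 0xFF for i in range(4096))

# RLE finds no runs, so every byte becomes a (count, value) pair
# and the "compressed" output is twice the original size.
rle_size = len(rle_encode(page))

# An LZ-class compressor can still exploit the repeating 256-byte stride.
lz_size = len(zlib.compress(page))
```

RLE only pays off when memory contains long runs of identical bytes (such as zero-filled pages); on anything else it can expand the data, which is consistent with the poor results those products achieved.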

In its April 8, 1997 issue, PC Magazine published a comprehensive test of the performance enhancement claims of several software virtual memory compression tools, finding a minimal (5% overall) performance improvement from the use of Hurricane, and none at all from any of the other packages.[6] However, the tests were run on single-core, single-threaded Intel Pentium systems, so compression directly impacted all system activity.

In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT).[7][8] MXT was a stand-alone chip that acted as a CPU cache between the CPU and the memory controller, with an integrated compression engine that compressed all data moving to and from primary storage. Subsequent testing of the technology by Intel showed a 5–20% overall system performance improvement, similar to the results obtained by PC Magazine with Hurricane.[9]

Recent developments

In early 2008, a Linux project named zram (originally called compcache) was released, and later incorporated into ChromeOS.[10]

In 2010, IBM released Active Memory Expansion (AME) for AIX 6.1, which implements virtual memory compression.[11]

In 2012, some versions of the POWER7+ chip included hardware support for virtual memory compression.[12] In December 2012, the zswap project was announced and later added to the Linux kernel. In June 2013, Apple announced that it would include virtual memory compression in OS X Mavericks. On multi-core, multithreaded CPUs, some benchmarks show performance improvements of over 50% in some circumstances.[13][14]

References

  1. ^ a b US patent 5559978 
  2. ^ a b US patent 5875474 
  3. ^ a b "Hurricane 2.0 Squeezes the Most Memory from Your System". PC Magazine. 8 Oct 1996. Retrieved 1 Jan 2015.
  4. ^ "Windows 95 Installation Requirements". Microsoft. Retrieved 1 Jan 2015.
  5. ^ "SoftRAM Under Scrutiny". PC Magazine. 23 Jan 1996. Retrieved 1 Jan 2015.
  6. ^ a b "Performance Enhancers". PC Magazine. 8 April 1997. Retrieved 1 Jan 2015.
  7. ^ "IBM Research Breakthrough Doubles Computer Memory Capacity". IBM. 26 Jun 2000. Retrieved 1 Jan 2015.
  8. ^ "Memory eXpansion Technologies". IBM. Retrieved 1 Jan 2015.
  9. ^ "An Evaluation of Memory Compression Alternatives". Krishna Kant, Intel Corporation. 1 Feb 2003. Retrieved 1 Jan 2015.
  10. ^ "CompCache". Google code. Retrieved 1 Jan 2015.
  11. ^ "AIX 6.1 Active Memory Expansion". IBM. Retrieved 1 Jan 2015.
  12. ^ "IBM Power Systems Hardware Deep Dive" (PDF). IBM. Retrieved 1 Jan 2015.
  13. ^ "Transparent Memory Compression in Linux" (PDF). Seth Jennings, IBM. Retrieved 1 Jan 2015.
  14. ^ "Performance numbers for compcache". Retrieved 1 Jan 2015.