In some implementations such as in [[zswap]], [[zram]] and [[Helix Software Company]]’s Hurricane, the entire process is managed by the operating system. In other systems such as IBM’s MXT, the compression process occurs in a dedicated processor which handles transfers between a local cache and primary storage.
==Benefits==
Virtual memory compression can provide several benefits: improved performance, an increase in available virtual memory, and improved system lifespan.
===Performance improvement===
By reducing the I/O activity caused by paging requests, virtual memory compression can produce performance improvements. The degree of improvement depends on a variety of factors, including the availability of any compression co-processors, spare bandwidth on the CPU, the speed of the I/O channel, the speed of primary storage, and the compressibility of the contents of primary storage.
===Available virtual memory===
==Shortcomings==
In order for virtual memory compression to provide any performance improvement, the throughput of the virtual memory system must be improved compared to the uncompressed equivalent: the time spent compressing and decompressing pages must be less than the paging I/O time it avoids under the same load. This can be difficult to achieve given that in most instances much of the paging I/O happens as a background, non-blocking process. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be impressive.
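This break-even condition can be sketched with a back-of-the-envelope model. All numbers below are illustrative assumptions, not measurements of any system discussed in the article:

```python
# A rough model of the break-even condition described above.
# All figures are hypothetical, chosen only to illustrate the comparison.

PAGE_SIZE = 4096  # bytes per page

def transfer_time(bandwidth_bps, latency_s=0.0):
    """Seconds to move one page at the given bandwidth (bytes/s) plus fixed latency."""
    return latency_s + PAGE_SIZE / bandwidth_bps

def compression_wins(page_io_time, compress_time):
    """Compression helps only while its overhead is below the I/O time it avoids."""
    return compress_time < page_io_time

# Hypothetical hardware: a seek-bound hard disk vs. a 400 MB/s software compressor.
hdd_io = transfer_time(100e6, latency_s=5e-3)  # ~5 ms, dominated by seek latency
compress = transfer_time(400e6)                # ~10 microseconds per page

print(compression_wins(hdd_io, compress))  # True: the disk I/O is the bottleneck
```

With a fast, low-latency backing store (or an already saturated CPU), the same comparison can easily flip, which is why the gains depend so heavily on the hardware mix.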
Less obvious is the fact that the memory used by the compression system reduces the available system memory and thus causes a corresponding increase in overall paging activity. As more primary storage is used to store compressed data, less primary storage is available to programs, causing overall paging activity to increase and reducing the effectiveness of the system. Conversely, the more compressible the data, the more pronounced the performance improvement, because less primary storage is needed to hold the compressed data.
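This trade-off reduces to simple arithmetic. In the hypothetical sketch below (the fractions and ratios are illustrative, not from any cited system), a fraction ''f'' of physical RAM is dedicated to a compressed-page cache achieving compression ratio ''r'':

```python
# Hypothetical arithmetic for the trade-off described above: dedicating a
# fraction f of physical RAM to a compressed-page cache with an achieved
# compression ratio r (uncompressed size / compressed size).

def effective_capacity(f, r):
    """Effective memory, in units of physical RAM, with fraction f compressed at ratio r."""
    return (1.0 - f) + f * r

# Highly compressible data: a quarter of RAM cached at 4:1 is a clear net gain.
print(effective_capacity(0.25, 4.0))  # 1.75x physical RAM

# Barely compressible data: the cache almost wastes the RAM it occupies,
# so paging pressure on the remaining three quarters barely improves.
print(effective_capacity(0.25, 1.1))  # ~1.025x
```

At ratios near 1:1 the cache consumes nearly as much RAM as it frees, which is exactly the regime in which overall paging activity can increase.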
Virtual memory compression relies on the speed difference between compression and I/O, as well as the difference in size between compressed and uncompressed data. In hardware implementations the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance differences of the various components tend to vary over time. For example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.
For example, in order to maximize the use of the primary storage cache of compressed pages, [[Helix Software Company]]’s Hurricane 2.0 provided a user-settable threshold which allowed adjustment of a rejection level for compression. Paging requests which achieved the designated compressibility were retained in a primary storage buffer, while all others were sent to secondary storage through the normal paging system. The default setting for this threshold was a compression ratio of 8:1.<ref name="PCMAG-HURR-2"/>
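A threshold scheme of this kind can be sketched in a few lines of Python. Here <code>zlib</code> stands in for the Lempel–Ziv–Stac algorithm, and two dictionaries stand in for the primary-storage buffer and the normal secondary-storage paging path; all names are illustrative, not taken from Hurricane:

```python
# Sketch of a compressibility-threshold paging path like the one described
# above. zlib substitutes for Lempel-Ziv-Stac; the dictionaries are stand-ins
# for the primary-storage buffer and secondary storage. Names are hypothetical.
import os
import zlib

PAGE_SIZE = 4096
THRESHOLD = 8  # reject pages that compress worse than 8:1 (Hurricane 2.0's default)

ram_buffer = {}  # page id -> compressed page, kept in primary storage
disk = {}        # page id -> raw page, sent through the normal paging system

def page_out(page_id, page):
    compressed = zlib.compress(page)
    if len(page) / len(compressed) >= THRESHOLD:
        ram_buffer[page_id] = compressed  # compressible enough: cache in RAM
    else:
        disk[page_id] = page              # poor ratio: not worth caching

page_out("zeros", bytes(PAGE_SIZE))       # compresses extremely well -> RAM buffer
page_out("noise", os.urandom(PAGE_SIZE))  # incompressible -> secondary storage
```

The rejection test keeps the RAM cache reserved for pages whose compression actually pays for the primary storage they occupy.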
Another issue is the low compressibility of the contents of primary storage. Program code and much of the data held in primary storage is often not highly compressible, since efficient programming techniques and data architectures are designed to automatically eliminate redundancy in data sets.

Other performance enhancements, such as optimizations to the operating system and on-CPU caching, and improvements in the speed of the I/O channel, both reduce the amount of paging and improve the speed at which paging happens, potentially reducing or eliminating any advantage provided by compression technology.
==History==
Virtual memory compression has gone in and out of favor as a technology. The price and speed of RAM and of external storage have plummeted due to [[Moore’s Law]] and improved RAM interfaces such as [[DDR3]], reducing the need for virtual memory compression, while multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, have made it more attractive.
===Origin===
[[Helix Software Company]] pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year.<ref name="PAT-5559978"/> In 1994 and 1995, Helix refined the process using test compression and secondary memory caches on video cards and other devices.<ref name="PAT-5785474"/> However, Helix did not release a product incorporating virtual memory compression until July 1996, with the release of Hurricane 2.0. Hurricane 2.0 used the [[Stac Electronics]] [[Lempel–Ziv–Stac]] compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.<ref name="PCMAG-HURR-2"/>
In its April 8, 1997 issue, PC Magazine published a comprehensive test of the performance enhancement claims of several software virtual memory compression tools.
In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT).<ref name="IBM-MXT-NEWS"/><ref name="IBM-MXT-PAPERS"/> MXT was a stand-alone chip which acted as a [[CPU cache]] between the CPU and memory controller. MXT had an integrated compression engine which compressed all data heading to or from primary storage. Subsequent testing of the technology by Intel showed 5–20% overall system performance improvement, similar to the results obtained by PC Magazine with Hurricane.<ref name="IBM-MXT-PERF"/>