==Benefits==
Virtual memory compression can provide several benefits: improved performance, an increase in available virtual memory, and extended system lifespan.
===Performance improvement===
By reducing the I/O activity caused by paging requests, virtual memory compression can improve overall performance. The degree of improvement depends on a variety of factors, including the availability of any compression co-processors, spare bandwidth on the CPU, the speed of the I/O channel, the speed of primary storage, and the compressibility of the contents of primary storage.
===Available virtual memory===
In some situations, such as in embedded devices, auxiliary storage is limited or non-existent. In these cases, virtual memory compression can allow a virtual memory system to operate, where otherwise virtual memory would have to be disabled. This allows the system to run certain software which would otherwise be unable to operate in an environment with no virtual memory.
===Enhanced system lifespan===
[[Flash memory]] has endurance limitations on the maximum number of erase cycles it can undergo, which can be as low as 100 erase cycles. In systems where flash memory is used as the only auxiliary storage system, implementing virtual memory compression can reduce the total quantity of data written to auxiliary storage, improving system reliability.
==Shortcomings==
===Low compression ratios===
A primary issue is the degree to which the contents of primary storage can be compressed under real-world loads. Program code and much of the data held in primary storage are often not highly compressible, since efficient programming techniques and data architectures are designed to eliminate redundancy in data sets automatically.
For virtual memory compression to provide any performance improvement, the throughput of the virtual memory system must exceed that of the uncompressed equivalent: the processing overhead of compression must cost less time than the paging I/O it avoids under the same load. This can be difficult to achieve, given that in most instances much of the paging I/O happens as a background process, which allows other processes to continue. However, in I/O-bound systems or applications with highly compressible data sets, the gains can be substantial.
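The break-even reasoning above can be sketched as a simple expected-cost model. This is purely illustrative: the function, its parameter names, and the sample numbers are assumptions chosen for exposition, not measurements of any real system.

```python
def avg_paging_cost(compress_ms: float, hit_rate: float,
                    ram_hit_ms: float, disk_io_ms: float) -> float:
    """Expected cost (ms) to service one page fault with compression on.

    Every page pays the compression overhead; faults served from the
    compressed RAM cache cost only a RAM access, the rest still hit
    auxiliary storage.  All parameters are hypothetical.
    """
    return compress_ms + hit_rate * ram_hit_ms + (1 - hit_rate) * disk_io_ms

# Compression helps only if its expected cost beats plain paging:
plain_fault_ms = 8.0                       # assumed cost of a disk-backed fault
good = avg_paging_cost(0.05, 0.6, 0.001, plain_fault_ms)   # decent hit rate
bad = avg_paging_cost(0.05, 0.0, 0.001, plain_fault_ms)    # no cache hits
```

With a 60% cache hit rate the compressed path is well under the 8 ms baseline; with no hits it is strictly worse, since every fault then pays the compression overhead on top of the full I/O cost.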
===Increased thrashing===
Less obvious is the fact that the memory used by the compression system reduces the available system memory, causing a corresponding increase in overall paging activity. As more primary storage is used to store compressed data, less is available to programs, which raises the level of paging activity and reduces the effectiveness of the compression system.
For example, in order to maximize the use of the primary storage cache of compressed pages, [[Helix Software Company]]’s Hurricane 2.0 provided a user-settable threshold that allowed adjustment of the rejection level for compression. The program would test-compress the first 256 to 512 bytes of a 4 KB page; if that small region achieved the designated level of compression, the rest of the page would be compressed and retained in a primary storage buffer, while all other pages would be sent to auxiliary storage through the normal paging system. The default setting for this threshold was a compression ratio of 8:1.<ref name="PCMAG-HURR-2"/>
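The test-compression heuristic described above can be sketched as follows. This is a reconstruction for illustration only: zlib stands in for the LZS algorithm Hurricane actually used, and the function and constant names are invented.

```python
import os
import zlib

PAGE_SIZE = 4096    # 4 KB page, as in the description above
PROBE_SIZE = 512    # test-compress only a small prefix of the page
THRESHOLD = 8.0     # default rejection level: an 8:1 compression ratio

def classify_page(page: bytes) -> str:
    """Test-compress a small prefix of the page; only pages whose prefix
    compresses well are worth compressing in full and caching in RAM."""
    probe = page[:PROBE_SIZE]
    ratio = len(probe) / len(zlib.compress(probe))  # zlib stands in for LZS
    if ratio >= THRESHOLD:
        return "compress fully and keep in RAM buffer"
    return "page out uncompressed via normal paging"

# A zeroed page compresses far better than 8:1; random data does not,
# so it is rejected after probing only 512 of its 4096 bytes.
```

The design point is that the probe bounds the CPU cost of a wrong guess: an incompressible page costs only a 512-byte trial compression rather than a full-page one.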
===Price/performance issues===
In hardware implementations the technology also relies on price differentials between the various components of the system, for example, the difference between the cost of RAM and the cost of a processor dedicated to compression. The relative price/performance differences of the various components tend to vary over time. For example, the addition of a compression co-processor may have minimal impact on the cost of a CPU.
In a typical virtual memory implementation, paging happens on a [[least recently used]] basis, potentially causing the compression algorithm to spend CPU cycles on the lowest-priority data. Furthermore, program code is usually read-only and is therefore never paged out; instead, code is simply discarded and re-loaded from the program’s file in auxiliary storage if needed. In this case the bar for compression is higher, since the I/O cycle it attempts to eliminate is much shorter, particularly on flash memory devices.
===Better performance elsewhere===
Other performance enhancements, such as operating-system optimizations, on-CPU caching, and improvements in the speed of the I/O channel, reduce the amount of paging and increase the speed at which paging happens, potentially reducing or eliminating any advantage provided by compression technology.
==History==
Virtual memory compression has gone in and out of favor as a technology. The price and speed of RAM and external storage have plummeted due to [[Moore’s Law]] and improved RAM interfaces such as [[DDR3]], reducing the need for virtual memory compression; meanwhile, multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, have made it more attractive.
===Origins===
[[Helix Software Company]] pioneered virtual memory compression in 1992, filing a patent application for the process in October of that year.<ref name="PAT-5559978"/> In 1994 and 1995, Helix refined the process using test-compression and secondary memory caches on video cards and other devices.<ref name="PAT-5785474"/> However, Helix did not release a product incorporating virtual memory compression until July 1996, with the release of Hurricane 2.0. Hurricane 2.0 used the [[Stac Electronics]] [[Lempel–Ziv–Stac]] compression algorithm and also used off-screen video RAM as a compression buffer to gain performance benefits.<ref name="PCMAG-HURR-2"/>
In 1996, IBM began experimenting with compression, and in 2000 IBM announced its Memory eXpansion Technology (MXT).<ref name="IBM-MXT-NEWS"/><ref name="IBM-MXT-PAPERS"/> MXT was a stand-alone chip which acted as a [[CPU cache]] between the CPU and memory controller. MXT had an integrated compression engine which compressed all data heading to and from primary storage. Subsequent testing of the technology by Intel showed a 5–20% overall system performance improvement, similar to the results PC Magazine obtained with Hurricane.<ref name="IBM-MXT-PERF"/>
===Recent developments===
In early 2008, a [[Linux]] project named [[zram]] (originally compcache) was released, and later incorporated into [[ChromeOS]].<ref name="zram-google-page"/>
On multi-core, multithreaded CPUs, some benchmarks show performance improvements of over 50% in some circumstances.<ref name="zswap-bench"/><ref name="ZRAM-BENCH"/>
==References==