'''Virtual memory compression''' is a [[memory management]] technique that uses [[data compression]] to reduce the size or number of [[paging]] requests to and from [[auxiliary memory]]. Virtual memory compression is distinct from [[garbage collection]] systems, which remove unused memory blocks and in some cases consolidate used memory regions to reduce fragmentation. It is also distinct from [[context switching]] systems, such as Connectix’s RAM Doubler and Apple OS 7.1, in which inactive processes are suspended as a whole and then compressed.<ref name="PAT-5559978"/><ref name="PAT-5785474"/>
In a virtual memory compression system, pages selected for paging out are compressed and either stored in [[primary storage]] (usually [[RAM]]) or sent, still in compressed form, to auxiliary storage. In both cases the original memory is marked inaccessible, and compression reduces the memory footprint of the paged data. In the first case, the freed memory is returned to the general memory pool while the compressed portion is kept in RAM; in the second, the compressed data is written to auxiliary storage, but the resulting I/O operation is smaller and thus takes less time. An attempt to access a compressed page reverses the process: the compressed data is retrieved from auxiliary storage if necessary, decompressed, and made accessible again.
In some implementations such as in [[zswap]], [[zram]] and [[Helix Software Company]]’s Hurricane, the entire process is managed by the operating system. In other systems such as IBM’s MXT, the compression process occurs in a dedicated processor which handles transfers between a local cache and primary storage.
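The in-RAM variant described above can be sketched in a few lines. This is a minimal illustration using Python's `zlib`, not the code of any of the named systems; the pool structure, function names, and the simple "keep if smaller" policy are assumptions made for the example.

```python
# Minimal sketch of compressing evicted pages into a RAM-resident pool.
# Names and the "keep if smaller" policy are illustrative only.
import zlib

PAGE_SIZE = 4096
compressed_pool = {}  # page number -> compressed bytes, kept in primary storage

def page_out(page_no: int, page: bytes) -> str:
    """Compress an evicted page; keep it in RAM when compression saves space."""
    packed = zlib.compress(page)
    if len(packed) < PAGE_SIZE:
        compressed_pool[page_no] = packed  # PAGE_SIZE - len(packed) bytes freed
        return "kept in RAM (compressed)"
    return "sent to auxiliary storage"     # incompressible: pool gives no benefit

def page_in(page_no: int) -> bytes:
    """Reverse the process on a page fault: decompress the stored page."""
    return zlib.decompress(compressed_pool.pop(page_no))

# Round trip: a compressible page stays in RAM and is restored intact.
original = b"\x00" * PAGE_SIZE
page_out(3, original)
assert page_in(3) == original
```

Real implementations add eviction from the pool itself (writing compressed pages onward to auxiliary storage under memory pressure), which this sketch omits.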
===Available virtual memory===
In some situations, such as in embedded devices, auxiliary storage is limited or absent; in these cases, virtual memory compression can stand in for paging to auxiliary storage and increase the amount of virtual memory available to the system.
===Enhanced lifespan===
[[Flash memory]] has endurance limitations on the maximum number of erase cycles it can undergo, which can be as low as 100 erase cycles. In systems where flash memory is used as the only auxiliary storage, virtual memory compression reduces the total quantity of data written to it, extending the usable life of the device.
==Shortcomings==
Since the relationship between paging activity and available memory is exponential, any gains in available memory tend to be offset by significant increases in [[thrashing]].<ref name="DENNING"/> Again, the more compressible the data, the more pronounced the performance improvement, because less primary storage is needed to hold the compressed data.
For example, to maximize the use of the primary-storage cache of compressed pages, [[Helix Software Company]]’s Hurricane 2.0 provided a user-settable threshold that adjusted the rejection level for compression. The program would compress the first 256 to 512 bytes of a 4&nbsp;KiB page; if that sample achieved the designated level of compression, the rest of the page would be compressed and retained in a primary storage buffer, while pages failing the test would be sent to auxiliary storage.
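The sampling heuristic can be sketched as follows. This is a hypothetical illustration of the idea using Python's `zlib`; the function name, constants, and threshold semantics are assumptions, not Hurricane's actual interface.

```python
# Hypothetical sketch of the sampling heuristic: compress only a 512-byte
# prefix of a 4 KiB page, and commit to compressing the whole page only if
# the sample meets a user-set threshold. Names are illustrative only.
import zlib

PAGE_SIZE = 4096
SAMPLE_SIZE = 512

def should_compress(page: bytes, threshold: float = 0.75) -> bool:
    """True if the sampled prefix compresses to <= threshold of its raw size."""
    sample = page[:SAMPLE_SIZE]
    return len(zlib.compress(sample)) <= threshold * len(sample)

# A zero-filled page easily passes the test; pages of already-compressed or
# encrypted data would typically fail it and bypass compression entirely,
# saving the CPU cost of compressing the remaining seven-eighths of the page.
assert should_compress(bytes(PAGE_SIZE))
```

The design trade-off is that the sample's compressibility is only an estimate for the whole page, so an occasional page is compressed (or rejected) wrongly in exchange for a much cheaper test.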
===Price/performance of hardware===
===Prioritization===
In a typical virtual memory implementation, paging happens on a [[least recently used]] basis, potentially causing the compression algorithm to use up CPU cycles dealing with the lowest-priority data. Furthermore, program code is usually read-only and is therefore never paged out; instead, code is simply discarded and re-loaded from the program’s executable file on auxiliary storage when needed.
===Better alternatives===