Virtual memory compression: Difference between revisions

Line 51:
First, the programmer has to manually implement conversions
and the additional instructions that quantize and dequantize
values, imposing programmer effort and performance overhead. Second, to cover outliers, the bitwidth of the quantized
values often becomes greater than or equal to that of the original
values. Third, the programmer has to use standard bitwidths;
otherwise, extracting non-standard bitwidths (i.e., 1–7, 9–15, and
17–31) for representing narrow integers exacerbates the overhead
of software-based quantization. Hardware support in the memory hierarchy of
general-purpose processors for quantization can solve these problems. This hardware support allows representing
Line 67:
ratio for floating-point values and cache blocks with multiple data
types, and (iii) lower overhead for locating the compressed blocks.<ref name="Quant"/>
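As a rough illustration of the software-based quantization overhead described above, the following Python sketch (hypothetical, not taken from the cited work) quantizes integers into a narrow, non-standard bitwidth and escapes to the original width for outliers. It shows the manual conversion instructions and the outlier handling that hardware support would eliminate:

```python
def quantize(values, bits):
    """Encode integers into `bits`-wide codes; outliers keep their original width.

    Each code is an (outlier_flag, value) pair: flag 0 means the value fits
    in the narrow bitwidth, flag 1 means it is stored at full width.
    """
    limit = 1 << bits
    codes = []
    for v in values:
        if 0 <= v < limit:
            codes.append((0, v))   # fits in the narrow bitwidth
        else:
            codes.append((1, v))   # outlier: stored at the original width
    return codes

def dequantize(codes):
    """Recover the original integers (extra instructions on every access)."""
    return [v for _flag, v in codes]

# Usage: a 4-bit quantization where 200 is an outlier.
values = [3, 5, 200, 7]
codes = quantize(values, bits=4)
assert dequantize(codes) == values
```

Every load and store must pass through these conversion routines, and any value outside the narrow range forces full-width storage, which is exactly the programmer effort and bitwidth overhead the passage attributes to software-based quantization.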
 
==History==
Virtual memory compression has gone in and out of favor as a technology. The price and speed of RAM and external storage have plummeted due to [[Moore's Law]] and improved RAM interfaces such as [[DDR3]], reducing the need for virtual memory compression, while multi-core processors, server farms, and mobile technology, together with the advent of flash-based systems, make virtual memory compression more attractive.