Dynamic random-access memory

{{Short description|Type of computer memory}}
{{Use American English|date=June 2025}}
{{Redirect|DRAM||Dram (disambiguation){{!}}Dram}}
{{Hatnote|{{BDprefix|p=b}}}}
 
{{citation style|date=April 2019}}
{{Memory types}}
 
[[Image:MT4C1024-HD.jpg|thumb|right|upright=1.8|A [[Die (integrated circuit)|die]] photograph of the [[Micron Technology]] MT4C1024 DRAM [[integrated circuit]] (1994). It has a capacity of 1&nbsp;[[megabit]] equivalent to <math>2^{20}</math>bits or {{nowrap|128 [[KiB]].}}<ref name=mt4acid>{{cite web |access-date=2016-04-02 |date=2012-11-15 |title=How to "open" microchip and what's inside? : ZeptoBars |url=http://zeptobars.com/en/read/how-to-open-microchip-asic-what-inside |quote=Micron MT4C1024 — 1 mebibit (220 bit) dynamic ram. Widely used in 286 and 386-era computers, early 90s. Die size - 8662x3969μm. |url-status=live |archive-url=https://web.archive.org/web/20160314015357/http://zeptobars.com/en/read/how-to-open-microchip-asic-what-inside |archive-date=2016-03-14 }}</ref>]]
[[File:NeXTcube motherboard.jpg|thumb|[[Motherboard]] of the [[NeXTcube]] computer, 1990, with 64 MiB main memory DRAM (top left) and 256 KiB of [[Video RAM (dual-ported DRAM)|VRAM]]<ref>{{cite web|url=http://www.nextcomputers.org/NeXTfiles/Docs/Hardware/NeXTServiceManualPages1-160_OCR.pdf |title=NeXTServiceManualPages1-160 |date= |access-date=2022-03-09}}</ref> (lower edge, right of middle)]]
'''Dynamic random-access memory''' ('''dynamic RAM''' or '''DRAM''') is a type of [[random-access memory|random-access]] [[semiconductor memory]] that stores each [[bit]] of data in a [[memory cell (computing)|memory cell]], usually consisting of a tiny [[capacitor]] and a [[transistor]], both typically based on [[metal–oxide–semiconductor]] (MOS) technology. While most DRAM memory cell designs use a capacitor and transistor, some only use two transistors. In the designs where a capacitor is used, the capacitor can either be charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. The [[electric charge]] on the capacitors gradually leaks away; without intervention the data on the capacitor would soon be lost. To prevent this, DRAM requires an external ''[[memory refresh]]'' circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to [[static random-access memory]] (SRAM) which does not require data to be refreshed. Unlike [[flash memory]], DRAM is [[volatile memory]] (vs. [[non-volatile memory]]), since it loses its data quickly when power is removed. However, DRAM does exhibit limited [[data remanence]].
 
DRAM typically takes the form of an [[integrated circuit]] chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in [[digital electronics]] where low-cost and high-capacity [[computer memory]] is required. One of the largest applications for DRAM is the ''[[main memory]]'' (colloquially called the "RAM") in modern [[computer]]s and [[graphics card]]s (where the "main memory" is called the ''[[Video random access memory|graphics memory]]''). It is also used in many portable devices and [[video game]] consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the [[CPU cache|cache memories]] in [[Central processing unit|processor]]s.
 
The need to refresh DRAM demands more complicated circuitry and timing than SRAM. This complexity is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high [[Computer storage density|densities]] with a simultaneous reduction in cost per bit. Refreshing the data consumes power, causing a variety of techniques to be used to manage the overall power consumption. For this reason, DRAM usually needs to operate with a [[memory controller]]; the memory controller must know the DRAM's parameters, especially [[memory timings]], to initialize it, and these parameters may differ between DRAM manufacturers and part numbers.
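
The memory controller's initialization code typically carries a table of such parameters. The C sketch below shows what such a table might look like; the structure and field names are invented for illustration (they follow common datasheet conventions), the example values echo the DDR3-1600 "typical" timings tabulated later in this article, and the refresh interval is an assumption.
<syntaxhighlight lang="c">
#include <stdint.h>

/* Hypothetical DRAM timing parameters, expressed in controller clock
 * cycles. The field names follow common datasheet conventions (CL, tRCD,
 * tRP, tRAS, tREFI), but the structure matches no particular controller. */
struct dram_timings {
    uint8_t  cas_latency;  /* CL: /CAS to valid data out              */
    uint8_t  t_rcd;        /* row-to-column (/RAS to /CAS) delay      */
    uint8_t  t_rp;         /* row precharge time                      */
    uint8_t  t_ras;        /* minimum row active time                 */
    uint16_t t_refi;       /* average interval between row refreshes  */
};

/* Example values in the style of the DDR3-1600 "typical" column later in
 * this article; the refresh interval (7.8 us at an 800 MHz clock) is an
 * assumed, typical figure rather than a datasheet value. */
static const struct dram_timings example_part = {
    .cas_latency = 9, .t_rcd = 9, .t_rp = 9, .t_ras = 27, .t_refi = 6240,
};
</syntaxhighlight>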
 
DRAM had a 47% increase in the price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down.<ref>{{cite web|url=http://www.icinsights.com/news/bulletins/Are-The-Major-DRAM-Suppliers-Stunting-DRAM-Demand/|title=Are the Major DRAM Suppliers Stunting DRAM Demand?|website=www.icinsights.com|access-date=2018-04-16|url-status=live|archive-url=https://web.archive.org/web/20180416202834/http://www.icinsights.com/news/bulletins/Are-The-Major-DRAM-Suppliers-Stunting-DRAM-Demand/|archive-date=2018-04-16}}</ref> In 2018, a "key characteristic of the DRAM market is that there are currently only three major suppliers — [[Micron Technology]], [[SK Hynix]] and [[Samsung Electronics]]" that are "keeping a pretty tight rein on their capacity".<ref>{{Cite web |last1=EETimes |last2=Hilson |first2=Gary |date=2018-09-20 |title=DRAM Boom and Bust is Business as Usual |url=https://www.eetimes.com/dram-boom-and-bust-is-business-as-usual/ |access-date=2022-08-03 |website=EETimes}}</ref> There is also [[Kioxia]] (previously Toshiba Memory Corporation, spun off from [[Toshiba]] in 2017), which does not manufacture DRAM. Other manufacturers, such as [[Kingston Technology]], make and sell [[DIMM]]s but not the DRAM chips in them. Some manufacturers, such as [[Viking Technology]], sell [[stacked DRAM]] (used, e.g., in the fastest [[supercomputer]]s on the [[exascale computing|exascale]]) separately. Others sell it integrated into other products, such as [[Fujitsu]] in its CPUs, AMD in its GPUs, and [[Nvidia]], with [[HBM2]] in some of its GPU chips.
The [[cryptanalysis|cryptanalytic]] machine code-named ''Aquarius'' used at [[Bletchley Park]] during [[World War II]] incorporated a hard-wired dynamic memory. Paper tape was read and the characters on it "were remembered in a dynamic store." The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross (1) and an uncharged capacitor dot (0). Since the charge gradually leaked away, a periodic pulse was applied to top up those still charged (hence the term 'dynamic')".<ref>{{cite book |first1=B. Jack |last1=Copeland |title=Colossus: The secrets of Bletchley Park's code-breaking computers |url=https://books.google.com/books?id=YiiQDwAAQBAJ&pg=PA301 |date=2010 |publisher=Oxford University Press |isbn=978-0-19-157366-8 |page=301}}</ref>
 
In November 1965, [[Toshiba]] introduced a bipolar dynamic RAM for its [[electronic calculator]] Toscal BC-1411.<ref name="toscal">{{cite web|url=http://www.oldcalculatormuseum.com/s-toshbc1411.html|title=Spec Sheet for Toshiba "TOSCAL" BC-1411|website=www.oldcalculatormuseum.com|access-date=8 May 2018|url-status=live|archive-url=https://web.archive.org/web/20170703071307/http://www.oldcalculatormuseum.com/s-toshbc1411.html|archive-date=3 July 2017}}</ref><ref>{{cite web |url=http://collection.sciencemuseum.org.uk/objects/co8406093/toscal-bc-1411-calculator-with-electronic-calculator |title=Toscal BC-1411 calculator |publisher=[[Science Museum, London]] |archive-url=https://web.archive.org/web/20170729145228/http://collection.sciencemuseum.org.uk/objects/co8406093/toscal-bc-1411-calculator-with-electronic-calculator |archive-date=2017-07-29}}</ref><ref>{{cite web |url=http://www.oldcalculatormuseum.com/toshbc1411.html |title=Toshiba "Toscal" BC-1411 Desktop Calculator |archive-url=https://web.archive.org/web/20070520202433/http://www.oldcalculatormuseum.com/toshbc1411.html |archive-date=2007-05-20}}</ref> In 1966, Tomohisa Yoshimaru and Hiroshi Komikawa from Toshiba applied for a Japanese patent on a memory circuit composed of several transistors and a capacitor; in 1967, they applied for a patent in the US.<ref>{{cite web |title=Memory Circuit |url= https://patents.google.com/patent/US3550092A/en?q=(memory+)&assignee=Toshiba+Corp&before=priority:19670101&after=priority:19640101|website=[[Google Patents]] |access-date=18 June 2023}}</ref>
 
The earliest forms of DRAM mentioned above used [[bipolar transistors]]. While it offered improved performance over [[magnetic-core memory]], bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory.<ref>{{cite web |title=1966: Semiconductor RAMs Serve High-speed Storage Needs |url=https://www.computerhistory.org/siliconengine/semiconductor-rams-serve-high-speed-storage-needs/ |website=Computer History Museum}}</ref> Capacitors had also been used for earlier memory schemes, such as the drum of the [[Atanasoff–Berry Computer]], the [[Williams tube]] and the [[Selectron tube]].
 
=== Single MOS DRAM ===
In 1966, Dr. [[Robert Dennard]] invented modern DRAM architecture in which there is a single MOS transistor per capacitor,<ref name="ibm100">{{cite web |date=9 August 2017 |title=DRAM |url=https://www.ibm.com/ibm/history/ibm100/us/en/icons/dram/ |access-date=20 September 2019 |website=IBM100 |publisher=[[IBM]]}}</ref> at the [[IBM Thomas J. Watson Research Center]], while he was working on MOS memory and was trying to create an alternative to SRAM, which required six MOS transistors for each [[bit]] of data. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of the single-transistor MOS DRAM memory cell.<ref>{{cite web |title=IBM100 — DRAM |url=https://www.ibm.com/ibm/history/ibm100/us/en/icons/dram/ |website=IBM |date=9 August 2017}}</ref> He filed a patent in 1967, and was granted U.S. patent number [https://web.archive.org/web/20151231134927/http://patft1.uspto.gov/netacgi/nph-Parser?patentnumber=3387286 3,387,286] in 1968.<ref>{{cite web |title=Robert Dennard |url=https://www.britannica.com/biography/Robert-Dennard |website=Encyclopedia Britannica|date=September 2023 }}</ref> MOS memory offered higher performance, was cheaper, and consumed less power than magnetic-core memory.<ref name="computerhistory1970">{{cite web |title=1970: Semiconductors compete with magnetic cores |url=https://www.computerhistory.org/storageengine/semiconductors-compete-with-magnetic-cores/ |website=[[Computer History Museum]]}}</ref> The patent describes the invention: "Each cell is formed, in one embodiment, using a single field-effect transistor and a single capacitor."<ref>{{Cite patent|number=US3387286A|title=Field-effect transistor memory|gdate=1968-06-04|invent1=Dennard|inventor1-first=Robert H.|url=https://patents.google.com/patent/US3387286A}}</ref>
 
MOS DRAM chips were commercialized in 1969 by Advanced Memory Systems, Inc. of [[Sunnyvale, California|Sunnyvale, CA]]. This 1024-bit chip was sold to [[Honeywell]], [[Raytheon]], [[Wang Laboratories]], and others.
The same year, Honeywell asked [[Intel]] to make a DRAM using a three-transistor cell that they had developed. This became the Intel 1102 in early 1970.<ref>{{cite web|url=http://inventors.about.com/library/weekly/aa100898.htm|archive-url=https://archive.today/20130306105823/http://inventors.about.com/library/weekly/aa100898.htm|url-status=dead|archive-date=March 6, 2013|title=Who Invented the Intel 1103 DRAM Chip?|publisher=ThoughtCo|author=Mary Bellis|date=23 Feb 2018|access-date=27 Feb 2018}}</ref> However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell. This became the first commercially available DRAM, the [[Intel 1103]], in October 1970, despite initial problems with low yield until the fifth revision of the [[photomask|mask]]s. The 1103 was designed by Joel Karp and laid out by Pat Earhart. The masks were cut by Barbara Maness and Judy Garcia.<ref>{{cite web |url=http://archive.computerhistory.org/resources/still-image/PENDING/X3665.2007/Semi_SIG/Notes%20from%20interview%20with%20John%20Reed.pdf |title=Archived copy |access-date=2014-01-15 |url-status=dead |archive-url=https://web.archive.org/web/20140116124021/http://archive.computerhistory.org/resources/still-image/PENDING/X3665.2007/Semi_SIG/Notes%20from%20interview%20with%20John%20Reed.pdf |archive-date=2014-01-16 }}</ref>{{original research inline|date=December 2016}} MOS memory overtook magnetic-core memory as the dominant memory technology in the early 1970s.<ref name="computerhistory1970"/>
 
The first DRAM with multiplexed row and column [[address bus|address lines]] was the [[Mostek]] MK4096 4&nbsp;Kbit DRAM designed by Robert Proebsting and introduced in 1973. This addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles. This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a very robust design for customer applications. At the 16&nbsp;Kbit density, the cost advantage increased; the 16&nbsp;Kbit Mostek MK4116 DRAM,<ref>{{cite web |first=Ken |last=Shirriff |title=Reverse-engineering the classic MK4116 16-kilobit DRAM chip |date=November 2020 |url=https://www.righto.com/2020/11/reverse-engineering-classic-mk4116-16.html}}</ref><ref>{{cite web |first=Robert |last=Proebsting |interviewer=Hendrie, Gardner |title=Oral History of Robert Proebsting |date=14 September 2005 |publisher=Computer History Museum |id=X3274.2006 |url=https://www.cs.utexas.edu/~hunt/class/2016-spring/cs350c/documents/Robert-Proebsting.pdf}}</ref> introduced in 1976, achieved greater than 75% worldwide DRAM market share. However, as density increased to 64&nbsp;Kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers, which dominated the US and worldwide markets during the 1980s and 1990s.
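
As an illustration of the scheme, the following C sketch splits a 12-bit cell address into the two 6-bit halves that would be presented on the MK4096's shared pins. The helper functions are hypothetical stand-ins for the latches strobed by {{overline|RAS}} and {{overline|CAS}}, and treating the high half as the row is an arbitrary choice made here for illustration.
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

#define ADDR_PINS 6                         /* MK4096: six shared address pins */
#define ADDR_MASK ((1u << ADDR_PINS) - 1)

/* Hypothetical helpers standing in for the on-chip latches strobed by
 * /RAS and /CAS; here they only print what is placed on the pins. */
static void present_row(unsigned bits)    { printf("row half:    %u\n", bits); }
static void present_column(unsigned bits) { printf("column half: %u\n", bits); }

/* Split a 12-bit cell address into two halves sent over the six pins
 * on alternating bus cycles. */
static void address_cell(uint16_t full_address)
{
    present_row((full_address >> ADDR_PINS) & ADDR_MASK);  /* first bus cycle  */
    present_column(full_address & ADDR_MASK);              /* second bus cycle */
}

int main(void) { address_cell(0xABC); return 0; }   /* any 12-bit address */
</syntaxhighlight>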
 
Early in 1985, [[Gordon Moore]] decided to withdraw Intel from producing DRAM.<ref>{{cite web |url=http://www.shmj.or.jp/makimoto/en/pdf/makimoto_E_01_12.pdf |title=Outbreak of Japan-US Semiconductor War |archive-url=https://web.archive.org/web/20200229223250/http://www.shmj.or.jp/makimoto/en/pdf/makimoto_E_01_12.pdf |archive-date=2020-02-29}}</ref>
By 1986, many, but not all, United States chip makers had stopped making DRAMs.<ref>{{cite book |first1=William R. |last1=Nester |title=American Industrial Policy: Free or Managed Markets? |url=https://books.google.com/books?id=hCi_DAAAQBAJ |date=2016 |publisher=Springer |isbn=978-1-349-25568-9 |page=115}}</ref> Micron Technology and Texas Instruments continued to produce them commercially, and IBM produced them for internal use.
 
|newspaper=New York Times
|date=3 August 1985}}
</ref><ref>
{{cite news |first1=Donald |last1=Woutat.
|url=https://www.latimes.com/archives/la-xpm-1985-12-04-fi-625-story.html |title=6 Japan Chip Makers Cited for Dumping
|newspaper=Los Angeles Times
|date=4 November 1985}}
</ref><ref>
{{cite news |url=https://www.latimes.com/archives/la-xpm-1986-03-14-fi-20761-story.html |title=More Japan Firms Accused: U.S. Contends 5 Companies Dumped Chips
|newspaper=Los Angeles Times
|date=1986}}
</ref><ref>
{{cite news |first1=David E. |last1=Sanger
|url=https://www.nytimes.com/1987/11/03/business/japanese-chip-dumping-has-ended-us-finds.html |title=Japanese Chip Dumping Has Ended, U.S. Finds
Later, in 2001, Japanese DRAM makers accused Korean DRAM manufacturers of dumping.<ref>
{{cite web |author1=Kuriko Miyake
|url=https://edition.cnn.com/2001/TECH/industry/10/25/chip.dumping.idg/ |title=Japanese chip makers say they suspect dumping by Korean firms |publisher=CNN
|date=2001}}
</ref><ref>
{{cite news |url=https://www.itworld.com/article/2794396/japanese-chip-makers-suspect-dumping-by-korean-firms.html |title=Japanese chip makers suspect dumping by Korean firms |newspaper=ITWorld
|date=2001}}
</ref><ref>
{{cite web |url=https://www.eetimes.com/dram-pricing-investigation-in-japan-targets-hynix-samsung/ |title=DRAM pricing investigation in Japan targets Hynix, Samsung
|date=2001 |publisher=EETimes }}
</ref><ref>
{{cite web |url=https://phys.org/news/2006-01-korean-dram-japan.html |title=Korean DRAM finds itself shut out of Japan |publisher=Phys.org
|date=2006 }}
}}</ref>
 
The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in the column (the illustration to the right does not include this important detail). They are generally known as the ''+'' and ''&minus;'' bit lines.
 
A [[sense amplifier]] is essentially a pair of cross-connected [[inverter (logic gate)|inverter]]s between the bit-lines. The first inverter is connected with input from the + bit-line and output to the − bit-line. The second inverter's input is from the − bit-line with output to the + bit-line. This results in [[positive feedback]] which stabilizes after one bit-line is fully at its highest voltage and the other bit-line is at the lowest possible voltage.
# The precharge circuit is switched off. Because the bit-lines are relatively long, they have enough [[capacitance]] to maintain the precharged voltage for a brief time. This is an example of [[dynamic logic (digital logic)|dynamic logic]].<ref name="Kenner:24,30"/>
# The desired row's word-line is then driven high to connect a cell's storage capacitor to its bit-line. This causes the transistor to conduct, transferring [[Electric charge|charge]] from the storage cell to the connected bit-line (if the stored value is 1) or from the connected bit-line to the storage cell (if the stored value is 0). Since the capacitance of the bit-line is typically much higher than the capacitance of the storage cell, the voltage on the bit-line increases very slightly if the storage cell's capacitor is discharged and decreases very slightly if the storage cell is charged (e.g., 0.54 and 0.45&nbsp;V in the two cases). As the other bit-line holds 0.50&nbsp;V there is a small voltage difference between the two twisted bit-lines.<ref name="Kenner:24,30"/>
# The sense amplifiers are now connected to the bit-line pairs. Positive feedback then occurs from the cross-connected inverters, thereby amplifying the small voltage difference between the odd and even row bit-lines of a particular column until one bit line is fully at the lowest voltage and the other is at the maximum high voltage. Once this has happened, the row is ''open'' (the desired cell data is available).<ref name="Kenner:24,30"/>
# All storage cells in the open row are sensed simultaneously, and the sense amplifier outputs latched. A column address then selects which latch bit to connect to the external data bus. Reads of different columns in the same row can be performed without a [[Memory timings|row opening delay]] because, for the open row, all data has already been sensed and latched.<ref name="Kenner:24,30"/>
# While reading of columns in an open row is occurring, current is flowing back up the bit-lines from the output of the sense amplifiers and recharging the storage cells. This reinforces (i.e. "refreshes") the charge in the storage cell by increasing the voltage in the storage capacitor if it was charged to begin with, or by keeping it discharged if it was empty. Note that due to the length of the bit-lines there is a fairly long propagation delay for the charge to be transferred back to the cell's capacitor. This takes significant time past the end of sense amplification, and thus overlaps with one or more column reads.<ref name="Kenner:24,30"/>
# When done with reading all the columns in the current open row, the word-line is switched off to disconnect the storage cell capacitors (the row is "closed") from the bit-lines. The sense amplifier is switched off, and the bit-lines are precharged again.<ref name="Kenner:24,30"/>
 
===To write to memory===
[[File:Square array of mosfet cells write.png|thumb|250px|right|Writing to a DRAM cell]]
 
To store data, a row is opened and a given column's sense amplifier is temporarily forced to the desired high- or low-voltage state, thus causing the bit-line to charge or discharge the cell storage capacitor to the desired value. Due to the sense amplifier's positive feedback configuration, it will hold a bit-line at a stable voltage even after the forcing voltage is removed. During a write to a particular cell, all the columns in a row are sensed simultaneously just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in), as illustrated in the figure to the right.<ref name="Kenner:24,30"/>
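
The read and write sequences can be summarized as pseudocode. The following C sketch is only a conceptual trace of the phases of one row cycle described in this section; the helper names are invented and stand in for analog circuitry, not for real controller or device logic.
<syntaxhighlight lang="c">
#include <stdio.h>

/* Conceptual trace of one DRAM row cycle. The helpers stand in for the
 * analog circuitry described above; they only print the phase performed. */
static void precharge_bitlines(void)     { puts("precharge bit-lines to Vcc/2"); }
static void open_row(int row)            { printf("word-line %d high; sense amplifiers latch the row\n", row); }
static int  read_column(int col)         { printf("select column %d from the latched row\n", col); return 1; }
static void force_column(int col, int v) { printf("force column %d sense amplifier to %d\n", col, v); }
static void close_row(void)              { puts("word-line low; precharge for the next access"); }

int main(void)
{
    precharge_bitlines();        /* bit-lines left floating at Vcc/2        */
    open_row(42);                /* charge sharing, then positive feedback  */
    int bit = read_column(7);    /* further reads in the open row are fast  */
    force_column(9, !bit);       /* a write overdrives one column           */
    /* the amplifiers also rewrite (refresh) every cell in the open row     */
    close_row();
    return 0;
}
</syntaxhighlight>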
 
===Refresh rate===
The row address of the row that will be refreshed next is maintained by external logic or a [[Counter (digital)|counter]] within the DRAM. A system that provides the row address (and the refresh command) does so to have greater control over when to refresh and which row to refresh. This is done to minimize conflicts with memory accesses, since such a system has both knowledge of the memory access patterns and the refresh requirements of the DRAM. When the row address is supplied by a counter within the DRAM, the system relinquishes control over which row is refreshed and only provides the refresh command. Some modern DRAMs are capable of self-refresh; no external logic is required to instruct the DRAM to refresh or to provide a row address.
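
As a rough illustration of how distributed refresh is scheduled, the sketch below derives the per-row refresh interval from a refresh period and a row count; the 64&nbsp;ms and 8,192-row figures are typical modern values assumed here only for the arithmetic, not taken from any particular device.
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Assumed, typical figures -- not taken from any specific datasheet. */
    const uint32_t refresh_period_us = 64000; /* every row within 64 ms   */
    const uint32_t rows              = 8192;  /* rows to cycle through    */

    /* Distributed refresh: issue one row refresh every period / rows.    */
    double per_row_us = (double)refresh_period_us / rows;   /* ~7.8 us    */

    /* An external counter (or one inside the DRAM) simply wraps through
     * the row addresses; the system only issues the refresh commands.    */
    uint32_t next_row = 0;
    next_row = (next_row + 1) % rows;

    printf("refresh one row about every %.1f us (next row %u)\n",
           per_row_us, (unsigned)next_row);
    return 0;
}
</syntaxhighlight>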
 
Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes.<ref>{{cite journal |url=https://www.usenix.org/legacy/event/sec08/tech/full_papers/halderman/halderman_html/ |title=Lest We Remember: Cold Boot Attacks on Encryption Keys |author=Halderman |display-authors=etal |journal=USENIX Security |date=2008 |archive-url=https://web.archive.org/web/20150105103510/https://www.usenix.org/legacy/event/sec08/tech/full_papers/halderman/halderman_html/ |archive-date=2015-01-05}}</ref>
 
===Memory timing===
 
{|class="wikitable" style="text-align:center;"
|+ Asynchronous DRAM typical timing
|-
!||"50&nbsp;ns"||"60&nbsp;ns"||Description
|-
|''t''<sub>CAS</sub>||8&nbsp;ns||10&nbsp;ns||align=left|/CAS low pulse width minimum
|}
Thus, the generally quoted number is the minimum /RAS low to valid data out time. This is the time to open a row, settle the sense amplifiers, and deliver the selected column data to the output; it is also the minimum /RAS low time, which includes the time for the amplified data to be delivered back to recharge the cells. The time to read additional bits from an open page is much less, defined by the /CAS to /CAS cycle time. The quoted number is the clearest way to compare the performance of different DRAM memories, as it sets the slower limit regardless of the row length or page size. Bigger arrays necessarily result in larger bit-line capacitance and longer propagation delays, which cause this time to increase, as the sense amplifier settling time depends on both the capacitance and the propagation latency. This is countered in modern DRAM chips by instead integrating many more complete DRAM arrays within a single chip, to accommodate more capacity without becoming too slow.
 
When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100&nbsp;MHz state machine (i.e. a 10&nbsp;ns clock), the 50&nbsp;ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles. This was generally described as {{nowrap|"5-2-2-2"}} timing, as bursts of four reads within a page were common.
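
The rounding described above can be reproduced with a short calculation. The C sketch below derives the 5-2-2-2 figure from the 50&nbsp;ns access time and 20&nbsp;ns page-cycle time quoted in this section, assuming a 10&nbsp;ns controller clock.
<syntaxhighlight lang="c">
#include <stdio.h>

/* Round a DRAM timing up to whole controller clock cycles. */
static int to_cycles(int time_ns, int clock_ns)
{
    return (time_ns + clock_ns - 1) / clock_ns;   /* ceiling division */
}

int main(void)
{
    const int clock_ns = 10;   /* 100 MHz state machine            */
    const int t_rac    = 50;   /* /RAS to valid data out (ns)      */
    const int t_pc     = 20;   /* page-mode cycle time (ns)        */

    int first = to_cycles(t_rac, clock_ns);   /* 5 cycles            */
    int later = to_cycles(t_pc, clock_ns);    /* 2 cycles each       */

    printf("%d-%d-%d-%d timing, %d cycles for a burst of four\n",
           first, later, later, later, first + 3 * later);   /* 5-2-2-2, 11 */
    return 0;
}
</syntaxhighlight>
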
When describing synchronous memory, timing is described by clock cycle counts separated by hyphens. These numbers represent {{nowrap|''t''<sub>CL</sub>-''t''<sub>RCD</sub>-''t''<sub>RP</sub>-''t''<sub>RAS</sub>}} in multiples of the DRAM clock cycle time. Note that this is half of the data transfer rate when [[double data rate]] signaling is used. JEDEC standard PC3200 timing is {{nowrap|3-4-4-8}}<ref>{{cite web|title=Corsair CMX1024-3200 (1&nbsp;GByte, two bank unbuffered DDR SDRAM DIMM)|url=http://www.corsairmemory.com/corsair/products/specs/cmx1024-3200.pdf|archive-url=https://web.archive.org/web/20080911032322/http://www.corsairmemory.com/_datasheets/cmx1024-3200.pdf|archive-date=11 September 2008|date=December 2003}}</ref> with a 200&nbsp;MHz clock, while premium-priced high performance PC3200 DDR DRAM DIMM might be operated at {{nowrap|2-2-2-5}} timing.<ref>{{cite web|title=Corsair TWINX1024-3200XL dual-channel memory kit|url=http://www.corsairmemory.com/corsair/products/specs/twinx1024-3200xl.pdf|archive-url=https://web.archive.org/web/20061207112238/http://www.corsairmemory.com/corsair/products/specs/twinx1024-3200xl.pdf|archive-date=7 December 2006|date=May 2004}}</ref>
{|class="wikitable" style="text-align:center;"
|+ Synchronous DRAM typical timing
!rowspan=32 colspan=2| ||colspan=42|PC-3200 (DDR-400)||colspan=42|PC2-6400 (DDR2-800)||colspan=42|PC3-12800 (DDR3-1600)||rowspan=32|Description
|-
!cycles||time||cycles||time||cycles||time||cycles||time||cycles||time||cycles||time
!colspan=2|Typical||colspan=2|Fast||colspan=2|Typical||colspan=2|Fast||colspan=2|Typical||colspan=2|Fast
|-
!rowspan=2|''t''<sub>CL</sub>||Typical
!cycles||time||cycles||time||cycles||time||cycles||time||cycles||time||cycles||time
|3||15&nbsp;ns||5||12.5&nbsp;ns||9||11.25&nbsp;ns
|rowspan=2 align=left|/CAS low to valid data out (equivalent to ''t''<sub>CAC</sub>)
|-
!Fast
|''t''<sub>CL</sub>||3||15&nbsp;ns||2||10&nbsp;ns||5||12.5&nbsp;ns||4||10&nbsp;ns||9||11.25&nbsp;ns||8||10&nbsp;ns||align=left|/CAS low to valid data out (equivalent to ''t''<sub>CAC</sub>)
|2||10&nbsp;ns||4||10&nbsp;ns||8||10&nbsp;ns
|-
!rowspan=2|''t''<sub>RCD</sub>||Typical
|4||20&nbsp;ns||2||10&nbsp;ns||5||12.5&nbsp;ns||4||10&nbsp;ns||9||11.25&nbsp;ns
||8||10&nbsp;ns||rowspan=2 align=left|/RAS low to /CAS low time
|-
!Fast
|''t''<sub>RP</sub>||4||20&nbsp;ns||2||10&nbsp;ns||5||12.5&nbsp;ns||4||10&nbsp;ns||9||11.25&nbsp;ns||8||10&nbsp;ns||align=left|/RAS precharge time (minimum precharge to active time)
|2||10&nbsp;ns||4||10&nbsp;ns||8||10&nbsp;ns
|-
!rowspan=2|''t''<sub>RP</sub>||Typical
|''t''<sub>RAS</sub>||8||40&nbsp;ns||5||25&nbsp;ns||16||40&nbsp;ns||12||30&nbsp;ns||27||33.75&nbsp;ns||24||30&nbsp;ns||align=left|Row active time (minimum active to precharge time)
|4||20&nbsp;ns||5||12.5&nbsp;ns||9||11.25&nbsp;ns
|rowspan=2 align=left|/RAS precharge time (minimum precharge to active time)
|-
!Fast
|2||10&nbsp;ns||4||10&nbsp;ns||8||10&nbsp;ns
|-
!rowspan=2|''t''<sub>RAS</sub>||Typical
|8||40&nbsp;ns||16||40&nbsp;ns||27||33.75&nbsp;ns
|rowspan=2 align=left|Row active time (minimum active to precharge time)
|-
!Fast
|5||25&nbsp;ns||12||30&nbsp;ns||24||30&nbsp;ns
|}
Minimum random access time has improved from ''t''<sub>RAC</sub>&nbsp;=&nbsp;50&nbsp;ns to {{nowrap|1=''t''<sub>RCD</sub> + ''t''<sub>CL</sub> = 22.5&nbsp;ns}}, and even the premium 20&nbsp;ns variety is only 2.5 times faster than the typical asynchronous DRAM. [[CAS latency]] has improved even less, from {{nowrap|1=''t''<sub>CAC</sub> = 13&nbsp;ns}} to 10&nbsp;ns. However, the DDR3 memory does achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25&nbsp;ns {{gaps|(1|600|u=Mword/s)}}, while the EDO DRAM can output one word per ''t''<sub>PC</sub>&nbsp;=&nbsp;20&nbsp;ns (50&nbsp;Mword/s).
 
====Timing abbreviations====
==Memory cell design==
{{See also|Memory cell (computing)}}
Each bit of data in a DRAM is stored as a positive or negative electrical charge in a capacitive structure. The structure providing the capacitance, as well as the transistors that control access to it, is collectively referred to as a ''DRAM cell''. These cells are the fundamental building blocks of DRAM arrays. Multiple DRAM memory cell variants exist, but the most commonly used variant in modern DRAMs is the one-transistor, one-capacitor (1T1C) cell. The transistor is used to admit current into the capacitor during writes, and to discharge the capacitor during reads. The access transistor is designed to maximize drive strength and minimize transistor-transistor leakage (Kenner, p.&nbsp;34). <!--The design of the memory cell varies by DRAM manufacturer and process.-->
 
The capacitor has two terminals, one of which is connected to its access transistor, and the other to either ground or V<sub>CC</sub>/2. In modern DRAMs, the latter case is more common, since it allows faster operation. With this arrangement, a voltage of +V<sub>CC</sub>/2 across the capacitor is required to store a logic one, and a voltage of &minus;V<sub>CC</sub>/2 across the capacitor is required to store a logic zero. The resultant charge is <math display="inline">Q = \pm {V_{CC} \over 2} \cdot C</math>, where ''Q'' is the charge in [[coulomb]]s and ''C'' is the capacitance in [[farad]]s.<ref name="Kenner:22">{{harvnb|Keeth|Baker|Johnson|Lin|2007|p=22}}</ref>
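
For a sense of scale, assuming an illustrative cell capacitance of 25&nbsp;fF and V<sub>CC</sub>&nbsp;=&nbsp;1.2&nbsp;V (neither figure is specific to any real part), the stored charge would be
<math display="block">Q = \pm\frac{V_{CC}}{2} \cdot C = \pm 0.6\ \text{V} \times 25\ \text{fF} = \pm 15\ \text{fC},</math>
which corresponds to roughly 90,000 electrons.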
 
Reading or writing a logic one requires that the wordline be driven to a voltage greater than the sum of V<sub>CC</sub> and the access transistor's threshold voltage (V<sub>TH</sub>). This voltage is called ''V<sub>CC</sub> pumped'' (V<sub>CCP</sub>). The time required to discharge a capacitor thus depends on what logic value is stored in the capacitor. A capacitor containing logic one begins to discharge when the voltage at the access transistor's gate terminal is above V<sub>CCP</sub>. If the capacitor contains a logic zero, it begins to discharge when the gate terminal voltage is above V<sub>TH</sub>.<ref name="Kenner:24">{{harvnb|Keeth|Baker|Johnson|Lin|2007|p=24}}</ref>
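
For example, assuming V<sub>CC</sub>&nbsp;=&nbsp;2.5&nbsp;V and an access transistor threshold voltage of 0.7&nbsp;V (illustrative values only), the wordline would have to be pumped to
<math display="block">V_{CCP} > V_{CC} + V_{TH} = 2.5\ \text{V} + 0.7\ \text{V} = 3.2\ \text{V}.</math>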
 
===Capacitor design===
Up until the mid-1980s, the capacitors in DRAM cells were co-planar with the access transistor (they were constructed on the surface of the substrate), thus they were referred to as ''planar'' capacitors. The drive to increase both density and, to a lesser extent, performance, required denser designs. This was strongly motivated by economics, a major consideration for DRAM devices, especially commodity DRAMs. The minimization of DRAM cell area can produce a denser device and lower the cost per bit of storage. Starting in the mid-1980s, the capacitor was moved above or below the silicon substrate in order to meet these objectives. DRAM cells featuring capacitors above the substrate are referred to as ''stacked'' or ''folded plate'' capacitors. Those with capacitors buried beneath the substrate surface are referred to as ''trench'' capacitors. In the 2000s, manufacturers were sharply divided by the type of capacitor used in their DRAMs, and the relative cost and long-term scalability of both designs have been the subject of extensive debate. The majority of DRAMs, from major manufacturers such as [[Hynix]], [[Micron Technology]], and [[Samsung Electronics]], use the stacked capacitor structure,<!--where a cylindrical and tall capacitor is stacked on top of the transistor--> whereas smaller manufacturers such as Nanya Technology use the trench capacitor structure (Jacob, pp.&nbsp;355–357).
 
The capacitor in the stacked capacitor scheme is constructed above the surface of the substrate. The capacitor is constructed from an oxide-nitride-oxide (ONO) dielectric sandwiched in between two layers of polysilicon plates (the top plate is shared by all DRAM cells in an IC), and its shape can be a rectangle, a cylinder, or some other more complex shape. There are two basic variations of the stacked capacitor, based on its ___location relative to the bitline: capacitor-under-bitline (CUB) and capacitor-over-bitline (COB). In the former variation, the capacitor is underneath the bitline, which is usually made of metal, and the bitline has a polysilicon contact that extends downwards to connect it to the access transistor's source terminal. In the latter variation, the capacitor is constructed above the bitline, which is almost always made of polysilicon, but is otherwise identical to the CUB variation. The advantage the COB variant possesses is the ease of fabricating the contact between the bitline and the access transistor's source, as it is physically close to the substrate surface. However, this requires the active area to be laid out at a 45-degree angle when viewed from above, which makes it difficult to ensure that the capacitor contact does not touch the bitline. CUB cells avoid this, but suffer from difficulties in inserting contacts in between bitlines, since the size of features this close to the surface is at or near the minimum feature size of the process technology (Kenner, pp.&nbsp;33–42).
 
The trench capacitor is constructed by etching a deep hole into the silicon substrate. The substrate volume surrounding the hole is then heavily doped to produce a buried n<sup>+</sup> plate with low resistance. A layer of oxide-nitride-oxide dielectric is grown or deposited, and finally the hole is filled by depositing doped polysilicon, which forms the top plate of the capacitor. The top of the capacitor is connected to the access transistor's drain terminal via a polysilicon strap (Kenner, pp.&nbsp;42–44). A trench capacitor's depth-to-width ratio in DRAMs of the mid-2000s can exceed 50:1 (Jacob, p.&nbsp;357).
 
Trench capacitors have numerous advantages. Since the capacitor is buried in the bulk of the substrate instead of lying on its surface, the area it occupies can be minimized to what is required to connect it to the access transistor's drain terminal without decreasing the capacitor's size, and thus capacitance (Jacob, pp.&nbsp;356–357). Alternatively, the capacitance can be increased by etching a deeper hole without any increase to surface area (Kenner, p.&nbsp;44). Another advantage of the trench capacitor is that its structure is under the layers of metal interconnect, allowing them to be more easily made planar, which enables it to be integrated in a logic-optimized process technology, which has many levels of interconnect above the substrate. The fact that the capacitor is under the logic means that it is constructed before the transistors are. This allows high-temperature processes to fabricate the capacitors, which would otherwise degrade the logic transistors and their performance. This makes trench capacitors suitable for constructing [[embedded DRAM]] (eDRAM) (Jacob, p.&nbsp;357). Disadvantages of trench capacitors are difficulties in reliably constructing the capacitor's structures within deep holes and in connecting the capacitor to the access transistor's drain terminal (Kenner, p.&nbsp;44).
 
===Historical cell designs===
First-generation DRAM ICs (those with capacities of 1&nbsp;Kbit), such as the archetypical [[Intel 1103]], used a three-transistor, one-capacitor (3T1C) DRAM cell with separate read and write circuitry. The write wordline drove a write transistor which connected the capacitor to the write bitline just as in the 1T1C cell, but there was a separate read wordline and read transistor which connected an amplifier transistor to the read bitline. By the second generation, the drive to reduce cost by fitting the same amount of bits in a smaller area led to the almost universal adoption of the 1T1C DRAM cell, although a couple of devices with 4 and 16&nbsp;Kbit capacities continued to use the 3T1C cell for performance reasons (Kenner, p.&nbsp;6). These performance advantages included, most significantly, the ability to read the state stored by the capacitor without discharging it, avoiding the need to write back what was read out (non-destructive read). A second performance advantage relates to the 3T1C cell's separate transistors for reading and writing; the memory controller can exploit this feature to perform atomic read-modify-writes, where a value is read, modified, and then written back as a single, indivisible operation (Jacob, p.&nbsp;459).
 
===Proposed cell designs===
The one-transistor, zero-capacitor (1T, or 1T0C) DRAM cell has been a topic of research since the late 1990s. ''1T DRAM'' is a different way of constructing the basic DRAM memory cell, distinct from the classic one-transistor/one-capacitor (1T/1C) DRAM cell, which is also sometimes referred to as ''1T DRAM'', particularly in comparison to the 3T and 4T DRAM which it replaced in the 1970s.
 
In 1T DRAM cells, the bit of data is still stored in a capacitive region controlled by a transistor, but this capacitance is no longer provided by a separate capacitor. 1T DRAM is a "capacitorless" bit cell design that stores data using the parasitic body capacitance that is inherent to [[silicon on insulator]] (SOI) transistors. Considered a nuisance in logic design, this [[floating body effect]] can be used for data storage. This gives 1T DRAM cells the greatest density as well as allowing easier integration with high-performance logic circuits since they are constructed with the same SOI process technologies.<ref>{{cite web |url=https://aes2.org/publications/par/num/ |title=Pro Audio Reference |access-date=2024-08-08}}</ref>
 
Refreshing of cells remains necessary, but unlike with 1T1C DRAM, reads in 1T DRAM are non-destructive; the stored charge causes a detectable shift in the [[threshold voltage]] of the transistor.<ref>{{cite conference|first=Jean-Michel|last=Sallese|title=Principles of the 1T Dynamic Access Memory Concept on SOI|conference=MOS Modeling and Parameter Extraction Group Meeting|___location=Wroclaw, Poland|date=2002-06-20|url=http://legwww.epfl.ch/ekv/mos-ak/wroclaw/MOS-AK_JMS.pdf|access-date=2007-10-07|url-status=dead|archive-url=https://web.archive.org/web/20071129114317/http://legwww.epfl.ch/ekv/mos-ak/wroclaw/MOS-AK_JMS.pdf|archive-date=2007-11-29}}</ref> Performance-wise, access times are significantly better than capacitor-based DRAMs, but slightly worse than SRAM. There are several types of 1T DRAMs: the commercialized [[Z-RAM]] from Innovative Silicon, the TTRAM<ref>{{cite book|author1=F. Morishita|title=Proceedings of the IEEE 2005 Custom Integrated Circuits Conference, 2005|display-authors=etal|chapter=A capacitorless twin-transistor random access memory (TTRAM) on SOI|date=21 September 2005|volume=Custom Integrated Circuits Conference 2005|pages=428–431|doi=10.1109/CICC.2005.1568699|isbn=978-0-7803-9023-2|s2cid=14952912}}</ref> from Renesas and the [[A-RAM]] from the [[University of Granada|UGR]]/[[CNRS]] consortium.
 
==Array structures==<!--The RSes for all points in this section: Jacob, pp&nbsp;358–361; Kenner, pp.&nbsp;65&nbsp;75-->
}}</ref> The extra memory bits are used to record [[RAM parity|parity]] and to enable missing data to be reconstructed by [[error-correcting code]] (ECC). Parity allows the detection of all single-bit errors (actually, any odd number of wrong bits). The most common error-correcting code, a [[Hamming code#Hamming codes with additional parity (SECDED)|SECDED Hamming code]], allows a single-bit error to be corrected and, in the usual configuration, with an extra parity bit, double-bit errors to be detected.<ref>{{cite web|author1=Mastipuram, Ritesh|author2=Wee, Edwin C|title=Soft errors' impact on system reliability|url=http://www.edn.com/article/CA454636.html|website=EDN|publisher=Cypress Semiconductor|archive-url=https://web.archive.org/web/20070416115228/http://www.edn.com/article/CA454636.html|archive-date=16 April 2007|date=30 September 2004}}</ref>
 
Recent studies give widely varying error rates, with over seven orders of magnitude difference, ranging from {{nowrap|10<sup>−10</sup> to 10<sup>−17</sup> error/bit·h}}, roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory.<ref name="Borucki1">Borucki, "Comparison of Accelerated DRAM Soft Error Rates Measured at Component and System Level", 46th Annual International Reliability Physics Symposium, Phoenix, 2008, pp. 482–487</ref><ref name="Schroeder1">[[Bianca Schroeder|Schroeder, Bianca]] et al. (2009). [http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf "DRAM errors in the wild: a large-scale field study"] {{webarchive|url=https://web.archive.org/web/20150310193355/http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf |date=2015-03-10 }}. ''Proceedings of the Eleventh International Joint Conference on Measurement and Modeling of Computer Systems'', pp.&nbsp;193–204.</ref><ref name="Xin1">{{cite web|url=http://www.ece.rochester.edu/~xinli/usenix07/|title=A Memory Soft Error Measurement on Production Systems|website=www.ece.rochester.edu|access-date=8 May 2018|url-status=dead|archive-url=https://web.archive.org/web/20170214005146/http://www.ece.rochester.edu/~xinli/usenix07/|archive-date=14 February 2017}}</ref> The Schroeder et al. 2009 study reported a 32% chance that a given computer in their study would suffer from at least one correctable error per year, and provided evidence that most such errors are intermittent hard rather than soft errors and that trace amounts of radioactive material that had gotten into the chip packaging were emitting alpha particles and corrupting the data.<ref>{{cite web |url=https://spectrum.ieee.org/drams-damning-defects-and-how-they-cripple-computers |title=DRAM's Damning Defects—and How They Cripple Computers - IEEE Spectrum |access-date=2015-11-24 |url-status=live |archive-url=https://web.archive.org/web/20151124182515/https://spectrum.ieee.org/computing/hardware/drams-damning-defects-and-how-they-cripple-computers |archive-date=2015-11-24 }}</ref> A 2010 study at the University of Rochester also gave evidence that a substantial fraction of memory errors are intermittent hard errors.<ref>{{cite web|url=https://www.cs.rochester.edu/~kshen/papers/usenix2010-li.pdf|title="A Realistic Evaluation of Memory Hardware Errors and Software System Susceptibility". Usenix Annual Tech Conference 2010|author1=Li, Huang|author2=Shen, Chu|year=2010|url-status=live|archive-url=https://web.archive.org/web/20150515214728/http://www.cs.rochester.edu/%7Ekshen/papers/usenix2010-li.pdf|archive-date=2015-05-15}}</ref> Large-scale studies on non-ECC main memory in PCs and laptops suggest that undetected memory errors account for a substantial number of system failures: a 2011 study reported a 1-in-1700 chance per 1.5% of memory tested (extrapolating to an approximately 26% chance for total memory) that a computer would have a memory error every eight months.<ref>{{cite web|url=http://research.microsoft.com/pubs/144888/eurosys84-nightingale.pdf|title=Cycles, cells and platters: an empirical analysis of hardware failures on a million consumer PCs. Proceedings of the sixth conference on Computer systems (EuroSys '11). pp 343-356|year=2011|url-status=live|archive-url=https://web.archive.org/web/20121114111006/http://research.microsoft.com/pubs/144888/eurosys84-nightingale.pdf|archive-date=2012-11-14}}</ref>
 
==Security==
 
====Principles of operation====
An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically one or four) bidirectional data lines. There are three main [[active-low]] control signals:
* {{overline|RAS}}, the Row Address Strobe. The address inputs are captured on the falling edge of {{overline|RAS}}, and select a row to open. The row is held open as long as {{overline|RAS}} is low.
* {{overline|CAS}}, the Column Address Strobe. The address inputs are captured on the falling edge of {{overline|CAS}}, and select a column from the currently open row to read or write.
* {{overline|WE}}, Write Enable. This signal determines whether a given falling edge of {{overline|CAS}} is a read (if high) or write (if low). If low, the data inputs are also captured on the falling edge of {{overline|CAS}}. If high, the data outputs are enabled by the falling edge of {{overline|CAS}} and produce valid output after the internal access time.
 
This interface provides direct control of internal timing: when {{overline|RAS}} is driven low, a {{overline|CAS}} cycle must not be attempted until the sense amplifiers have sensed the memory state, and {{overline|RAS}} must not be returned high until the storage cells have been refreshed. When {{overline|RAS}} is driven high, it must be held high long enough for precharging to complete.
 
Although the DRAM is asynchronous, the signals are typically generated by a clocked memory controller, which limits their timing to multiples of the controller's clock cycle.
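
The ordering of these signals can be made explicit with a hypothetical bit-banged controller. The C sketch below drives the strobes through invented helpers (no real GPIO library is implied), and the delay values are placeholders rather than datasheet figures.
<syntaxhighlight lang="c">
#include <stdio.h>

/* Invented pin-level helpers standing in for a real GPIO interface;
 * here they only print what a bit-banged controller would do. */
static void set_line(const char *n, int v) { printf("%s=%d\n", n, v); }
static void set_address(unsigned a)        { printf("addr=%u\n", a); }
static unsigned sample_data(void)          { return 1; /* pretend read */ }
static void wait_ns(int ns)                { (void)ns; /* timing omitted */ }

static unsigned dram_read(unsigned row, unsigned col)
{
    set_address(row);
    set_line("/RAS", 0);   /* falling /RAS latches the row address         */
    wait_ns(25);           /* wait for the sense amplifiers (roughly tRCD) */
    set_address(col);
    set_line("/CAS", 0);   /* falling /CAS latches the column; /WE stays high for a read */
    wait_ns(15);           /* /CAS access time (tCAC)                      */
    unsigned bit = sample_data();
    set_line("/CAS", 1);
    set_line("/RAS", 1);   /* close the row                                */
    wait_ns(30);           /* allow precharge (tRP) before the next cycle  */
    return bit;
}

int main(void) { printf("read bit %u\n", dram_read(42, 7)); return 0; }
</syntaxhighlight>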
 
For completeness, we mention two other control signals which are not essential to DRAM operation, but are provided for the convenience of systems using DRAM:
* {{overline|CS}}, Chip Select. When this is high, all other inputs are ignored. This makes it easy to build an array of DRAM chips which share the same control signals. Just as the DRAM internally uses the word lines to select one row of storage cells to connect to the shared bit lines and sense amplifiers, {{overline|CS}} is used to select one row of DRAM chips to connect to the shared control, address, and data lines.
* {{overline|OE}}, Output Enable. This is an additional signal that (if high) inhibits output on the data I/&zwj;O pins, while allowing all other operations to proceed normally. In many applications, {{overline|OE}} can be permanently connected low (output enabled whenever {{overline|CS}}, {{overline|RAS}} and {{overline|CAS}} are low and {{overline|WE}} is high), but in high-speed applications, judicious use of {{overline|OE}} can prevent [[bus contention]] between two DRAM chips connected to the same data lines. For example, it is possible to have two [[interleaved memory]] banks sharing the address and data lines, but each having their own {{overline|RAS}}, {{overline|CAS}}, {{overline|WE}} and {{overline|OE}} connections. The memory controller can begin a read from the second bank while a read from the first bank is in progress, using the two {{overline|OE}} signals to only permit one result to appear on the data bus at a time.<!--There's also the Late Write or [[read–modify–write]] cycle where a read is changed to a write by a falling edge on /WE while /CAS remains low, which requires using /OE to drive the write data on the bus before the falling edge of /WE,<ref name=IBM96/>[https://classes.engineering.wustl.edu/cse260m/images/9/9e/MT4LC4M16R6.pdf] but that's rarely used in the real world.-->
 
=====RAS-only refresh=====
The refresh cycles are distributed across the entire refresh interval in such a way that all rows are refreshed within the required interval. To refresh one row of the memory array using {{overline|RAS}} only refresh (ROR), the following steps must occur:
# The row address of the row to be refreshed must be applied at the address input pins.
# {{overline|RAS}} must switch from high to low. {{overline|CAS}} must remain high.<!--Refresh still works if there are /CAS accesses, it's just not "row-only" any more.-->
# At the end of the required amount of time, {{overline|RAS}} must return high.
 
This can be done by supplying a row address and pulsing {{overline|RAS}} low; it is not necessary to perform any {{overline|CAS}} cycles. An external counter is needed to iterate over the row addresses in turn.<ref name=IBM96>{{cite tech report |type=Application Note |title=Understanding DRAM Operation |url=http://www.ece.cmu.edu/~ece548/localcpy/dramop.pdf|publisher=[[IBM]]|archive-url=https://web.archive.org/web/20170829153054/http://www.ece.cmu.edu/~ece548/localcpy/dramop.pdf|archive-date=29 August 2017|date=December 1996}}</ref> In some designs, the CPU handled RAM refresh. The [[Zilog Z80]] is perhaps the best known example, as it has an internal row counter in a [[processor register]] (R) which supplies the address for a special refresh cycle generated after each instruction fetch.<!--And data transfer in string instructions, and during HALT, but that's more detail than we need here.--><ref>{{cite tech report |title=Z80 CPU |type=User Manual |url=https://www.zilog.com/docs/z80/um0080.pdf#page=17 |page=3 |id=UM008011-0816 |year=2016}}</ref> In other systems, especially [[home computer]]s, refresh was often handled by the video circuitry as a side effect of its periodic scan of the [[frame buffer]].<ref>{{cite web |url=https://retrocomputing.stackexchange.com/questions/14012/what-is-dram-refresh-and-why-is-the-weird-apple-ii-video-memory-layout-affected |title=What is DRAM refresh and why is the weird Apple II video memory layout affected by it? |date=3 March 2020}}</ref>
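
Expressed with the same kind of invented pin-level helpers used earlier, a {{overline|RAS}}-only refresh burst is just a loop over the row addresses. The sketch below only traces the signal sequence; the timings and row count are placeholders, not device parameters.
<syntaxhighlight lang="c">
#include <stdio.h>

/* Invented pin-level helpers; they only trace the signal sequence. */
static void set_address(unsigned a)        { printf("row address %u\n", a); }
static void set_line(const char *n, int v) { printf("%s=%d\n", n, v); }
static void wait_ns(int ns)                { (void)ns; /* timing omitted */ }

/* /RAS-only refresh: step through every row while /CAS stays high. */
static void refresh_all_rows(unsigned rows)
{
    for (unsigned row = 0; row < rows; row++) {
        set_address(row);      /* 1. apply the row address               */
        set_line("/RAS", 0);   /* 2. falling /RAS refreshes that row     */
        wait_ns(50);           /*    hold /RAS low for the required time */
        set_line("/RAS", 1);   /* 3. return /RAS high, allow precharge   */
        wait_ns(30);
    }
}

int main(void)
{
    refresh_all_rows(128);     /* row count depends on the device */
    return 0;
}
</syntaxhighlight>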
 
=====CAS before RAS refresh=====
For convenience, the counter was quickly incorporated into the DRAM chips themselves. If the {{overline|CAS}} line is driven low before {{overline|RAS}} (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open.{{r|IBM96|TN-04-30}} This is known as {{overline|CAS}}-before-{{overline|RAS}} (CBR) refresh. This became the standard form of refresh for asynchronous DRAM, and is the only form generally used with SDRAM.
 
=====Hidden refresh=====
Given support of {{overline|CAS}}-before-{{overline|RAS}} refresh, it is possible to deassert {{overline|RAS}} while holding {{overline|CAS}} low to maintain data output. If {{overline|RAS}} is then asserted again, this performs a CBR refresh cycle while the DRAM outputs remain valid. Because data output is not interrupted, this is known as ''hidden refresh''.<ref name=TN-04-30>{{cite tech report |type=Technical Note |title=Various Methods of DRAM Refresh |year=1994 |id=TN-04-30 |publisher=[[Micron Technology]] |url=http://www.downloads.reactivemicro.com/Public/Electronics/DRAM/DRAM%20Refresh.pdf |archive-url=https://web.archive.org/web/20111003001843/http://www.downloads.reactivemicro.com/Public/Electronics/DRAM/DRAM%20Refresh.pdf |archive-date=2011-10-03 |url-status=dead}}</ref> Hidden refresh is no faster than a normal read followed by a normal refresh, but does maintain valid data output during the refresh cycle.
 
====Page mode DRAM====
<!-- This section is linked from [[Page mode RAM]] -->
<!-- Change the above redirects if you change the title to this section (section links in redirects are case sensitive) -->
'''Page mode DRAM''' is a minor modification to the first-generation DRAM IC interface which improves the performance of reads and writes to a row by avoiding the inefficiency of precharging and opening the same row repeatedly to access a different column. In page mode DRAM, after a row is opened by holding {{overline|RAS}} low, the row can be kept open, and multiple reads or writes can be performed to any of the columns in the row. Each column access is initiated by presenting a column address and asserting {{overline|CAS}}. For reads, after a delay (''t''<sub>CAC</sub>), valid data appears on the data out pins, which are held at high-Z before the appearance of valid data. For writes, the write enable signal and write data are presented along with the column address.<ref name="Kenner 13">{{harvnb|Keeth|Baker|Johnson|Lin|2007|p=13}}</ref>
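
The benefit can be illustrated with a small estimate. Using the 50&nbsp;ns access time and 20&nbsp;ns page-cycle time quoted elsewhere in this article as representative values, and assuming roughly 30&nbsp;ns of precharge between full row cycles (an assumption made only to round out the comparison), the C sketch below compares reading eight columns from one open row against eight separate row cycles.
<syntaxhighlight lang="c">
#include <stdio.h>

/* Representative asynchronous timings: 50 ns first access and 20 ns per
 * page-mode access are quoted in this article; the 30 ns precharge figure
 * is an assumption used only for this rough comparison. */
static int page_mode_ns(int n)       { return 50 + (n - 1) * 20; }
static int separate_cycles_ns(int n) { return n * (50 + 30); }

int main(void)
{
    int n = 8;   /* columns read from the same row */
    printf("page mode: %d ns; separate row cycles: about %d ns\n",
           page_mode_ns(n), separate_cycles_ns(n));   /* 190 ns vs ~640 ns */
    return 0;
}
</syntaxhighlight>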
 
Page mode DRAM was in turn later improved with a small modification which further reduced latency. DRAMs with this improvement are called '''fast page mode DRAMs''' ('''FPM DRAMs'''). In page mode DRAM, the chip does not capture the column address until {{overline|CAS}} is asserted, so the column access time (until data out is valid) begins when {{overline|CAS}} is asserted. In FPM DRAM, the column address can be supplied while {{overline|CAS}} is still deasserted, and the main column access time (''t''<sub>AA</sub>) begins as soon as the address is stable. The {{overline|CAS}} signal is only needed to enable the output (the data out pins are held at high-Z while {{overline|CAS}} is deasserted), so the time from {{overline|CAS}} assertion to data valid (''t''<sub>CAC</sub>) is greatly reduced.<ref name="Kenner 14">{{harvnb|Keeth|Baker|Johnson|Lin|2007|p=14}}</ref> Fast page mode DRAM was introduced in 1986 and was used with the [[Intel 80486]].
 
''Static column'' is a variant of fast page mode in which the column address does not need to be latched, but rather, the address inputs may be changed with {{overline|CAS}} held low, and the data output will be updated accordingly a few nanoseconds later.<ref name="Kenner 14" />
 
''Nibble mode'' is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of {{overline|CAS}}. The difference from normal page mode is that the address inputs are not used for the second through fourth {{overline|CAS}} edges; the addresses are instead generated internally, starting with the address supplied for the first {{overline|CAS}} edge.<ref name="Kenner 14" /> The predictable addresses let the chip prepare the data internally and respond very quickly to the subsequent {{overline|CAS}} pulses.
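
The internal column sequencing can be modelled as a 2-bit counter. In this sketch the sequence is assumed to wrap within an aligned group of four columns, which is one common behaviour; whether a given part wraps or simply increments depends on the device.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

/* Sketch of nibble-mode column sequencing: only the first column address is
   supplied externally; the next three come from a 2-bit internal counter,
   modelled here as wrapping within the aligned group of four. */
static uint16_t nibble_next(uint16_t col)
{
    return (uint16_t)((col & ~0x3u) | ((col + 1) & 0x3u));
}

int main(void)
{
    uint16_t col = 6;                  /* address supplied with the first CAS */
    for (int pulse = 0; pulse < 4; pulse++) {
        printf("CAS pulse %d -> column %d\n", pulse + 1, col);
        col = nibble_next(col);        /* columns 6, 7, 4, 5 for this start  */
    }
    return 0;
}
</syntaxhighlight>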
 
====Extended data out DRAM====
[[Image:Pair32mbEDO-DRAMdimms.jpg|thumb|A pair of 32&nbsp;[[Megabyte|MB]] EDO DRAM modules]]
 
Extended data out DRAM (EDO DRAM) was invented and patented in the 1990s by [[Micron Technology]], which then licensed the technology to many other memory manufacturers.<ref>{{cite book | author=S. Mueller | title=Upgrading and Repairing Laptops | year=2004 | publisher=Que; Har/Cdr Edition | page=221 | isbn=9780789728005 |url=https://books.google.com/books?id=xCXVGneKwScC}}</ref> EDO RAM, sometimes referred to as ''hyper page mode'' enabled DRAM, is similar to fast page mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved performance.<ref name=IBM96b>{{cite tech report |type=Applications Note |title=EDO (Hyper Page Mode)|url=https://www.ardent-tool.com/memory/pdf/edo.pdf |publisher=[[IBM]]|date=6 June 1996|archive-url=https://web.archive.org/web/20211202232211/https://ardent-tool.com/memory/pdf/edo.pdf|archive-date=2021-12-02|quote=a new address can be provided for the next access cycle before completing the current cycle allowing a shorter {{overline|CAS}} pulse width, dramatically decreasing cycle times.}}</ref> It is up to 30% faster than FPM DRAM,<ref>{{cite web|last1=Lin|first1=Albert|title=Memory Grades, the Most Confusing Subject|url=https://www.simmtester.com/News/PublicationArticle/11|website=Simmtester.com|publisher=CST, Inc.|access-date=1 November 2017|date=20 December 1999|url-status=live|archive-url=https://web.archive.org/web/20200812212321/https://www.simmtester.com/News/PublicationArticle/11|archive-date=2020-08-12|quote=So for the same –60 part, EDO DRAM is about 30% faster than FPM DRAM in peak data rate.}}</ref> which it began to replace in 1995 when [[Intel]] introduced the [[Intel 430FX|430FX chipset]] with EDO DRAM support. Irrespective of the performance gains, FPM and EDO SIMMs can be used interchangeably in many (but not all) applications.<ref>{{cite web|last1=Huang|first1=Andrew|title=Bunnie's RAM FAQ|url=http://www.bunniestudios.com/bunnie/dramfaq/DRAMFAQ.html|date=14 September 1996|url-status=live|archive-url=https://web.archive.org/web/20170612210850/http://www.bunniestudios.com/bunnie/dramfaq/DRAMFAQ.html|archive-date=12 June 2017}}</ref><ref>{{cite journal|author1=Cuppu, Vinodh|author2=Jacob, Bruce|author3=Davis, Brian|author4=Mudge, Trevor|title=High-Performance DRAMs in Workstation Environments|journal=IEEE Transactions on Computers|date=November 2001|volume=50|issue=11|pages=1133–1153|url=http://www.bunniestudios.com/bunnie/dramfaq/dram-workstation.pdf|access-date=2 November 2017|doi=10.1109/12.966491|hdl=1903/7456|url-status=live|archive-url=https://web.archive.org/web/20170808082644/http://www.bunniestudios.com/bunnie/dramfaq/dram-workstation.pdf|archive-date=8 August 2017|hdl-access=free}}</ref>
 
To be precise, EDO DRAM begins data output on the falling edge of {{overline|CAS}} but does not disable the output when {{overline|CAS}} rises again. Instead, it holds the current output valid (thus extending the data output time) even as the DRAM begins decoding a new column address, until either a new column's data is selected by another {{overline|CAS}} falling edge, or the output is switched off by the rising edge of {{overline|RAS}} (or, less commonly, a change in {{overline|CS}}, {{overline|OE}}, or {{overline|WE}}).
 
This ability to start a new access even before the system has received the preceding column's data made it possible to design memory controllers which could carry out a complete {{overline|CAS}} access (in the currently open row) in one clock cycle, or at least in two clock cycles instead of the previously required three. EDO's performance was able to partially compensate for the performance lost due to the lack of an L2 cache in low-cost, commodity PCs. More expensive notebooks also often lacked an L2 cache due to size and power limitations, and benefitted similarly. Even for systems ''with'' an L2 cache, the availability of EDO memory improved the average memory latency seen by applications over earlier FPM implementations.
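
The gain for a four-word burst can be worked through with the commonly quoted (and here purely illustrative) cycle patterns of 5-3-3-3 for FPM and 5-2-2-2 for EDO on a 66&nbsp;MHz memory bus; actual figures varied by chipset and speed grade.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Cycle-count comparison for a four-beat burst, using assumed typical
   patterns: 5-3-3-3 for FPM DRAM and 5-2-2-2 for EDO DRAM at 66 MHz. */
int main(void)
{
    const double clk_ns = 1000.0 / 66.0;           /* ~15.2 ns per cycle */
    int fpm[4] = { 5, 3, 3, 3 };
    int edo[4] = { 5, 2, 2, 2 };

    int fpm_total = 0, edo_total = 0;
    for (int i = 0; i < 4; i++) { fpm_total += fpm[i]; edo_total += edo[i]; }

    printf("FPM: %d cycles (%.0f ns) for 4 transfers\n", fpm_total, fpm_total * clk_ns);
    printf("EDO: %d cycles (%.0f ns) for 4 transfers\n", edo_total, edo_total * clk_ns);
    return 0;
}
</syntaxhighlight>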
 
Single-cycle EDO DRAM became very popular on video cards toward the end of the 1990s. It was very low cost, yet nearly matched the performance of the far more costly VRAM.
 
====Burst EDO DRAM====
Synchronous dynamic RAM (SDRAM) significantly revises the asynchronous memory interface, adding a clock (and a clock enable) line. All other signals are received on the rising edge of the clock.
 
The {{overline|RAS}} and {{overline|CAS}} inputs no longer act as strobes, but are instead, along with {{overline|WE}}, part of a 3-bit command controlled by a new active-low strobe, ''chip select'' or {{overline|CS}}:
{| class="wikitable"
|+ SDRAM Command summary
! {{overline|CS}} !! {{overline|RAS}} !! {{overline|CAS}} !! {{overline|WE}} !! Address !! Command
|-
| H || x || x || x || x || Command inhibit (no operation)
|-
| L || H || H || H || x || No operation
|-
| L || H || H || L || x || Burst terminate: stop a burst read or burst write in progress
|-
| L || H || L || H || Column || Read from the currently active row
|-
| L || H || L || L || Column || Write to the currently active row
|-
| L || L || H || H || Row || Activate (open) a row for read and write
|-
| L || L || H || L || x || Precharge (close) the current row
|-
| L || L || L || H || x || Auto refresh: refresh one row of each bank, using the internal counter
|-
| L || L || L || L || Mode || Load mode register: the address bus specifies the operation mode
|}
 
The {{overline|OE}} line's function is extended to a per-byte "DQM" signal, which controls data input (writes) in addition to data output (reads). This allows DRAM chips to be wider than 8 bits while still supporting byte-granularity writes.
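
The write-masking role of DQM can be modelled functionally. The sketch below assumes a 32-bit-wide data bus and the convention that a high DQM bit masks (suppresses) its byte lane; the function names are illustrative.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Model of per-byte DQM masking on a 32-bit-wide SDRAM data bus: each DQM
   bit driven high suppresses the write of one byte lane, which is how a
   controller performs byte-granularity writes to a wide device. */
static uint32_t masked_write(uint32_t stored, uint32_t wdata, unsigned dqm)
{
    uint32_t result = stored;
    for (int lane = 0; lane < 4; lane++) {
        if (!(dqm & (1u << lane))) {            /* DQM low = lane enabled */
            uint32_t lane_mask = 0xFFu << (8 * lane);
            result = (result & ~lane_mask) | (wdata & lane_mask);
        }
    }
    return result;
}

int main(void)
{
    /* Write only byte lane 1: DQM = 1101b masks lanes 0, 2 and 3. */
    uint32_t before = 0x11223344, wdata = 0xAABBCCDD;
    printf("after: 0x%08" PRIX32 "\n", masked_write(before, wdata, 0xD));
    return 0;
}
</syntaxhighlight>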
 
Many timing parameters remain under the control of the DRAM controller. For example, a minimum time must elapse between a row being activated and a read or write command. One important parameter must be programmed into the SDRAM chip itself, namely the [[CAS latency]]. This is the number of clock cycles allowed for internal operations between a read command and the first data word appearing on the data bus. The ''load mode register'' command is used to transfer this value to the SDRAM chip. Other configurable parameters include the length of read and write bursts, i.e. the number of words transferred per read or write command.
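
The relationship between CAS latency and clock speed can be worked through numerically. The 20&nbsp;ns internal access time below is an assumed example value; the controller must program a CL (in cycles) that covers that time at the chosen clock rate.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <math.h>

/* Choosing a CAS latency for an assumed internal access time of 20 ns. */
int main(void)
{
    const double t_access_ns = 20.0;             /* assumed, not a real spec */
    const double f_mhz[] = { 66.0, 100.0, 133.0 };

    for (int i = 0; i < 3; i++) {
        double period_ns = 1000.0 / f_mhz[i];
        int cl = (int)ceil(t_access_ns / period_ns);   /* round up to whole cycles */
        printf("%.0f MHz: clock period %.1f ns -> program CL=%d\n",
               f_mhz[i], period_ns, cl);
    }
    return 0;
}
</syntaxhighlight>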
 
The most significant change, and the primary reason that SDRAM has supplanted asynchronous RAM, is the support for multiple internal banks inside the DRAM chip. Using a few bits of ''bank address'' that accompany each command, a second bank can be activated and begin reading data ''while a read from the first bank is in progress''. By alternating banks, a single SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot.
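
One way a controller encourages this overlap is through its address mapping. The sketch below assumes a hypothetical 4-bank device with 10 column bits and 13 row bits, and places the bank bits just above the column bits so that consecutive column-sized blocks fall in different banks.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Illustrative address mapping for a 4-bank SDRAM: taking the bank bits from
   just above the column bits makes consecutive column-sized blocks land in
   different banks, so activate and read commands can be overlapped.
   All field widths are assumptions for the example. */
#define COL_BITS  10
#define BANK_BITS 2
#define ROW_BITS  13

static void decode(uint32_t addr, unsigned *bank, unsigned *row, unsigned *col)
{
    *col  = addr & ((1u << COL_BITS) - 1);
    *bank = (addr >> COL_BITS) & ((1u << BANK_BITS) - 1);
    *row  = (addr >> (COL_BITS + BANK_BITS)) & ((1u << ROW_BITS) - 1);
}

int main(void)
{
    /* Walk through consecutive column-sized blocks: the bank number rotates. */
    for (uint32_t addr = 0; addr < 4u << COL_BITS; addr += 1u << COL_BITS) {
        unsigned bank, row, col;
        decode(addr, &bank, &row, &col);
        printf("addr 0x%06" PRIX32 " -> bank %u, row %u, col %u\n",
               addr, bank, row, col);
    }
    return 0;
}
</syntaxhighlight>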
 
====Single data rate synchronous DRAM====
 
====Video DRAM====
{{Main|Dual-ported video RAM}}
 
Video DRAM (VRAM) is a [[dual-ported RAM|dual-ported]] variant of DRAM that was once commonly used to store the frame buffer in some [[graphics card|graphics adapter]]s.
 
===={{Anchor|WRAM}}Window DRAM====
Window DRAM (WRAM) is a variant of VRAM that was once used in graphics adapters such as the [[Matrox]] Millennium and [[Rage Pro#3D Rage Pro & Rage IIc|ATI 3D Rage Pro]]. WRAM was designed to perform better and cost less than VRAM. WRAM offered up to 25% greater bandwidth than VRAM and accelerated commonly used graphical operations such as text drawing and block fills.<ref name="wramdef">{{cite web |url=https://www.pcguide.com/ref/video/techWRAM-c.html |title=Window RAM (WRAM) |archive-url=https://web.archive.org/web/20100102101703/http://pcguide.com/ref/video/techWRAM-c.html |archive-date=2010-01-02}}</ref>
 
===={{Anchor|MDRAM}}Multibank DRAM====
 
===={{Anchor|SGRAM}}Synchronous graphics RAM====
Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adapters. It adds functions such as [[bit mask]]ing (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single color). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.
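
The effect of these two operations can be shown with a functional model. The 32-bit word width, the function names, and the colour values are assumptions for illustration; in SGRAM the equivalent work happens inside the chip rather than in controller software.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Bit-masked write: only the bit planes enabled in the mask are updated. */
static uint32_t bitmask_write(uint32_t stored, uint32_t wdata, uint32_t mask)
{
    return (stored & ~mask) | (wdata & mask);   /* untouched planes keep old bits */
}

/* Block write: fill a run of words with a single colour value. */
static void block_write(uint32_t *mem, int start, int count, uint32_t color)
{
    for (int i = 0; i < count; i++)
        mem[start + i] = color;
}

int main(void)
{
    uint32_t pixel = 0x00FF8800;
    /* Update only the low byte's bit planes, leaving the rest unchanged. */
    printf("masked: 0x%08" PRIX32 "\n", bitmask_write(pixel, 0x000000CC, 0x000000FF));

    uint32_t fb[8] = {0};
    block_write(fb, 2, 4, 0x00FFFFFF);          /* fill 4 pixels with white */
    for (int i = 0; i < 8; i++)
        printf("fb[%d]=0x%08" PRIX32 "\n", i, fb[i]);
    return 0;
}
</syntaxhighlight>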
 
====Graphics double data rate SDRAM====
[[File:Sapphire Ultimate HD 4670 512MB - Qimonda HYB18H512321BF-10-93577.jpg|alt=|thumb|A 512-MBit [[Qimonda]] GDDR3 SDRAM package]]
[[File:SAMSUNG@QDDR3-SDRAM@256MBit@K5J55323QF-GC16 Stack-DSC01340-DSC01367 - ZS-retouched.jpg|thumb|Inside a Samsung GDDR3 256-MBit package]]
Graphics double data rate SDRAM is a type of specialized [[Double data rate|DDR]] [[Synchronous dynamic random-access memory|SDRAM]] designed to be used as the main memory of [[graphics processing unit]]s (GPUs). GDDR SDRAM is distinct from commodity types of DDR SDRAM such as DDR3, although they share some core technologies. Their primary characteristics are higher clock frequencies for both the DRAM core and I/O interface, which provides greater memory bandwidth for GPUs. As of 2025, there are eight successive generations of GDDR: [[GDDR2]], [[GDDR3]], [[GDDR4]], [[GDDR5]], [[GDDR5X]], [[GDDR6]], [[GDDR6X]] and [[GDDR7]].
 
==={{Anchor|PSRAM}}Pseudostatic RAM===
Pseudostatic RAM (PSRAM or PSDRAM) is dynamic RAM with built-in refresh and address-control circuitry to make it behave similarly to static RAM (SRAM). It combines the high density of DRAM with the ease of use of true SRAM. PSRAM is used in the Apple iPhone and other embedded systems such as XFlar Platform.<ref>{{cite news |first=Patrick |last=Mannion |title=Under the Hood — Update: Apple iPhone 3G exposed |newspaper=EETimes |date=2008-07-12 |url=http://www.eetimes.com/showArticle.jhtml?articleID=209000014#selection-1371.0-1383.10 |archive-url=https://archive.today/20130122004240/http://www.eetimes.com/showArticle.jhtml?articleID=209000014#selection-1371.0-1383.10 |url-status=dead |archive-date=2013-01-22 }}</ref>
 
Some DRAM components have a ''self-refresh mode''. While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, rather than to allow operation without a separate DRAM controller, as with PSRAM.
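
For SDRAM, self-refresh is conventionally entered by issuing an auto-refresh command with the clock-enable (CKE) signal driven low. The sketch below follows that scheme, but the helper names and the exit delay are illustrative assumptions rather than a real controller interface.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdbool.h>

/* Illustrative stubs for command/pin sequencing. */
static void set_cke(bool high)       { printf("CKE=%d\n", high); }
static void issue_auto_refresh(void) { printf("CMD=AUTO REFRESH\n"); }
static void issue_nop(void)          { printf("CMD=NOP\n"); }
static void wait_ns(unsigned ns)     { (void)ns; }

static void enter_self_refresh(void)
{
    set_cke(false);          /* CKE low ...                              */
    issue_auto_refresh();    /* ... with an auto-refresh command: the    */
                             /* DRAM now refreshes itself and the        */
                             /* controller clock can be stopped          */
}

static void exit_self_refresh(void)
{
    set_cke(true);           /* restart the interface                    */
    wait_ns(200);            /* assumed exit delay before new commands   */
    issue_nop();
}

int main(void)
{
    enter_self_refresh();    /* e.g. before a system suspend             */
    exit_self_refresh();     /* on resume, the stored data is still there */
    return 0;
}
</syntaxhighlight>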
 
An [[EDRAM|embedded]] variant of PSRAM was sold by MoSys under the name [[1T-SRAM]]. It is a set of small DRAM banks with an SRAM cache in front to make it behave much like a true SRAM. It is used in [[Nintendo]] [[GameCube]] and [[Wii]] video game consoles.
==External links==
* {{cite book |url=http://www.eecs.berkeley.edu/~culler/courses/cs252-s05/lectures/cs252s05-lec01-intro.ppt#359,15,Memory%20Capacity%20%20(Single%20Chip%20DRAM |first1=David |last1=Culler |chapter=Memory Capacity (Single Chip DRAM) |page=15 |title=EECS 252 Graduate Computer Architecture: Lecture 1 |publisher=Electrical Engineering and Computer Sciences,University of California, Berkeley |year=2005}} Logarithmic graph 1980–2003 showing size and cycle time.
* [https://www-1.ibm.com/servers/eserver/pseries/campaigns/chipkill.pdf Benefits of Chipkill-Correct ECC for PC Server Main Memory] — A 1997 discussion of SDRAM reliability—some interesting information on "soft errors" from [[cosmic ray]]s, especially with respect to [[error-correcting code]] schemes
* [http://www.tezzaron.com/about/papers/soft_errors_1_1_secure.pdf Tezzaron Semiconductor Soft Error White Paper] 1994 literature review of memory error rate measurements.
* {{cite web |url=http://www.nepp.nasa.gov/docuploads/40D7D6C9-D5AA-40FC-829DC2F6A71B02E9/Scal-00.pdf |title=Scaling and Technology Issues for Soft Error Rates |first1=A. |last1=Johnston |work=4th Annual Research Conference on Reliability Stanford University |date=October 2000|url-status=dead |archive-url=https://web.archive.org/web/20041103124422/http://www.nepp.nasa.gov/docuploads/40D7D6C9-D5AA-40FC-829DC2F6A71B02E9/Scal-00.pdf |archive-date=2004-11-03 }}
* {{cite journal |url=http://www.research.ibm.com/journal/rd/462/mandelman.html |title=Challenges and future directions for the scaling of dynamic random-access memory (DRAM) |date=2002 |doi=10.1147/rd.462.0187|archive-url=https://web.archive.org/web/20050322211513/http://www.research.ibm.com/journal/rd/462/mandelman.html|archive-date=2005-03-22|last1=Mandelman |first1=J. A. |last2=Dennard |first2=R. H. |last3=Bronner |first3=G. B. |last4=Debrosse |first4=J. K. |last5=Divakaruni |first5=R. |last6=Li |first6=Y. |last7=Radens |first7=C. J. |journal=IBM Journal of Research and Development |volume=46 |issue=2.3 |pages=187–212 }}
* [https://arstechnica.com/paedia/r/ram_guide/ram_guide.part1-2.html Ars Technica: RAM Guide]
* {{cite thesis|first1=David Tawei |last1=Wang|title=Modern DRAM Memory Systems: Performance Analysis and a High Performance, Power-Constrained DRAM-Scheduling Algorithm|type=PhD |publisher=University of Maryland, College Park|year=2005|url=https://www.ece.umd.edu/~blj/papers/thesis-PhD-wang--DRAM.pdf|access-date=2007-03-10 |hdl=1903/2432}} A detailed description of current DRAM technology.
* [https://www.cs.berkeley.edu/~pattrsn/294 Multi-port Cache DRAM — '''MP-RAM''']
* {{cite web |url=https://lwn.net/Articles/250967/ |title=What every programmer should know about memory |first1=Ulrich |last1=Drepper |year=2007}}