[[Intel]]<ref name="Lucchesi">{{cite web |url=http://www.silvertonconsulting.com/newsletterd/SSDf_drives.pdf |title=SSD Flash drives enter the enterprise |author=Lucchesi, Ray |date=September 2008 |publisher=Silverton Consulting |accessdate=2010-06-18}}</ref> and [[Western Digital|SiliconSystems]] (acquired by [[Western Digital]] in 2009)<ref name="Zsolt_Silicon_Systems">{{cite web |url=http://www.storagesearch.com/siliconsystems.html |title=Western Digital Solid State Storage - formerly SiliconSystems |author=Kerekes, Zsolt |publisher=ACSL |accessdate=2010-06-19}}</ref> have used the term ''write amplification'' in their papers and publications since 2008. Write amplification is typically measured as the ratio between the writes actually committed to the flash memory and the writes requested by the host. Without compression, write amplification cannot drop below 1. Using compression, [[SandForce]] has claimed a typical write amplification of 0.5,<ref name="Anand_WA">{{cite web |url=http://www.anandtech.com/show/2899 |title=OCZ's Vertex 2 Pro Preview: The Fastest MLC SSD We've Ever Tested |author=Shimpi, Anand Lal |date=2009-12-31 |publisher=AnandTech |accessdate=2011-06-16}}</ref> with best-case values as low as 0.14 with the SF-2281 controller.<ref>{{cite web |url=http://www.tomshardware.com/reviews/ssd-520-sandforce-review-benchmark,3124-11.html |title=Intel SSD 520 Review: SandForce's Technology: Very Low Write Amplification |work=Tom's Hardware |date=2012-02-06 |first=Andrew |last=Ku |accessdate=2012-02-10}}</ref>
== Basic SSD operation ==
{{See also|Flash memory|Solid-state drive|Flash file system}}
Due to the nature of flash memory's operation, data cannot be directly overwritten as it can be in a [[hard disk drive]]. When data is first written to a [[solid-state drive]], the cells all start in an erased state, so the data can be written directly, in pages (usually 4–8 [[kilobyte]]s in size). The SSD controller, which manages the flash memory and interfaces with the host, uses a logical-to-physical mapping known as [[logical block addressing]] (LBA) that is part of the [[flash translation layer]] (FTL) of a [[flash file system]].<ref name="IBM_Hu_Haas">{{cite web |url=http://domino.watson.ibm.com/library/cyberdig.nsf/papers/50A84DF88D540735852576F5004C2558/$File/rz3771.pdf |title=The Fundamental Limit of Flash Random Write Performance: Understanding, Analysis and Performance Modelling |author1=Hu, X.-Y. |author2=R. Haas |publisher=IBM Research, Zurich |date=2010-03-31 |accessdate=2010-06-19}}</ref>
When new data comes in to replace older data, the SSD controller writes the new data in a new ___location and updates the logical mapping to point to the new physical ___location. The data in the old ___location is no longer valid, and will need to be erased before that ___location can be written again.<ref name="IBM_WA" /><ref name="IBM_Perf">{{cite web |url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.141.1709 |title=Design Tradeoffs for SSD Performance |author=Agrawal, N., V. Prabhakaran, T. Wobber, J. D. Davis, M. Manasse, R. Panigrahy |date=June 2008 |publisher=[[Microsoft]] |citeseerx=10.1.1.141.1709 |accessdate=2010-06-02}}</ref>
Flash memory can be programmed and erased only a limited number of times, usually stated as the maximum number of program/erase cycles (P/E cycles) it can sustain. Single-level cell (SLC) flash, designed for high performance and long endurance, can typically operate between 50,000 and 100,000 cycles. Multi-level cell (MLC) flash is designed for lower-cost applications and has a greatly reduced cycle count, which as of 2011 stood between 3,000 and 5,000 cycles. Since [[2013]], triple-level cell (TLC) flash has been available, with cycle counts dropping as low as 1,000 P/E cycles. The lowest possible write amplification is desirable, since it corresponds to fewer program/erase cycles on the flash memory and therefore to a longer SSD life.<ref name="IBM_WA" />
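The out-of-place update scheme described above can be illustrated with a short model. The following Python sketch is purely illustrative; its names, sizes, and structures are invented for the example and do not correspond to any real controller's FTL:

<syntaxhighlight lang="python">
# Toy model of out-of-place updates in a flash translation layer (FTL).
# All names and sizes are illustrative, not taken from any real controller.

PAGE_SIZE = 4096          # bytes per flash page (typically 4-8 KiB)

flash = {}                # physical page ___location -> data
mapping = {}              # logical block address (LBA) -> physical ___location
stale = set()             # physical pages holding invalidated data
next_free = 0             # next erased page available for programming

def host_write(lba, data):
    """Write one page: program a fresh page, remap, invalidate the old copy."""
    global next_free
    if lba in mapping:
        stale.add(mapping[lba])   # the old physical copy is now invalid, but
                                  # it cannot be reused until its whole block
                                  # is erased
    flash[next_free] = data
    mapping[lba] = next_free
    next_free += 1

host_write(7, b"v1" * (PAGE_SIZE // 2))
host_write(7, b"v2" * (PAGE_SIZE // 2))   # overwrite: leaves one stale page
print(len(stale))                          # -> 1
</syntaxhighlight>

Every overwrite leaves a stale physical page behind; reclaiming those pages is what the garbage collection process described below has to do.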
== Calculating the value ==
Write amplification was always present in SSDs before the term was defined, but it was in 2008 that both Intel<ref name="Lucchesi" /><ref>{{Cite web |url=http://www.extremetech.com/computing/80622-intel-x25-80gb-solidstate-drive-review |title=Intel X25 80GB Solid-State Drive Review |last=Case |first=Loyd |date=2008-09-08 |accessdate=2011-07-28}}</ref> and SiliconSystems started using the term in their papers and publications.<ref name="Zsolt_Silicon_Systems" /> All SSDs have a write amplification value and it is based on both what is currently being written and what was previously written to the SSD. In order to accurately measure the value for a specific SSD, the selected test should be run for enough time to ensure the drive has reached a [[steady state]] condition.<ref name="K Smith" />
A simple formula to calculate the write amplification of an SSD is:<ref name="IBM_WA" /><ref name="Zsolt_WA">{{cite web |url=http://www.storagesearch.com/ssd-jargon.html |title=Flash SSD Jargon Explained |author=Kerekes, Zsolt |date= |work= |publisher=ACSL |accessdate=2010-05-31}}</ref><ref name="OCZ_WA" /><ref>{{cite web |url=http://www.intel.com/cd/channel/reseller/asmo-na/eng/products/nand/feature/index.htm |title=Intel Solid State Drives |author= |date= |work= |publisher=Intel |accessdate=2010-05-31}}</ref>
: <math>\frac{\text{data written to the flash memory}}{\text{data written by the host}} = \text{write amplification}</math>
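In code, this ratio can be computed directly from the two counters. The sketch below uses hypothetical counter values, since real drives expose such statistics only through vendor-specific attributes; it also shows how a lower value stretches the drive's limited P/E budget:

<syntaxhighlight lang="python">
def write_amplification(flash_bytes_written, host_bytes_written):
    """Ratio of bytes the controller wrote to flash vs. bytes the host requested."""
    return flash_bytes_written / host_bytes_written

# Hypothetical counters read from a drive's statistics:
wa = write_amplification(flash_bytes_written=150e12, host_bytes_written=100e12)
print(f"write amplification: {wa:.2f}")   # -> 1.50

# A lower WA stretches the same P/E budget further: the flash endures roughly
# capacity * P/E cycles of programming, so the host can write about
# (capacity * cycles) / WA before wear-out.
capacity_tb, pe_cycles = 0.128, 3000      # e.g. a 128 GB MLC drive, 3,000 cycles
print(f"host writes before wear-out: {capacity_tb * pe_cycles / wa:.0f} TB")
</syntaxhighlight>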
== Factors affecting the value ==
Many factors affect the write amplification of an SSD. The table below lists the primary factors and how they affect the write amplification. For factors that are variable, the table notes if it has a ''direct'' relationship or an ''inverse'' relationship. For example, as the amount of over-provisioning increases, the write amplification decreases (inverse relationship). If the factor is a toggle (''enabled'' or ''disabled'') function then it has either a ''positive'' or ''negative'' relationship.<ref name="IBM_WA" /><ref name="IBM_Hu_Haas" /><ref name="Zsolt_WA" />
{| class="wikitable sortable"
|+ Write Amplification Factors
|-
! Factor
! Description
! Type
! Relationship*
|-
| [[Write amplification#Garbage collection|Garbage collection]]
| The efficiency of the algorithm used to pick the next best block to erase and rewrite
| style="background: lightblue;" | Variable
| style="background: lightgreen;" | Inverse (good)
|-
| [[Write amplification#Over-provisioning|Over-provisioning]]
| The percentage of physical capacity which is allocated to the SSD controller
| style="background: Lightblue;" | Variable
| style="background: Lightgreen;" | Inverse (good)
|-
| [[TRIM]] command for SATA or UNMAP for SCSI
| These commands must be sent by the operating system (OS) which tells the storage device which sectors contain invalid data. SSDs consuming these commands can then reclaim the pages containing these sectors as free space when the blocks containing these pages are erased instead of copying the invalid data to clean pages.
| style="background: Wheat;" | Toggle
| style="background: Lightgreen;" | Positive (good)
|-
| [[Write amplification#Free user space|Free user space]]
| The percentage of the user capacity free of actual user data; requires TRIM, otherwise the SSD gains no benefit from any free user capacity
| style="background: Lightblue;" | Variable
| style="background: Lightgreen;" | Inverse (good)
|-
| [[Write amplification#Secure erase|Secure erase]]
| Erases all user data and related metadata which resets the SSD to the initial out-of-box performance (until garbage collection resumes)
| style="background: Wheat;" | Toggle
| style="background: Lightgreen;" | Positive (good)
|-
| [[Write amplification#Wear leveling|Wear leveling]]
| The efficiency of the algorithm that distributes writes as evenly as possible across all blocks, so that each block is written a similar number of times
| style="background: Lightblue;" | Variable
| style="background: LightCoral;" | Direct (bad)
|-
| [[Write amplification#Separating static and dynamic data|Separating static and dynamic data]]
| Grouping data based on how often it tends to change
| style="background: Wheat;" | Toggle
| style="background: Lightgreen;" | Positive (good)
|-
| [[Write amplification#Sequential writes|Sequential writes]]
| In theory, sequential writes have a write amplification of 1, but other factors will still affect the value
| style="background: Wheat;" | Toggle
| style="background: Lightgreen;" | Positive (good)
|-
| [[Write amplification#Random writes|Random writes]]
| Writing to non-sequential LBAs will have the greatest impact on write amplification
| style="background: Wheat;" | Toggle
| style="background: LightCoral;" | Negative (bad)
|-
| [[Data compression]] which includes [[data deduplication]]
| Write amplification goes down and SSD speed goes up when data compression and deduplication eliminate redundant data.
| style="background: Lightblue;" | Variable
| style="background: Lightgreen;" | Inverse (good)
|-
| Using [[Multi-level cell|MLC]] NAND in [[Multi-level cell#Single-level cell|SLC]] mode
| This writes data at a rate of one bit per cell instead of the designed number of bits per cell (normally two bits per cell) to speed up reads and writes. If the capacity limits of the NAND in SLC mode are approached, the SSD must rewrite the oldest data written in SLC mode into MLC or TLC mode to free up SLC-mode space for new data. However, this approach can reduce wear by keeping frequently changed pages in SLC mode, because programming in MLC or TLC mode does more damage to the flash than programming in SLC mode. It therefore drives up write amplification but can reduce wear when write patterns target frequently written pages. However, sequential and random write patterns aggravate the wear, because then there are no or few frequently written pages that could be held in the SLC area, forcing old data to be constantly rewritten from the SLC area into MLC or TLC mode.
| style="background: Wheat;" | Toggle
| style="background: LightCoral;" | Negative (bad)
|}
Defragmenting an SSD generates additional writes and thus increases write amplification.{{citation needed}}
{| class="wikitable"
|+ *Relationship Definitions
|-
! Type
! Relationship modified
! Description
|-
| rowspan="2" style="background: Lightblue;" | Variable
| style="background: LightCoral;" |Direct
| style="background: LightCoral;" |As the factor increases the WA increases
|-
| style="background: Lightgreen;" |Inverse
| style="background: Lightgreen;" |As the factor increases the WA decreases
|-
| rowspan="2" style="background: wheat;" | Toggle
| style="background: Lightgreen;" |Positive
| style="background: Lightgreen;" |When the factor is present the WA decreases
|-
| style="background: LightCoral;" |Negative
| style="background: LightCoral;" |When the factor is present the WA increases
|}
{{clear}}
== {{Anchor|GC}}Garbage collection ==
{{details|Garbage collection (computer science)}}
[[File:Garbage Collection.png|right|thumb|600px|alt=Pages are written into blocks until they become full. Then the pages with current data are moved to a new block and the old block is erased.|Pages are written into blocks until they become full. Then the pages with current data are moved to a new block and the old block is erased.<ref name="L Smith" />]]
Data is written to the flash memory in units called pages (made up of multiple cells). However, the memory can only be erased in larger units called blocks (made up of multiple pages).<ref name="L Smith" /> If the data in some of the pages of a block is no longer needed (also called stale pages), only the pages with good data in that block are read and rewritten into another previously erased empty block.<ref name="K Smith" /> The pages left free by not copying the stale data are then available for new data. This is a process called ''[[Garbage collection (computer science)|garbage collection]]'' (GC).<ref name="IBM_WA" /><ref name="OCZ_WA">{{cite web |url=http://www.oczenterprise.com/whitepapers/ssds-write-amplification-trim-and-gc.pdf |title=SSDs - Write Amplification, TRIM and GC |publisher=OCZ Technology |accessdate=2012-11-13}}</ref> All SSDs include some level of garbage collection, but they may differ in when and how fast they perform the process.<ref name="OCZ_WA" /> Garbage collection is a big part of write amplification on the SSD.<ref name="IBM_WA" /><ref name="OCZ_WA" />
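As an illustration of the copy cost involved, a common greedy policy picks the candidate block with the fewest valid pages, since every valid page must be rewritten elsewhere before the block can be erased. The sketch below is a simplified model, not any specific vendor's algorithm:

<syntaxhighlight lang="python">
# Greedy garbage-collection victim selection (illustrative model).
# Each block is described only by how many of its pages still hold valid data.

PAGES_PER_BLOCK = 128

def pick_victim(valid_pages_per_block):
    """Choose the block whose reclamation forces the fewest page copies."""
    return min(valid_pages_per_block, key=valid_pages_per_block.get)

blocks = {"A": 120, "B": 15, "C": 70}     # valid pages in three full blocks
victim = pick_victim(blocks)
copies = blocks[victim]                   # valid pages that must be rewritten
print(victim, copies)                     # -> B 15
# Erasing B reclaims 128 pages at the cost of 15 extra flash writes;
# those 15 relocation writes are pure write amplification.
</syntaxhighlight>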
Reads do not require an erase of the flash memory, so they are not generally associated with write amplification. In the limited chance of a [[read disturb]] error, the data in that block is read and rewritten, but this would not have any material impact on the write amplification of the drive.<ref>{{cite web |url=http://download.micron.com/pdf/technotes/nand/tn2917.pdf |title=TN-29-17: NAND Flash Design and Use Considerations |author= |year=2006 |work= |publisher=Micron |accessdate=2010-06-02}}</ref>
=== {{Anchor|BG-GC}}Background garbage collection ===
The process of garbage collection involves reading and rewriting data to the flash memory. This means that a new write from the host will first require a read of the whole block, a write of the parts of the block which still include valid data, and then a write of the new data. This can significantly reduce the performance of the system.<ref name="Mehling_Garbage">{{cite web |url=http://www.enterprisestorageforum.com/technology/features/article.php/3850436/Solid-State-Drives-Take-Out-the-Garbage.htm |title=Solid State Drives Take Out the Garbage |author=Mehling, Herman |publisher=Enterprise Storage Forum|date=2009-12-01 |accessdate=2010-06-18}}</ref> Some SSD controllers implement ''background garbage collection'' (BGC), sometimes called ''idle garbage collection'' or ''idle-time garbage collection'' (ITGC), where the controller uses [[Idle (CPU)|idle]] time to consolidate blocks of flash memory before the host needs to write new data. This enables the performance of the device to remain high.<ref name="Conley" />
If the controller were to background garbage collect all of the spare blocks before it was absolutely necessary, new data written from the host could be written without having to move any data in advance, letting the performance operate at its peak speed. The trade-off is that some of those blocks of data are actually not needed by the host and will eventually be deleted, but the OS did not tell the controller this information. The result is that the soon-to-be-deleted data is rewritten to another ___location in the flash memory, increasing the write amplification. In some of the SSDs from [[OCZ]] the background garbage collection only clears up a small number of blocks then stops, thereby limiting the amount of excessive writes.<ref name="OCZ_WA" /> Another solution is to have an efficient garbage collection system which can perform the necessary moves in parallel with the host writes. This solution is more effective in high write environments where the SSD is rarely idle.<ref name="Layton">{{cite web |url=http://www.linux-mag.com/id/7590/2/ |title=Anatomy of SSDs |author=Layton, Jeffrey B. |publisher=Linux Magazine |date=2009-10-27 |accessdate=2010-06-19}}</ref> The [[SandForce]] SSD controllers<ref name="Mehling_Garbage" /> and the systems from [[Violin Memory]] have this capability.<ref name="Zsolt_WA" />
=== Filesystem-aware garbage collection ===
In 2010, some manufacturers (notably Samsung) introduced SSD controllers that extended the concept of BGC to analyze the [[file system]] used on the SSD, to identify recently deleted files and [[Disk partitioning|unpartitioned space]]. The manufacturer claimed that this would ensure that even systems (operating systems and SATA controller hardware) which do not support [[TRIM]] could achieve similar performance. The operation of the Samsung implementation appeared to assume and require an [[NTFS]] file system.<ref name="Bell_Garbage">{{cite web |url=http://www.jdfsl.org/subscriptions/JDFSL-V5N3-Bell.pdf|title=Solid State Drives: The Beginning of the End for Current Practice in Digital Forensic Recovery? |author=Bell, Graeme B. |publisher=Journal of Digital Forensics, Security and Law |year=2010 |accessdate=2012-04-02}}</ref> It is not clear if this feature is still available in currently shipping SSDs from these manufacturers. Systematic data corruption has been reported on these drives if they are not formatted properly using [[Master boot record|MBR]] and NTFS.<ref name="AT_SSDCorrupt">{{cite web |url=http://forums.anandtech.com/archive/index.php/t-2228064.html |title=SSDs are incompatible with GPT partitioning?!}}</ref>
== Over-provisioning ==
[[File:Over-provisioning on an SSD.png|500px|right|thumb|alt=The three levels of over-provisioning found on an SSD|The three levels of over-provisioning found on an SSD.<ref name="Mehling_Garbage" /><ref name="Jim_Bagley" />]]
Over-provisioning (sometimes spelled as OP, over provisioning, or overprovisioning) is the difference between the physical capacity of the flash memory and the logical capacity presented through the [[operating system]] (OS) as available for the user. During the garbage collection, wear-leveling, and bad block mapping operations on the SSD, the additional space from over-provisioning helps lower the write amplification when the controller writes to the flash memory.<ref name="Lucchesi" /><ref name="Jim_Bagley">{{cite web |url=http://www.plianttechnology.com/pdf/SSG-NOW_SSD_Flash_Bulletin_July_2009.pdf |title=Over-provisioning: a winning strategy or a retreat? |page=2 |author=Bagley, Jim |publisher=StorageStrategies Now |date=2009-07-01 |accessdate=2010-06-19}}</ref><ref name="Drossel">{{cite web |url=http://www.snia.org/events/storage-developer2009/presentations/wednesday/GaryDrossel_Methodologies_SSD_Usable_Life.pdf |title=Methodologies for Calculating SSD Useable Life |author=Drossel, Gary |publisher=Storage Developer Conference, 2009 |date=2009-09-14 |accessdate=2010-06-20}}</ref><ref name="Smith_2012">{{cite web |url=http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2012/20120822_TE21_Smith.pdf |title=Understanding SSD Over-provisioning |last=Smith |first=Kent |date=2011-08-01 |page=14 |publisher=flashmemorysummit.com |accessdate=2012-12-03}}</ref>
The first level of over-provisioning comes from the computation of the capacity and the use of units of [[gigabyte]] (GB) instead of [[gibibyte]] (GiB). Both HDD and SSD vendors use the term GB to represent a ''decimal'' GB or 1,000,000,000 (10<sup>9</sup>) bytes. Flash memory (like most other electronic storage) is assembled in powers of two, so calculating the physical capacity of an SSD would be based on 1,073,741,824 (2<sup>30</sup>) bytes per ''binary'' GB. The difference between these two values is 7.37% (= (2<sup>30</sup> − 10<sup>9</sup>)/10<sup>9</sup> × 100%). Therefore, a 128 GB SSD with 0% over-provisioning would provide 128,000,000,000 bytes to the user out of 137,438,953,472 bytes of physical flash. This initial 7.37% is typically not counted in the total over-provisioning number.<ref name="Jim_Bagley" /><ref name="Smith_2012" />
The second level of over-provisioning comes from the manufacturer. This level of over-provisioning is typically 0%, 7%, or 28% based on the difference between the decimal gigabyte of the physical capacity and the decimal gigabyte of the available space to the user. As an example, a manufacturer might publish a specification for their SSD at 100 GB, 120 GB or 128 GB based on 128 GB of possible capacity. This difference is 28%, 7% and 0% respectively and is the basis for the manufacturer claiming they have 28% of over-provisioning on their drive. This does not count the additional 7.37% of capacity available from the difference between the decimal and binary gigabyte.<ref name="Jim_Bagley" /><ref name="Smith_2012" />
The third level of over-provisioning comes from known free space on the drive, gaining endurance and performance at the expense of reporting unused portions, and/or at the expense of current or future capacity. This free space can be identified by the operating system using the TRIM command. Alternately, some SSDs provide a utility that permit the end user to select additional over-provisioning. Furthermore, if any SSD is set up with an overall partitioning layout smaller than 100% of the available space, that unpartitioned space will be automatically used by the SSD as over-provisioning as well.<ref name="Smith_2012" /> Yet another source of over-provisioning is operating system minimum free space limits; some operating systems maintain a certain minimum free space per drive, particularly on the boot or main drive. If this additional space can be identified by the SSD, perhaps through continuous usage of the TRIM command, then this acts as semi-permanent over-provisioning. Over-provisioning often takes away from user capacity, either temporarily or permanently, but it gives back reduced write amplification, increased endurance, and increased performance.<ref name="Layton" /><ref name="Drossel" /><ref name="Anand_Spare_Area">{{cite web |url=http://www.anandtech.com/show/3690/the-impact-of-spare-area-on-sandforce-more-capacity-at-no-performance-loss |title=The Impact of Spare Area on SandForce, More Capacity At No Performance Loss? |page=2 |author=Shimpi, Anand Lal |publisher=AnandTech.com |date=2010-05-03 |accessdate=2010-06-19}}</ref><ref>{{cite web | url=http://www.storagereview.com/intel_ssd_520_enterprise_review |title=Intel SSD 520 Enterprise Review |quote=20% over-provisioning adds substantial performance in all profiles with write activity| first=Kevin|last= OBrien | work= Storage Review | date = 2012-02-06 | accessdate=2012-11-29 }}</ref><ref>{{cite web | url=http://cache-www.intel.com/cd/00/00/45/95/459555_459555.pdf | archiveurl=http://www.matrix44.net/cms/wp-content/uploads/2011/07/intel_over_provisioning.pdf |archivedate=2011| title=White Paper: Over-Provisioning an Intel SSD | publisher=Intel | format=PDF | year = 2010 | accessdate=2012-11-29 |deadurl=yes}}</ref><blockquote>
{| cellpadding="12" border="1" align="center"
|+ '''Over-provisioning calculation'''
|-
| <math>\frac{\text{physical capacity}-\text{user capacity}}{\text{user capacity}} = \text{over-provision}</math>
|}
</blockquote>
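A small calculator makes the three levels concrete; this is simply the arithmetic from the formula and the examples above wrapped in a function:

<syntaxhighlight lang="python">
def over_provisioning(physical_gb, user_gb):
    """Over-provisioning as defined above: (physical - user) / user."""
    return (physical_gb - user_gb) / user_gb

# Second-level OP for drives built on 128 GB of flash (decimal GB):
for user_gb in (128, 120, 100):
    print(f"{user_gb} GB drive: {over_provisioning(128, user_gb):.0%}")
# -> 0%, 7%, 28%

# First-level OP from the binary/decimal gigabyte gap, not counted above:
print(f"GiB vs GB gap: {(2**30 - 10**9) / 10**9:.2%}")   # -> 7.37%
</syntaxhighlight>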
{{clear}}
== TRIM ==
{{details|Trim (computing)}}
[[Trim (computing)|TRIM]] (which, as a side note, is not an acronym) is a SATA command that enables the operating system to tell an SSD which blocks of previously saved data are no longer needed as a result of file deletions or of using the format command. When an LBA is replaced by the OS, as with an overwrite of a file, the SSD knows that the original LBA can be marked as stale or invalid and it will not save those blocks during garbage collection. If the user or operating system erases a file (not just removes parts of it), the file will typically be marked for deletion, but the actual contents on the disk are never erased. Because of this, the SSD does not know that the LBAs the file previously occupied can be erased, so the SSD will keep garbage collecting them.<ref name="Christiansen">{{cite web |url=http://www.snia.org/events/storage-developer2009/presentations/thursday/NealChristiansen_ATA_TrimDeleteNotification_Windows7.pdf |title=ATA Trim/Delete Notification Support in Windows 7 |author=Christiansen, Neal |publisher=Storage Developer Conference, 2009 |date=2009-09-14 |accessdate=2010-06-20}}</ref><ref name="SSD_Improv">{{cite web |url=http://www.anandtech.com/show/2865 |title=The SSD Improv: Intel & Indilinx get TRIM, Kingston Brings Intel Down to $115 |author=Shimpi, Anand Lal |publisher=AnandTech.com |date=2009-11-17 |accessdate=2010-06-20}}</ref><ref name="Mehling_TRIM">{{cite web |url=http://www.enterprisestorageforum.com/technology/features/article.php/3861181/Solid-State-Drives-Get-Faster-with-TRIM.htm |title=Solid State Drives Get Faster with TRIM |author=Mehling, Herman |publisher=Enterprise Storage Forum |date=2010-01-27 |accessdate=2010-06-20}}</ref>
The introduction of the TRIM command resolves this problem for operating systems which [[TRIM#Operating system and SSD support|support]] it like [[Features new to Windows 7#Solid state drives|Windows 7]],<ref name="SSD_Improv" /> Mac OS (latest releases of Snow Leopard, Lion, and Mountain Lion, patched in some cases),<ref>{{cite web|url=http://osxdaily.com/2012/01/03/enable-trim-all-ssd-mac-os-x-lion/ |title=Enable TRIM for All SSD’s [sic] in Mac OS X Lion |publisher=osxdaily.com |date=2012-01-03 |accessdate=2012-08-14}}</ref> [[FreeBSD]] since 8.1,<ref>https://www.freebsd.org/releases/8.1R/relnotes-detailed.html#DISKS</ref> and [[Linux]] since 2.6.33.<ref>{{cite web |url=http://kernelnewbies.org/Linux_2_6_33#head-b9b8a40358aaef60a61fcf12e9055900709a1cfb |title=Linux 2 6 33 Features |publisher=kernelnewbies.org |date=2010-02-04 |accessdate=2010-07-23}}</ref> When a file is permanently deleted or the drive is formatted, the OS sends the TRIM command along with the LBAs that no longer contain valid data. This informs the SSD that the LBAs in use can be erased and reused. This reduces the LBAs needing to be moved during garbage collection. The result is the SSD will have more free space enabling lower write amplification and higher performance.<ref name="Christiansen" /><ref name="SSD_Improv" /><ref name="Mehling_TRIM" />
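In terms of the toy mapping model shown under ''Basic SSD operation'', TRIM amounts to invalidation without an accompanying write, which is exactly what spares the garbage collector from copying dead data. Again, the structures are purely illustrative:

<syntaxhighlight lang="python">
mapping = {7: 42, 8: 43}   # LBA -> physical page, as in the earlier sketch
stale = set()

def trim(lba):
    """Model of a TRIM/UNMAP hint: invalidate without writing anything."""
    if lba in mapping:
        stale.add(mapping.pop(lba))   # page becomes reclaimable for free

trim(7)
print(mapping, stale)   # -> {8: 43} {42}
# Without TRIM, page 42 would still look valid, and garbage collection
# would copy it to a fresh block before erasing, wasting a flash write.
</syntaxhighlight>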
=== Limitations and dependencies ===
The TRIM command also needs the support of the SSD. If the [[firmware]] in the SSD does not have support for the TRIM command, the LBAs received with the TRIM command will not be marked as invalid and the drive will continue to garbage collect the data assuming it is still valid. Only when the OS saves new data into those LBAs will the SSD know to mark the original LBA as invalid.<ref name="Mehling_TRIM" /> SSD manufacturers that did not originally build TRIM support into their drives can either offer a firmware upgrade to the user, or provide a separate utility that extracts the information on the invalid data from the OS and separately TRIMs the SSD. The benefit would only be realized after each run of that utility by the user. The user could set up that utility to run periodically in the background as an automatically scheduled task.<ref name="Mehling_Garbage" />
Just because an SSD supports the TRIM command does not necessarily mean it will be able to perform at top speed immediately after a TRIM command. The space which is freed up after the TRIM command may be at random locations spread throughout the SSD. It will take a number of passes of writing data and garbage collecting before those spaces are consolidated to show improved performance.<ref name="Mehling_TRIM" />
Even after the OS and SSD are configured to support the TRIM command, other conditions might prevent any benefit from TRIM. {{As of|2010|1|27|alt=As of early 2010}}, databases and RAID systems are not yet TRIM-aware and consequently will not know how to pass that information on to the SSD. In those cases the SSD will continue to save and garbage collect those blocks until the OS uses those LBAs for new writes.<ref name="Mehling_TRIM" />
The actual benefit of the TRIM command depends upon the free user space on the SSD. If the user capacity on the SSD was 100 GB and the user actually saved 95 GB of data to the drive, any TRIM operation would not add more than 5 GB of free space for garbage collection and wear leveling. In those situations, increasing the amount of over-provisioning by 5 GB would allow the SSD to have more consistent performance because it would always have the additional 5 GB of additional free space without having to wait for the TRIM command to come from the OS.<ref name="Mehling_TRIM" />
== Free user space ==
The SSD controller will use any free blocks on the SSD for garbage collection and wear leveling. The portion of the user capacity which is free from user data (either already TRIMed or never written in the first place) will look the same as over-provisioning space (until the user saves new data to the SSD). If the user only saves data consuming 1/2 of the total user capacity of the drive, the other half of the user capacity will look like additional over-provisioning (as long as the TRIM command is supported in the system).<ref name="Mehling_TRIM" /><ref name="AnandTech_Anthology_9">{{cite web |url=http://www.anandtech.com/print/2738 |title=The SSD Anthology: Understanding SSDs and New Drives from OCZ |author=Shimpi, Anand Lal |publisher=AnandTech.com |page=9 |date=2009-03-18 |accessdate=2010-06-20}}</ref>
== Secure erase ==
{{details|Data remanence|Secure erase}}
The ATA Secure Erase command is designed to remove all user data from a drive. With an SSD without integrated encryption, this command will put the drive back to its original out-of-box state. This will initially restore its performance to the highest possible level and the best (lowest number) possible write amplification, but as soon as the drive starts garbage collecting again the performance and write amplification will start returning to the former levels.<ref name="AnandTech_Anthology_11">{{cite web |url=http://www.anandtech.com/print/2738 |title=The SSD Anthology: Understanding SSDs and New Drives from OCZ |author=Shimpi, Anand Lal |publisher=AnandTech.com |page=11 |date=2009-03-18 |accessdate=2010-06-20}}</ref><ref name="Malventano">{{cite web |url=http://www.pcper.com/article.php?aid=669&type=expert&pid=6 |title=Long-term performance analysis of Intel Mainstream SSDs |author=Malventano, Allyn |publisher=PC Perspective |date=2009-02-13 |accessdate=2010-06-20}}</ref> Many tools use the ATA Secure Erase command to reset the drive and provide a user interface as well. One free tool that is commonly referenced in the industry is called [[HDDErase]].<ref name="Malventano" /><ref name="HDDERASE">{{cite web |url=http://cmrr.ucsd.edu/people/Hughes/SecureErase.shtml |title=CMRR - Secure Erase |publisher=CMRR |accessdate=2010-06-21}}</ref> [[Gparted]] and [[Ubuntu (operating system)|Ubuntu]] live CDs provide a bootable Linux system of disk utilities including secure erase.<ref>{{cite web
|author = OCZ Technology
|title = How to Secure Erase Your OCZ SSD Using a Bootable Linux CD
|url = http://www.ocztechnology.com/blog/?p=367
|accessdate = 2014-12-13
|archiveurl = http://web.archive.org/web/20120107194502/http://www.ocztechnology.com/blog/?p=367
|archivedate = 2012-01-07
|date = 2011-09-07
}}</ref>
Drives which encrypt all writes on the fly ''can'' implement ATA Secure Erase in another way. They simply [[zeroize]] and generate a new random encryption key each time a secure erase is done. In this way the old data cannot be read anymore, as it cannot be decrypted.<ref>{{cite web |url=http://www.anandtech.com/show/4244/intel-ssd-320-review/2 |title=The Intel SSD 320 Review: 25nm G3 is Finally Here |publisher=anandtech |accessdate=2011-06-29}}</ref> Some drives with an integrated encryption may require a TRIM command be sent to the drive to put the drive back to its original out-of-box state.<ref>{{cite web |url=http://www.thomas-krenn.com/de/wiki/SSD_Secure_Erase#Ziele_eines_Secure_Erase |title=SSD Secure Erase - Ziele eines Secure Erase |publisher=Thomas-Krenn.AG |accessdate=2011-09-28}}</ref>
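The crypto-erase approach can be sketched in a few lines: if every write is encrypted with a drive-internal key, discarding that key makes all existing ciphertext unreadable. The sketch below uses the Python <code>cryptography</code> package purely for illustration; a real self-encrypting drive does this in dedicated hardware:

<syntaxhighlight lang="python">
# Sketch of crypto-erase: rotating the encryption key makes old data unreadable.
# A real self-encrypting drive does this in hardware, not in Python.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # drive-internal media key
nonce = os.urandom(12)
stored = AESGCM(key).encrypt(nonce, b"user data", None)   # every write encrypted

key = AESGCM.generate_key(bit_length=256)    # "secure erase": zeroize + new key
try:
    AESGCM(key).decrypt(nonce, stored, None)
except Exception:                            # InvalidTag: ciphertext is now garbage
    print("old data is unrecoverable")
</syntaxhighlight>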
== Wear leveling ==
{{details|Wear leveling}}
If a particular block was programmed and erased repeatedly without writing to any other blocks, that block would wear out before all the other blocks — thereby prematurely ending the life of the SSD. For this reason, SSD controllers use a technique called [[wear leveling]] to distribute writes as evenly as possible across all the flash blocks in the SSD.
In a perfect scenario, this would enable every block to be written to its maximum life so they all fail at the same time. Unfortunately, the process to evenly distribute writes requires data previously written and not changing (cold data) to be moved, so that data which are changing more frequently (hot data) can be written into those blocks. Each time data are relocated without being changed by the host system, this increases the write amplification and thus reduces the life of the flash memory. The key is to find an optimum algorithm that balances the two, spreading wear evenly while moving, and thus rewriting, as little unchanged data as possible.<ref name="Li-Pin Chang">{{cite web |url=http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.103.4903 |title=On Efficient Wear Leveling for Large Scale Flash Memory Storage Systems |author=Chang, Li-Pin |date=2007-03-11 |publisher=National ChiaoTung University, HsinChu, Taiwan | id = {{citeseerx|10.1.1.103.4903}} |accessdate=2010-05-31}}</ref>
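A minimal sketch of this trade-off, with invented numbers: steering new writes toward the least-worn block evens out erase counts, but when that block holds cold data, the relocations it forces are pure write amplification:

<syntaxhighlight lang="python">
# Illustrative wear-leveling decision: write into the least-erased block.
erase_counts = {"blk0": 950, "blk1": 310, "blk2": 305}

def next_target(counts):
    """Pick the block with the lowest erase count to receive new writes."""
    return min(counts, key=counts.get)

print(next_target(erase_counts))   # -> blk2
# If blk2 currently holds rarely changing (cold) data, that data must first
# be relocated elsewhere; these host-invisible copies raise write
# amplification, which is why leveling and WA pull in opposite directions.
</syntaxhighlight>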
{{clear}}
== Separating static and dynamic data ==
The separation of static and dynamic data to reduce write amplification is not a simple process for the SSD controller. The process requires the SSD controller to separate the LBAs with data which is constantly changing and requiring rewriting (dynamic data) from the LBAs with data which rarely changes and does not require any rewrites (static data). If the data is mixed in the same blocks, as with almost all systems today, any rewrites will require the SSD controller to garbage collect both the dynamic data (which caused the rewrite initially) and static data (which did not require any rewrite). Any garbage collection of data that would not have otherwise required moving will increase write amplification. Therefore separating the data will enable static data to stay at rest and if it never gets rewritten it will have the lowest possible write amplification for that data. The drawback to this process is that somehow the SSD controller must still find a way to wear level the static data because those blocks that never change will not get a chance to be written to their maximum P/E cycles.<ref name="IBM_WA" />
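One simple way to approximate such separation is to track per-LBA rewrite counts and route updates into different open blocks, so that hot and cold pages do not share an erase unit. The threshold and names below are hypothetical, not any particular controller's heuristic:

<syntaxhighlight lang="python">
from collections import Counter

write_counts = Counter()     # how often each LBA has been rewritten
HOT_THRESHOLD = 4            # hypothetical cutoff for "dynamic" data

def pick_stream(lba):
    """Route frequently rewritten LBAs away from rarely written ones."""
    write_counts[lba] += 1
    return "hot_block" if write_counts[lba] >= HOT_THRESHOLD else "cold_block"

for lba in [1, 2, 1, 1, 1, 3]:
    print(lba, "->", pick_stream(lba))
# LBA 1 migrates to the hot stream after repeated rewrites, so blocks of
# static data stay fully valid and need no garbage-collection copies.
</syntaxhighlight>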
== {{Anchor|REMF}}Sequential writes ==
When an SSD is writing data sequentially, the write amplification is equal to one, meaning there is no write amplification. The reason is that, as the data is written, the entire block is filled sequentially with data related to the same file. If the OS determines that the file is to be replaced or deleted, the entire block can be marked as invalid, and there is no need to read parts of it to garbage collect and rewrite into another block. The block will only need to be erased, which is much easier and faster than the ''read-erase-modify-write'' process needed for randomly written data going through garbage collection.<ref name="IBM_Hu_Haas" />
== Random writes ==
The peak random write performance on an SSD is driven by plenty of free blocks after the SSD is completely garbage collected, secure erased, 100% TRIMed, or newly installed. The maximum speed will depend upon the number of parallel flash channels connected to the SSD controller, the efficiency of the firmware, and the speed of the flash memory in writing to a page. During this phase the write amplification will be the best it can ever be for random writes and will be approaching one. Once the blocks are all written once, garbage collection will begin and the performance will be gated by the speed and efficiency of that process. Write amplification in this phase will increase to the highest levels the drive will experience.<ref name="IBM_Hu_Haas" />
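The contrast between the two preceding sections can be reproduced with a small end-to-end simulation combining the out-of-place mapping and greedy garbage collection sketched earlier. The geometry, over-provisioning level, and policies are invented for the example; it models the mechanism, not any particular drive:

<syntaxhighlight lang="python">
import random

PAGES_PER_BLOCK, NUM_BLOCKS = 32, 64
NUM_LBAS = int(NUM_BLOCKS * PAGES_PER_BLOCK * 0.8)   # ~20% over-provisioning

class ToySSD:
    def __init__(self):
        self.owner = [[None] * PAGES_PER_BLOCK for _ in range(NUM_BLOCKS)]
        self.fill = [0] * NUM_BLOCKS        # pages programmed in each block
        self.valid = [0] * NUM_BLOCKS       # pages still valid in each block
        self.where = {}                     # LBA -> (block, page)
        self.free = list(range(1, NUM_BLOCKS))
        self.active = 0                     # block currently accepting writes
        self.flash_writes = self.host_writes = 0

    def _program(self, lba):
        """Program one page at the write pointer (out-of-place)."""
        if self.fill[self.active] == PAGES_PER_BLOCK:
            self.active = self.free.pop()
        b, p = self.active, self.fill[self.active]
        self.fill[b] += 1
        self.valid[b] += 1
        self.owner[b][p] = lba
        self.where[lba] = (b, p)
        self.flash_writes += 1

    def _invalidate(self, lba):
        b, p = self.where[lba]
        self.owner[b][p] = None
        self.valid[b] -= 1

    def _gc(self):
        """Greedy GC: reclaim the sealed block with the fewest valid pages."""
        sealed = [b for b in range(NUM_BLOCKS)
                  if b != self.active and b not in self.free]
        victim = min(sealed, key=lambda b: self.valid[b])
        for lba in [l for l in self.owner[victim] if l is not None]:
            self._invalidate(lba)
            self._program(lba)              # relocation = write amplification
        self.owner[victim] = [None] * PAGES_PER_BLOCK
        self.fill[victim] = self.valid[victim] = 0
        self.free.append(victim)

    def write(self, lba):
        self.host_writes += 1
        if lba in self.where:               # invalidate the stale copy
            self._invalidate(lba)
        self._program(lba)
        while len(self.free) < 2:           # keep headroom for relocations
            self._gc()

def measure(workload):
    ssd = ToySSD()
    for lba in range(NUM_LBAS):             # pre-fill the drive once
        ssd.write(lba)
    for lba in workload:
        ssd.write(lba)
    return ssd.flash_writes / ssd.host_writes

random.seed(1)
seq = [lba for _ in range(5) for lba in range(NUM_LBAS)]
rnd = [random.randrange(NUM_LBAS) for _ in range(5 * NUM_LBAS)]
print(f"sequential overwrites: WA = {measure(seq):.2f}")   # close to 1
print(f"random overwrites:     WA = {measure(rnd):.2f}")   # well above 1
</syntaxhighlight>

With these parameters the sequential workload should stay close to a write amplification of 1, while the random workload settles well above it, matching the behavior described in the two sections above.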
== Impact on performance ==
The overall performance of an SSD is dependent upon a number of factors, including write amplification. Writing to a flash memory device takes longer than reading from it.<ref name="Conley">{{cite web |url=http://blog.corsair.com/?p=3044 |title=Corsair Force Series SSDs: Putting a Damper on Write Amplification |author=Conley, Kevin |publisher=Corsair.com |date=2010-05-27 |accessdate=2010-06-18}}</ref> An SSD generally uses multiple flash memory components connected in parallel to increase performance. If the SSD has a high write amplification, the controller will be required to write that many more times to the flash memory. This requires even more time to write the data from the host. An SSD with a low write amplification will not need to write as much data and can therefore be finished writing sooner than a drive with a high write amplification.<ref name="IBM_WA" /><ref name="IBM_Perf" />
== Product statements ==
In September 2008, [[Intel]] announced the X25-M SATA SSD with a reported WA as low as 1.1.<ref name="Anand_WA" /><ref>{{cite web |url=http://www.intel.com/pressroom/archive/releases/2008/20080908comp.htm |title=Intel Introduces Solid-State Drives for Notebook and Desktop Computers |author= |date=2008-09-08 |work= |publisher=Intel |accessdate=2010-05-31}}</ref> In April 2009, [[SandForce]] announced the SF-1000 SSD Processor family with a reported WA of 0.5 which appears to come from some form of data compression.<ref name="Anand_WA" /><ref>{{cite web |url=http://www.sandforce.com/userfiles/file/downloads/SFI_Launch_PR_Final.pdf |title=SandForce SSD Processors Transform Mainstream Data Storage |author= |date=2008-09-08 |work= |publisher=SandForce |accessdate=2010-05-31}}</ref> Before this announcement, a write amplification of 1.0 was considered the lowest that could be attained with an SSD.<ref name="Conley" /> Currently, only [[SandForce]] employs compression in its SSD controller.
== References ==
<references/>