In computing, file system fragmentation, sometimes called file system aging, is the inability of a file system to lay out related data sequentially (contiguously). It is an inherent phenomenon in storage-backed file systems that allow in-place modification of their contents, and a special case of data fragmentation.
File system fragmentation is projected to become more problematic over time, due to the growing disparity between the sequential access speed and the rotational delay (and, to a lesser extent, the seek time) of consumer-grade hard disks,[1] on which file systems are usually placed.
File system fragmentation may occur on several levels:
- Fragmentation within individual files.
- Free space fragmentation, making it increasingly difficult to lay out new files contiguously.
- The decrease of locality of reference between separate, but related files.
Types of fragmentation
File fragmentation
Individual file fragmentation occurs when a single file has been broken into multiple pieces (called extents on extent-based file systems). While disk file systems attempt to keep individual files contiguous, this is often not possible without significant performance penalties.
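To illustrate how this happens, here is a minimal sketch of a hypothetical first-fit block allocator (not any real file system's algorithm): two files that grow alternately end up interleaved on disk, so neither is contiguous.

```python
# Hypothetical first-fit allocator sketch: alternating appends to two
# growing files interleave their blocks, fragmenting both files.

def count_extents(blocks):
    """Count runs of consecutive block numbers (one run = one extent)."""
    blocks = sorted(blocks)
    extents = 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur != prev + 1:
            extents += 1
    return extents

disk = []                    # disk[i] = name of the file owning block i
files = {"a": [], "b": []}

# Alternate one-block appends to "a" and "b"; each append simply takes
# the next free block, so the two files interleave on disk.
for i in range(8):
    name = "a" if i % 2 == 0 else "b"
    files[name].append(len(disk))
    disk.append(name)

print(files["a"], count_extents(files["a"]))  # [0, 2, 4, 6] 4
```

File "a" occupies four one-block extents even though the disk started empty, purely because allocation happened in the order the data arrived.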
Free space fragmentation
Free (unallocated) space fragmentation occurs when the unused space of the file system is scattered into several small areas, rather than one contiguous region, in which new files or metadata can be written. Unwanted free space fragmentation is generally caused by the deletion or truncation of files, but file systems may also intentionally insert fragments (sometimes called "bubbles") of free space in order to facilitate extending nearby files (see proactive techniques below).
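The effect of deletions can be sketched with a hypothetical 16-block disk (an illustration, not a real file system): after every other small file is deleted, half the disk is free, yet no contiguous run is long enough to hold even a 3-block file.

```python
# Free space fragmentation sketch on a hypothetical 16-block disk:
# plenty of free space in total, but only tiny contiguous runs of it.

def largest_free_run(bitmap):
    """Length of the longest run of free (False) blocks."""
    best = cur = 0
    for used in bitmap:
        cur = 0 if used else cur + 1
        best = max(best, cur)
    return best

# Eight 2-block files written back to back fill the disk...
bitmap = [True] * 16
for start in range(0, 16, 4):   # ...then every other file is deleted.
    bitmap[start] = bitmap[start + 1] = False

free = bitmap.count(False)
print(free, largest_free_run(bitmap))  # 8 free blocks, longest run only 2
```

A new file larger than two blocks must therefore be split across several of these holes, propagating fragmentation further.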
Techniques for mitigating fragmentation
Several techniques have been developed to fight fragmentation. These can usually be classified into two categories: proactive and retroactive.
Proactive techniques
Proactive techniques attempt to keep fragmentation to a minimum at the time data is written to the disk. Perhaps the simplest is appending data to an existing fragment in place where possible, instead of allocating new blocks to a new fragment.
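The benefit of the "bubble" idea mentioned above can be sketched as follows. This is a toy model (the layout and the `slack` parameter are assumptions for illustration, not any real file system's policy): two files are written and then each is appended to twice, with or without spare blocks deliberately left after each file.

```python
# Toy model: leaving `slack` spare blocks after each file lets later
# appends extend the file in place instead of starting a new fragment.

def count_extents(blocks):
    """Count runs of consecutive block numbers (one run = one extent)."""
    blocks = sorted(blocks)
    return 1 + sum(1 for p, c in zip(blocks, blocks[1:]) if c != p + 1)

def simulate(slack):
    """Write files a and b (2 blocks each), then append 2 blocks to each.
    `slack` free blocks are deliberately left after each initial file."""
    files = {"a": [0, 1], "b": [2 + slack, 3 + slack]}
    next_free = 4 + 2 * slack            # end of the laid-out region
    holes = list(range(2, 2 + slack)) + \
            list(range(4 + slack, 4 + 2 * slack))
    # Append: take the block right after the file's end if it is a hole,
    # otherwise allocate at the end of the disk (a new fragment).
    for _ in range(2):
        for name in ("a", "b"):
            want = files[name][-1] + 1
            if want in holes:
                holes.remove(want)
                files[name].append(want)
            else:
                files[name].append(next_free)
                next_free += 1
    return count_extents(files["a"]), count_extents(files["b"])

print(simulate(slack=0))  # (3, 3): both files fragment
print(simulate(slack=2))  # (1, 1): both files stay contiguous
```

The trade-off is that reserved bubbles consume free space that other files cannot use until the reservation is released.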
Retroactive techniques
Retroactive techniques attempt to reduce fragmentation after it has occurred. Many file systems provide defragmentation tools, which attempt to reorder the fragments of files, and often also to increase locality of reference by keeping smaller files in directories, or directory trees, close to one another on the disk. Some file systems, such as HFS Plus, exploit idle time to defragment data on the disk in the background.
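The core of an offline defragmenter can be sketched as follows. This is a minimal illustration, not any real tool's algorithm: each file's blocks are copied to a fresh contiguous region, and visiting files in path order keeps files from the same directory adjacent, which also improves locality of reference between related files.

```python
# Offline defragmenter sketch (illustrative only): lay every file out
# contiguously, grouping files from the same directory together.

def defragment(files):
    """files: {path: [block numbers]} -> new contiguous layout."""
    new_layout = {}
    cursor = 0
    # Sorting by path keeps files from the same directory adjacent.
    for path in sorted(files):
        n = len(files[path])
        new_layout[path] = list(range(cursor, cursor + n))
        cursor += n
    return new_layout

fragmented = {"/home/a": [9, 2, 14], "/home/b": [5, 0], "/var/log": [7]}
print(defragment(fragmented))
# {'/home/a': [0, 1, 2], '/home/b': [3, 4], '/var/log': [5]}
```

A real defragmenter must additionally move the data safely (blocks cannot overlap mid-copy, and metadata must stay consistent if the operation is interrupted), which is where most of the implementation complexity lies.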
References
- ^ Dr. Mark H. Kryder (2006-04-03). "Future Storage Technologies: A Look Beyond the Horizon" (PDF). Storage Networking World conference. Seagate Technology. Retrieved 2006-12-14.
- Keith Arnold Smith (January 2001). "Workload-Specific File System Benchmarks" (PDF). Harvard University. Retrieved 2006-12-14.