File system fragmentation

This is an old revision of this page, as edited by Intgr at 10:25, 14 December 2006. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In computing, file system fragmentation, sometimes called file system aging, is the inability of a file system to lay out related data sequentially (contiguously). It is an inherent phenomenon in storage-backed file systems that allow in-place modification of their contents, and a special case of data fragmentation. File system fragmentation introduces disk head seeks, which are known to hinder throughput.

Why fragmentation matters

File system fragmentation is projected to become more problematic with newer hardware due to the increasing disparity between sequential access speed and rotational delay (and, to a lesser extent, seek time) of consumer-grade hard disks,[1] on which file systems are usually placed. Thus, fragmentation is an important problem in recent file system research and design. The containment of fragmentation depends not only on the on-disk format of the file system, but also heavily on its implementation.[2]
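The effect of this disparity can be illustrated with a back-of-the-envelope model. The sketch below uses hypothetical drive parameters (the seek time, rotational latency, and transfer rate are illustrative assumptions, not figures from the cited source) to compare reading a contiguous file against reading the same file split into many fragments:

```python
# Illustrative model: time to read a file =
#   (number of fragments) * (seek time + rotational latency)
#   + file size / sequential transfer rate.
# All numbers below are assumptions for a generic consumer hard disk.

SEEK_S = 0.009          # assumed average seek time: 9 ms
ROTATIONAL_S = 0.004    # assumed average rotational latency: 4 ms
TRANSFER_BPS = 80e6     # assumed sequential transfer rate: 80 MB/s

def read_time(file_bytes, fragments):
    """Seconds to read a file split into `fragments` contiguous pieces."""
    return fragments * (SEEK_S + ROTATIONAL_S) + file_bytes / TRANSFER_BPS

contiguous = read_time(10e6, 1)    # a 10 MB file in one piece:  ~0.14 s
fragmented = read_time(10e6, 100)  # the same file in 100 pieces: ~1.4 s
print(f"contiguous: {contiguous:.3f} s, fragmented: {fragmented:.3f} s")
```

Under these assumptions the transfer time is identical in both cases; the entire slowdown comes from per-fragment positioning delays, which is why fragmentation grows more costly as transfer rates improve faster than mechanical delays shrink.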

In simple file system benchmarks, the fragmentation factor is often omitted, as realistic aging and fragmentation are difficult to model. Rather, for simplicity of comparison, file system benchmarks are often run on empty file systems, and unsurprisingly, the results may differ heavily from those observed under real-life access patterns.[3]

Types of fragmentation

File system fragmentation may occur on several levels:

  • Fragmentation within individual files and their metadata.
  • Free space fragmentation, making it increasingly difficult to lay out new files contiguously.
  • The decrease of locality of reference between separate, but related files.

File fragmentation

Individual file fragmentation occurs when a single file has been broken into multiple pieces (called extents on extent-based file systems). While disk file systems attempt to keep individual files contiguous, this is often not possible without significant performance penalties.
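As a minimal illustration of what "broken into multiple pieces" means, the sketch below counts the contiguous runs (extents) in a hypothetical file's ordered list of disk block numbers; real file systems expose this kind of information through tools such as filefrag on Linux:

```python
def count_extents(blocks):
    """Count the contiguous runs (extents) in a file's ordered list of
    disk block numbers. One extent means the file is unfragmented."""
    if not blocks:
        return 0
    extents = 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur != prev + 1:   # a gap in block numbers starts a new extent
            extents += 1
    return extents

print(count_extents([7, 8, 9, 42, 43, 100]))  # three extents: 7-9, 42-43, 100
```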

Free space fragmentation

Free (unallocated) space fragmentation occurs when the unused space of the file system is scattered across several areas, rather than forming one contiguous region into which new files or metadata can be written. Unwanted free space fragmentation is generally caused by deletion or truncation of files, but file systems may also intentionally insert fragments ("bubbles") of free space in order to facilitate extending nearby files (see proactive techniques below).

Related file fragmentation

Related file fragmentation refers to the lack of locality of reference between related files. Unlike the previous two types of fragmentation, related file fragmentation is a much vaguer concept, as it heavily depends on the access patterns of specific applications. This also makes objectively measuring or estimating it very difficult. However, it is arguably the most critical type of fragmentation, as studies have found that the most frequently accessed files tend to be small compared to the amount of data a disk can transfer per second, so positioning delays, rather than transfer time, dominate the cost of accessing them.[4]

Techniques for mitigating fragmentation

Several techniques have been developed to fight fragmentation. They can usually be classified into two categories: proactive and retroactive. Because access patterns are hard to predict, these techniques are most often heuristic in nature, and may degrade performance under unexpected workloads.

Proactive techniques

Proactive techniques attempt to keep fragmentation to a minimum at the time data is written to the disk. Perhaps the simplest such technique is appending data to an existing fragment of the file where possible, instead of allocating new blocks for a new fragment.
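This append policy can be sketched as follows, assuming a free-space set indexed by block number; the fallback of taking the lowest-numbered free block is a simplifying assumption, and real allocators are far more sophisticated:

```python
def allocate_append(free_blocks, last_block):
    """Choose a block for appended data. Prefer the block immediately
    after the file's current last block, which extends the existing
    fragment in place; otherwise fall back to another free block,
    starting a new fragment. `free_blocks` is a set of free block
    numbers (a toy model of a free-space map)."""
    candidate = last_block + 1
    if candidate in free_blocks:
        free_blocks.remove(candidate)
        return candidate, False       # extended in place: no new fragment
    fallback = min(free_blocks)       # naive fallback policy (assumption)
    free_blocks.remove(fallback)
    return fallback, True             # had to start a new fragment

free = {4, 10, 11}
print(allocate_append(free, 3))   # block 4 is free right after block 3
print(allocate_append(free, 4))   # block 5 is taken: new fragment starts
```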

Most of today's file systems attempt to preallocate longer chunks, or chunks from different free space fragments, to files that are actively appended to. This largely avoids file fragmentation when several files are appended to concurrently, preventing them from becoming excessively intertwined.[2]

A relatively recent technique is delayed allocation in XFS and ZFS; the same technique is also called allocate-on-flush in reiser4 and ext4. This means that when the file system is being written to, file system blocks are reserved, but the locations of specific files are not laid down yet. Later, when the file system is forced to flush changes as a result of memory pressure or a transaction commit, the allocator will have much better knowledge of the files' characteristics. Most file systems with this approach try to flush files in a single directory contiguously. Assuming that multiple reads from a single directory are common, locality of reference is improved.[5] Reiser4 also orders the layout of files according to the directory hash table, so that when files are being accessed in the natural file system order (as dictated by readdir), they are always read sequentially.[6]
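The core idea of delayed allocation can be sketched as below. This is an illustrative model, not the actual XFS/ZFS/ext4 code: pending writes are merely counted per file, and block addresses are assigned in one pass at flush time, when every file's final size is known; sorting by directory keeps files from the same directory adjacent on disk:

```python
def flush(pending, next_free):
    """Assign on-disk block ranges to buffered files at flush time.
    `pending` maps (directory, filename) to the number of blocks
    buffered in memory; `next_free` is the first free block number.
    Because each file's final size is known here, every file gets one
    contiguous range, and sorting groups same-directory files together."""
    layout = {}
    for (dirname, fname), nblocks in sorted(pending.items()):
        layout[(dirname, fname)] = range(next_free, next_free + nblocks)
        next_free += nblocks
    return layout

pending = {("b", "x"): 2, ("a", "y"): 3, ("a", "z"): 1}
layout = flush(pending, next_free=100)
# each file is contiguous, and directory "a"'s files end up adjacent
```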

Retroactive techniques

Retroactive techniques attempt to reduce fragmentation, or its negative effects, after it has occurred. Many file systems provide defragmentation tools, which attempt to reorder fragments of files, and often also to increase locality of reference by keeping smaller files in directories, or directory trees, close to one another on the disk. Some file systems, such as HFS Plus, utilize otherwise idle time to defragment data on the disk in the background.
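A defragmenter's basic move can be sketched as copying a file's scattered blocks into one contiguous free run and freeing the old locations. The toy model below (a disk as a Python list, with None marking free blocks) is a hypothetical illustration; a real tool must also update file system metadata atomically and survive crashes mid-move:

```python
def defragment_file(disk, file_blocks, dest_start):
    """Move a file's blocks into one contiguous run starting at
    `dest_start` (assumed to be a large-enough free region that does
    not overlap the source blocks) and free the old blocks.
    Returns the file's new block list."""
    new_blocks = []
    for offset, src in enumerate(file_blocks):
        dst = dest_start + offset
        disk[dst] = disk[src]     # copy the data to its new location
        disk[src] = None          # the old block becomes free space
        new_blocks.append(dst)
    return new_blocks

disk = [None] * 20
disk[1], disk[5], disk[9] = "a", "b", "c"   # a file in three fragments
new = defragment_file(disk, [1, 5, 9], dest_start=12)
# the file now occupies blocks 12-14 contiguously
```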

References

  1. ^ Mark H. Kryder (2006-04-03). "Future Storage Technologies: A Look Beyond the Horizon" (PDF). Storage Networking World conference. Seagate Technology. Retrieved 2006-12-14.
  2. ^ a b L. W. McVoy, S. R. Kleiman (Winter 1991). "Extent-like Performance from a UNIX File System" (PostScript). Proceedings of USENIX Winter '91. Dallas, Texas: Sun Microsystems. pp. 33–43. Retrieved 2006-12-14.
  3. ^ Keith Arnold Smith (January 2001). "Workload-Specific File System Benchmarks" (PDF). Harvard University. Retrieved 2006-12-14.
  4. ^ John R. Douceur, William J. Bolosky (June 1999). "A Large-Scale Study of File-System Contents" (PDF). ACM SIGMETRICS Performance Evaluation Review. 27 (1). Microsoft Research: 59–70. ISSN 0163-5999. Retrieved 2006-12-14.
  5. ^ Adam Sweeney, Doug Doucette, Wei Hu, Curtis Anderson, Mike Nishimoto, Geoff Peck (January 1996). "Scalability in the XFS File System" (PDF). Proceedings of the USENIX 1996 Annual Technical Conference. San Diego, California: Silicon Graphics. Retrieved 2006-12-14.
  6. ^ Hans Reiser (2006-02-06). "The Reiser4 Filesystem" (Google Video). A lecture given by the author. Retrieved 2006-12-14.