
Log-structured file system

Posted on 2012-07-19 16:17 by kaffeel

A log-structured file system is a file system design first proposed in 1988 by John K. Ousterhout and Fred Douglis. Designed for high write throughput, it writes all updates to data and metadata sequentially to a continuous stream, called a log. The design was first implemented by Ousterhout and Mendel Rosenblum.

Rationale

Conventional file systems tend to lay out files with great care for spatial locality and make in-place changes to their data structures in order to perform well on optical and magnetic disks, which tend to seek relatively slowly.

The design of log-structured file systems is based on the hypothesis that this approach will no longer be effective, because ever-increasing memory sizes on modern computers lead to I/O becoming write-heavy: reads are almost always satisfied from the memory cache. A log-structured file system therefore treats its storage as a circular log and writes sequentially to the head of the log.
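
As a rough illustration, the core idea can be sketched in a few lines of C. This is a minimal model, not any real file system's code: the names (log_t, log_write), the fixed block counts, and the simple array-backed map are all invented for this example. Every update of a logical file block is appended at the head, and a map records where the newest copy of each logical block lives, so older copies farther back in the log become stale automatically.

    #include <stdint.h>
    #include <string.h>

    #define LOG_BLOCKS  1024     /* physical blocks in the circular log       */
    #define FILE_BLOCKS 1024     /* logical blocks addressable by files       */
    #define BLOCK_SIZE  4096

    typedef struct {
        uint8_t  data[LOG_BLOCKS][BLOCK_SIZE];
        uint32_t owner[LOG_BLOCKS];  /* which logical block each slot holds   */
        uint32_t map[FILE_BLOCKS];   /* logical block -> newest physical slot */
        uint32_t head;               /* next physical slot to write           */
        uint32_t tail;               /* oldest slot not yet reclaimed         */
    } log_t;

    /* Append a new version of logical block `lblk` at the head of the log.
     * Returns 0 on success, or -1 if the head has caught up with the tail
     * and the cleaner must reclaim space first. */
    int log_write(log_t *log, uint32_t lblk, const void *buf)
    {
        uint32_t next = (log->head + 1) % LOG_BLOCKS;
        if (next == log->tail)
            return -1;                      /* log is full */
        memcpy(log->data[log->head], buf, BLOCK_SIZE);
        log->owner[log->head] = lblk;
        log->map[lblk] = log->head;         /* any older copy is now stale */
        log->head = next;
        return 0;
    }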

This has several important side effects:

  • Write throughput on optical and magnetic disks is improved because writes can be batched into large sequential runs, and costly seeks are kept to a minimum.
  • Writes create multiple, chronologically advancing versions of both file data and metadata. Some implementations make these old file versions nameable and accessible, a feature sometimes called time-travel or snapshotting. This is very similar to a versioning file system.
  • Recovery from crashes is simpler. Upon its next mount, the file system does not need to walk all its data structures to fix any inconsistencies, but can reconstruct its state from the last consistent point in the log (a small recovery sketch follows this list).
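
To make the recovery point concrete, here is a continuation of the earlier sketch (same invented log_t type, still not any real implementation). It assumes the on-disk checkpoint records the map, the head, and the position up to which the log was known to be consistent; replaying the records from that position to the head, with later records overwriting earlier ones, rebuilds the in-memory map. Finding the true end of the log after a crash would in practice require per-record checksums or sequence numbers, which this sketch leaves out.

    /* Rebuild the in-memory map by replaying the log from the last
     * checkpointed position up to the head. Later records simply
     * overwrite earlier ones, so the newest version of each logical
     * block wins. (Detecting torn records at the very end of the log
     * would need checksums, omitted here.) */
    void log_recover(log_t *log, uint32_t checkpoint)
    {
        for (uint32_t pos = checkpoint; pos != log->head;
             pos = (pos + 1) % LOG_BLOCKS) {
            log->map[log->owner[pos]] = pos;
        }
    }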

Log-structured file systems, however, must reclaim free space from the tail of the log to prevent the file system from becoming full when the head of the log wraps around to meet it. The tail can release space and move forward by skipping over data for which newer versions exist farther ahead in the log. If there are no newer versions, then the data is moved and appended to the head.
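
Continuing the same sketch, the cleaning rule just described might look as follows. A slot at the tail is live only if the map still points at it; otherwise a newer version exists farther ahead and the slot can simply be skipped. A live slot is copied and re-appended at the head (via log_write, which updates the map) before the tail advances. Again, this is an illustrative model, not production code.

    /* Reclaim one slot from the tail of the log. Returns 0 if the tail
     * advanced, -1 if there was nothing to clean or no room to copy a
     * live block to the head. */
    int log_clean_tail(log_t *log)
    {
        if (log->tail == log->head)
            return -1;                                /* log is empty */
        uint32_t pos  = log->tail;
        uint32_t lblk = log->owner[pos];
        if (log->map[lblk] == pos) {
            /* Still the newest copy: append it at the head, which also
             * repoints map[lblk] so this old slot becomes stale. */
            if (log_write(log, lblk, log->data[pos]) != 0)
                return -1;                            /* head met the tail */
        }
        log->tail = (log->tail + 1) % LOG_BLOCKS;     /* release the slot */
        return 0;
    }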

To reduce the overhead incurred by this garbage collection, most implementations avoid purely circular logs and divide up their storage into segments. The head of the log simply advances into non-adjacent segments which are already free. If space is needed, the least-full segments are reclaimed first. This decreases the I/O load of the garbage collector, but becomes increasingly ineffective as the file system fills up and nears capacity.
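
In the same spirit, segment cleaning can be sketched by dividing the log of the earlier example into fixed-size segments and always picking the segment with the fewest live blocks, since that is the one that costs the least copying to reclaim. The segment size and the brute-force scan are only illustrative; real cleaners keep per-segment usage tables instead of rescanning the whole log.

    #define SEG_BLOCKS 64                        /* blocks per segment  */
    #define SEGMENTS   (LOG_BLOCKS / SEG_BLOCKS) /* segments in the log */

    /* Count the live blocks in a segment: a slot is live if the map
     * still points at it. */
    static uint32_t segment_live_blocks(const log_t *log, uint32_t seg)
    {
        uint32_t live = 0;
        for (uint32_t i = 0; i < SEG_BLOCKS; i++) {
            uint32_t pos = seg * SEG_BLOCKS + i;
            if (log->map[log->owner[pos]] == pos)
                live++;
        }
        return live;
    }

    /* Choose the least-utilized segment: the cheapest one to clean. */
    uint32_t pick_segment_to_clean(const log_t *log)
    {
        uint32_t best = 0, best_live = SEG_BLOCKS + 1;
        for (uint32_t seg = 0; seg < SEGMENTS; seg++) {
            uint32_t live = segment_live_blocks(log, seg);
            if (live < best_live) {
                best_live = live;
                best = seg;
            }
        }
        return best;
    }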

Implementations

Some kinds of storage media, such as flash memory and CD-RW, slowly degrade as they are written to and support only a limited number of erase/write cycles at any one location. Log-structured file systems are sometimes used on these media because they make fewer in-place writes and thus prolong the life of the device through wear leveling. Common file systems of this kind include:

  • UDF is a file system commonly used on optical discs.
  • JFFS and its successor JFFS2 are simple Linux file systems intended for raw flash-based devices.
  • UBIFS is a file system for raw NAND flash media and is also intended to replace JFFS2.
  • LogFS is a scalable flash file system for Linux that works on both raw flash media and block devices, also intended to replace JFFS2.
  • YAFFS is a raw NAND flash-specific file system for many operating systems (including Linux).

Disadvantages

The design rationale for log-structured file systems assumes that most reads will be optimized away by ever-enlarging memory caches. This assumption does not always hold:

  • On magnetic media, where seeks are relatively expensive, the log structure may actually make reads much slower, since it fragments files that conventional file systems normally keep contiguous with in-place writes.
  • On flash memory, where seek times are usually negligible, the log structure may not confer a worthwhile performance gain, because write fragmentation has much less impact on write throughput. However, many flash-based devices cannot rewrite part of a block: each block must first go through a (slow) erase cycle before it can be reprogrammed. Putting all writes into one block therefore helps performance, compared with writes scattered across many blocks, each of which must be copied into a buffer, erased, and written back; a toy cost model follows below.
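
The flash argument in the last bullet comes down to simple arithmetic, sketched below with invented but typical numbers (64 pages per erase block). Updating pages in place across many different erase blocks costs roughly one erase cycle per touched block, while appending the same updates sequentially into already-erased blocks costs about one erase per 64 pages written.

    #include <stdint.h>

    #define PAGES_PER_BLOCK 64   /* illustrative; real parts vary */

    /* Erase cycles when each of `n` page updates lands in a different,
     * already-programmed erase block: every block must be copied out,
     * erased, and reprogrammed. */
    uint32_t erases_scattered(uint32_t n)
    {
        return n;
    }

    /* Erase cycles when the same `n` updates are appended sequentially
     * into pre-erased blocks, as a log-structured layout does. */
    uint32_t erases_logged(uint32_t n)
    {
        return (n + PAGES_PER_BLOCK - 1) / PAGES_PER_BLOCK;  /* ceil(n/64) */
    }

For n = 256 scattered page updates, for example, that is 256 erase cycles versus 4 when the same updates are logged sequentially.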

References

  1. Rosenblum, Mendel and Ousterhout, John K. (June 1990). "The LFS Storage Manager". Proceedings of the 1990 Summer Usenix, pp. 315-324.
  2. Rosenblum, Mendel and Ousterhout, John K. (February 1992). "The Design and Implementation of a Log-Structured File System". ACM Transactions on Computer Systems, Vol. 10, Issue 1, pp. 26-52.