File defragmentation scheme for a log-structured file system

Jonggyu Park, Dong Hyun Kang, Young Ik Eom

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

15 Scopus citations

Abstract

In recent years, many researchers have focused on log-structured file systems (LFS) because they gracefully enhance random write performance and efficiently resolve the consistency issue. However, the write policy of LFS can cause a file fragmentation problem, which degrades the sequential read performance of the file system. In this paper, we analyze the relationship between file fragmentation and sequential read performance, considering the characteristics of the underlying storage devices. We also propose a novel file defragmentation scheme for LFS to effectively address the file fragmentation problem. Our scheme reorders the valid data blocks belonging to a victim segment based on their inode numbers during the cleaning process of LFS. In our experiments, our scheme reduces file fragmentation by up to 98.5% compared with the traditional LFS.
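The core idea of the scheme, as stated in the abstract, is to sort a victim segment's valid blocks by inode number (and file offset) before the cleaner writes them out, so that blocks of the same file land contiguously. A minimal Python model of that step is sketched below; the `Block` structure, the fragment-counting heuristic, and all names are illustrative assumptions, not the authors' implementation:

```python
from collections import namedtuple

# Hypothetical model: each valid block in a victim segment is tagged
# with its owning inode number and its block offset within that file.
Block = namedtuple("Block", ["inode", "offset"])

def reorder_victim_blocks(valid_blocks):
    """Sort valid blocks by (inode, offset) so that blocks of the same
    file become contiguous in the new segment written by the cleaner."""
    return sorted(valid_blocks, key=lambda b: (b.inode, b.offset))

def count_fragments(blocks):
    """Count fragment boundaries: a new fragment starts wherever
    adjacent blocks belong to different inodes or are non-consecutive
    offsets within the same file."""
    fragments = 0
    prev = None
    for b in blocks:
        if prev is None or b.inode != prev.inode or b.offset != prev.offset + 1:
            fragments += 1
        prev = b
    return fragments

# Interleaved writes from two files (inodes 7 and 9) fragment the segment.
victim = [Block(7, 0), Block(9, 0), Block(7, 1), Block(9, 1), Block(7, 2)]
print(count_fragments(victim))                      # 5 fragments before
print(count_fragments(reorder_victim_blocks(victim)))  # 2 fragments after
```

Reordering during cleaning is attractive because the cleaner must copy the victim segment's valid blocks anyway, so defragmentation comes at little extra I/O cost.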

Original language: English
Title of host publication: Proceedings of the 7th ACM SIGOPS Asia-Pacific Workshop on Systems, APSys 2016
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450342650
State: Published - 4 Aug 2016
Event: 7th ACM SIGOPS Asia-Pacific Workshop on Systems, APSys 2016 - Hong Kong, China
Duration: 4 Aug 2016 - 5 Aug 2016

Publication series

Name: Proceedings of the 7th ACM SIGOPS Asia-Pacific Workshop on Systems, APSys 2016

Conference

Conference: 7th ACM SIGOPS Asia-Pacific Workshop on Systems, APSys 2016
Country/Territory: China
City: Hong Kong
Period: 4/08/16 - 5/08/16

Keywords

  • Cleaning
  • File defragmentation
  • Log-structured file systems
