Ganesan, Pradeep
2013-09-11
2013-09-11
2013-05
https://hdl.handle.net/11299/156620
University of Minnesota M.S. thesis. May 2013. Major: Computer science. Advisor: David Hung-Chang Du. 1 computer file (PDF); vii, 41 pages.

Data deduplication, an efficient technique for eliminating redundant bytes in the data to be stored, is widely used in data backup and disaster recovery. The elimination is achieved by chunking the data and identifying duplicate chunks. Along with data reduction, it also delivers commendable backup and restore speeds. While the backup process corresponds to the write path of a dedupe system, the restore process defines its read path. Although much emphasis and analysis have gone into expediting the write process, the read performance of a dedupe system remains comparatively slow. This work proposes a method to improve read performance by examining recently accessed chunks and their locality in the backup set (datastream). Based on this study of the distribution of chunks in the datastream, a few chunks are identified to be accumulated and stored so that future read requests are served better. This identification and accumulation happen on cached chunks. A small degree of duplication of the deduplicated data is thereby introduced, but by later caching these chunks together during the restore of the same datastream, read performance is improved. Finally, read performance results obtained through experiments with trace datasets are presented and analyzed to evaluate the design.

en-US
Backup storage
Data deduplication
Read cache
Read performance
Read performance enhancement in data deduplication for secondary storage
Thesis or Dissertation
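
The accumulation idea summarized in the abstract can be pictured with a minimal sketch. The sketch below is illustrative only and is not the design evaluated in the thesis; the class name RestoreCache, the window and spread_threshold parameters, and the container layout are assumptions introduced here. It shows one way a restore-side chunk cache could detect runs of the datastream whose chunks are scattered across many containers and copy them into a fresh container (a small, deliberate amount of re-duplication) so a later restore of the same datastream can read them back together.

```python
# Illustrative sketch only; names, parameters, and thresholds are hypothetical
# and do not come from the thesis.
from collections import OrderedDict


class RestoreCache:
    """LRU cache of restored chunks that also tracks datastream locality."""

    def __init__(self, capacity=1024, window=8, spread_threshold=4):
        self.capacity = capacity                  # max cached chunks (LRU eviction)
        self.window = window                      # locality window in the datastream
        self.spread_threshold = spread_threshold  # container spread that triggers accumulation
        self.cache = OrderedDict()                # chunk_id -> (container_id, data)
        self.recent = []                          # recently accessed chunk_ids, in stream order

    def access(self, chunk_id, container_id, data):
        """Record a restore-time access; return a chunk group to accumulate, if any."""
        self.cache[chunk_id] = (container_id, data)
        self.cache.move_to_end(chunk_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict the least recently used chunk

        self.recent.append(chunk_id)
        if len(self.recent) < self.window:
            return None

        window = self.recent[-self.window:]
        containers = {self.cache[c][0] for c in window if c in self.cache}
        if len(containers) >= self.spread_threshold:
            # Contiguous in the datastream but scattered across containers:
            # duplicate these cached chunks into one new container.
            group = [(c, self.cache[c][1]) for c in window if c in self.cache]
            self.recent.clear()
            return group
        return None


if __name__ == "__main__":
    cache = RestoreCache(capacity=16, window=4, spread_threshold=3)
    new_containers = []
    # Simulated restore: (chunk_id, container_id) pairs in datastream order.
    stream = [("c1", 0), ("c2", 7), ("c3", 3), ("c4", 9), ("c5", 9)]
    for cid, container in stream:
        group = cache.access(cid, container, data=b"...")
        if group:
            new_containers.append(group)          # re-duplicated chunks stored together
    print(len(new_containers), "accumulated container(s)")
```

In this reading, the trade-off named in the abstract is explicit: the accumulated container stores copies of chunks that already exist elsewhere, but a subsequent restore of the same datastream can load that container once instead of seeking across several containers.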