Browsing by Subject "Distributed Storage"
Item: Towards Performance and Reliability Enhancements of Enterprise Data Storage Systems (2018-08)
Haghdoost, Alireza

Enterprise data storage arrays are on the verge of transitioning to ultra-fast storage technologies such as Storage Class Memories (SCM) and NVMe interfaces. This transition demands simple, low-overhead software to drive and benchmark the new hardware without sacrificing storage performance and reliability. This dissertation proposes three methods to enhance the performance and reliability of enterprise storage systems.

First, we introduce new methods to replay intensive block I/O workloads more accurately. These methods can be used to reproduce realistic workloads for benchmarking a high-performance block storage device or system. Reproducing such an intensive workload with high accuracy is a great challenge unless the intrinsic performance variations in the I/O stack and the dependencies between block I/O requests are taken into account. Performance variations in the I/O stack prevent the captured workload from being reproduced accurately, even on the same storage device. We study the root cause of these performance variations in the Linux I/O stack and propose a high-fidelity replay method that replays an I/O workload more accurately on a similar (unscaled) storage device. Moreover, we propose a scalable replay method that can infer I/O dependencies and correctly propagate I/O-related performance gains along dependency chains during a workload replay on a faster (scaled) storage device. We evaluate our replay methods with multi-dimensional accuracy metrics and verify that they reproduce realistic I/O workloads more accurately than other replay tools.

Second, we introduce TxRAID, a transactional crash-recovery method for all-flash RAID arrays that can close the write-hole gap with negligible overhead and prevent silent data corruption after a crash. TxRAID guarantees write atomicity without non-volatile memory or an extra journaling layer, and it simplifies the I/O stack in both software and hardware RAID arrays. We propose limited enhancements to RAID, SSD firmware, and the SCSI interface to coordinate the crash recovery of the individual drives with each other. During the recovery process, TxRAID rolls back partially written transactions to their last consistent state by leveraging the invalid data pages that co-exist on the SSDs. We have developed a trace-driven simulator to evaluate TxRAID with a wide range of synthetic and realistic workloads and to compare it with existing recovery methods in the Linux kernel. TxRAID can plug the write-hole gap with a negligible impact on I/O performance.

Finally, we propose the Offline Write Buffer Policy (OWBP), which can be used for the performance evaluation of the write buffer in SSDs. Since flash memory can tolerate only a limited number of erase cycles before it wears out, a write buffer is usually placed on top of the flash memory in an SSD to coalesce write requests and reduce erase operations. OWBP can estimate a lower bound on these erase operation counts and determine how much further a practical (online) write-buffer replacement policy could reduce the flash write traffic.
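The first contribution hinges on inferring dependencies between captured block I/O requests and propagating timing gains along those chains. Below is a minimal sketch of that idea, assuming a simple trace record format and a completed-before-submitted dependency rule; the record layout, the issue_fn callback, and the single-predecessor simplification are illustrative assumptions, not the dissertation's actual replay engine.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class IORequest:
        submit: float      # capture-time submission timestamp (seconds)
        complete: float    # capture-time completion timestamp (seconds)
        lba: int
        size: int
        deps: list = field(default_factory=list)  # indices this request waits on

    def infer_dependencies(trace):
        # Heuristic: request i depends on the latest earlier request that
        # completed before i was submitted (one predecessor kept for brevity).
        for i, req in enumerate(trace):
            latest = None
            for j in range(i):
                if trace[j].complete <= req.submit:
                    if latest is None or trace[j].complete > trace[latest].complete:
                        latest = j
            if latest is not None:
                req.deps.append(latest)
        return trace

    def replay(trace, issue_fn, speedup=1.0):
        # Synchronous for brevity; a real replayer issues requests asynchronously.
        finished = {}  # request index -> wall-clock completion on replay device
        for i, req in enumerate(trace):
            dep_done = max((finished[d] for d in req.deps),
                           default=time.monotonic())
            # Think time observed at capture, compressed for a scaled device.
            last_dep = max((trace[d].complete for d in req.deps),
                           default=req.submit)
            think = max(0.0, req.submit - last_dep) / speedup
            wake = dep_done + think
            if wake > time.monotonic():
                time.sleep(wake - time.monotonic())
            issue_fn(req)                  # e.g., an O_DIRECT read or write
            finished[i] = time.monotonic()

The key design point is that a dependent request is issued as soon as its predecessors complete on the replay device, plus the scaled think time, rather than at its original capture timestamp, so a faster device naturally pulls the whole dependency chain forward.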
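TxRAID's recovery step, as summarized above, rolls back stripe writes that did not complete on every member drive by exploiting the stale page versions that naturally co-exist on flash. The following toy model illustrates that coordination; the per-page version lists, the persisted_txids query, and the full-stripe-write assumption are hypothetical simplifications, not the proposed SCSI or firmware interface.

    class ToySSD:
        def __init__(self):
            # lba -> list of (txid, data); older entries model "invalid"
            # flash pages that still physically co-exist on the SSD.
            self.versions = {}

        def write(self, lba, data, txid):
            self.versions.setdefault(lba, []).append((txid, data))

        def persisted_txids(self):
            return {txid for vs in self.versions.values() for txid, _ in vs}

        def rollback(self, bad_txids):
            # Drop versions written by incomplete transactions, reverting
            # each LBA to its last consistent (pre-crash) version.
            for lba, vs in self.versions.items():
                self.versions[lba] = [v for v in vs if v[0] not in bad_txids]

    def recover(drives):
        # Assume full-stripe writes, so a committed transaction must appear
        # on ALL member drives; anything else is a partial write behind the
        # write hole and gets rolled back everywhere.
        persisted = [d.persisted_txids() for d in drives]
        committed = set.intersection(*persisted)
        partial = set().union(*persisted) - committed
        for d in drives:
            d.rollback(partial)
        return partial

Because the old page versions are already on flash, the rollback needs neither NVRAM nor a separate journal; the coordination cost reduces to exchanging transaction IDs among the drives after a crash.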
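The OWBP idea of bounding erase counts from below can be illustrated with a clairvoyant, Belady-style eviction rule: with the entire future write trace known, evict the buffered page whose next write lies farthest in the future, and count destages to flash. This sketch is one plausible offline heuristic under those assumptions, not necessarily the exact OWBP algorithm.

    from collections import defaultdict

    def offline_flash_writes(trace, buf_pages):
        # trace: sequence of logical page numbers being written
        # buf_pages: write-buffer capacity in pages
        # Returns pages destaged to flash (a proxy for erase pressure).
        next_write = defaultdict(list)  # page -> future positions (reversed)
        for pos in range(len(trace) - 1, -1, -1):
            next_write[trace[pos]].append(pos)

        buffered, destages = set(), 0
        for page in trace:
            next_write[page].pop()      # consume the current occurrence
            if page in buffered:
                continue                # write coalesced in the buffer
            if len(buffered) >= buf_pages:
                # Evict the page rewritten farthest in the future (or never).
                victim = max(buffered, key=lambda p: next_write[p][-1]
                             if next_write[p] else float("inf"))
                buffered.remove(victim)
                destages += 1
            buffered.add(page)
        return destages + len(buffered)  # remaining pages flush once at the end

Comparing this offline destage count against what an online replacement policy achieves on the same trace shows how much headroom, if any, remains for smarter online policies, which is exactly the gap OWBP is meant to quantify.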