Solid-state storage devices have become widely available in recent years and can replace
disk drives in many applications. Even as their performance rises rapidly, prices of the
NAND flash devices used to build them continue to fall. Flash-based SSDs
have been proposed for use in computing environments from high-performance server
systems to lightweight laptops.
High-performance SSDs can perform hundreds of thousands of I/O operations per
second. To achieve this performance, drives make use of parallelism and complex flash
management techniques to overcome flash device limitations. These characteristics
cause SSD performance to depart significantly from that of disk drives under some
workloads, which creates both opportunities and pitfalls in performance and in benchmarking.
In this paper we discuss the ways in which high-performance SSDs differ
from consumer SSDs and from disk drives, and we set out guidelines for measuring
their performance based on worst-case workloads.
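To illustrate the worst-case-workload guideline, the sketch below times uniformly random small reads over a preallocated region, the access pattern that defeats drive-level caching and readahead. This is an illustration only, not the paper's methodology: it reads an ordinary temporary file through the page cache, and the 4 KiB block size, region size, and read count are all assumptions. A real measurement would issue `O_DIRECT` reads against the raw block device with many outstanding requests.

```python
# Hypothetical worst-case random-read microbenchmark (illustrative sketch).
import os
import random
import tempfile
import time

BLOCK = 4096                      # 4 KiB requests: a common worst case for SSDs
REGION = 16 * 1024 * 1024         # size of the scratch region to read from
N_READS = 2000                    # number of random reads to time

# Create a scratch file standing in for the device under test.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(REGION))
os.fsync(fd)

blocks = REGION // BLOCK
start = time.perf_counter()
for _ in range(N_READS):
    # Uniformly random, block-aligned offsets avoid sequential locality.
    off = random.randrange(blocks) * BLOCK
    data = os.pread(fd, BLOCK, off)
elapsed = time.perf_counter() - start

iops = N_READS / elapsed
print(f"{iops:.0f} IOPS, {iops * BLOCK / 1e9:.3f} GB/s")

os.close(fd)
os.unlink(path)
```

Because the pattern is random and block-aligned, the figure reported is a lower bound of sorts: any caching or prefetching in a real drive can only help sequential or repeated accesses, not this one.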
We use these measurements to evaluate improvements to Linux I/O driver architecture
for a prototype high-performance SSD. We demonstrate potential performance
improvements in I/O stack architecture and device interrupt handling, and discuss the
impact on other areas of Linux system design. As a result of these improvements, we
reach a significant milestone for single-drive performance: over one million
random read IOPS at a throughput of 1.4 GB/s.