Why IOPS Don’t Matter

The commonly accepted measure of performance for storage systems has long been IOPS, but this isn’t the best measure. Find out what is.



This blog on Why IOPS Don’t Matter was originally published in 2014. Some figures may have changed.

The commonly accepted measure of performance for storage systems has long been IOPS (input/output operations per second). The first storage system I ever installed at a customer site had controllers rated for 20K IOPS and we all thought it was awesome.

Over the years, I’ve developed an extensive knowledge of storage IO benchmarking tools and have demonstrated hero IOPS numbers many times on a number of different storage systems.

Today, I’ve learned that IOPS don’t matter.

Who Needs All Those IOPS Anyway?

Most systems easily deliver over 100K IOPS, and some vendors tout figures in the millions. For instance, Pure Storage rates its FA-400 systems at 400K IOPS.

However, do customers actually need this level of performance? In practice, very few do. In the enterprise storage sector, particularly in Tier 1 environments supporting a mix of applications and servers, it’s rare to find customers requiring even 100K IOPS. Historically, when customers are asked about their peak IOPS, the answers typically range from 30K to 40K, with the exceptions mostly in VDI deployments or certain database workloads. At this point, it’s barely a question worth asking: Pure Storage’s systems exceed the performance needs of 99% of users.

How Do You Measure IOPS?

To evaluate a storage system’s performance, standard benchmarking tools like Iometer or Vdbench are commonly used to measure IOPS across different IO profiles. However, these profiles often rely on outdated assumptions.

Most benchmarks focus on smaller block sizes (like 4KB or 8KB), while real-world average block sizes in mixed workload environments tend to be larger, typically around 32KB to 64KB. Less than 5% of systems report an average block size under 10KB, with only about 15% below 20KB. Additionally, single applications frequently generate diverse IO patterns; even a solitary database instance will utilize varying IO types for different components (such as data files, logs, and indexes).
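To make the block-size effect concrete, here is a minimal back-of-the-envelope sketch in Python. The 1 GiB/s bandwidth figure is an assumption chosen purely for illustration, not a measurement of any particular array; the point is that, for the same amount of data moved per second, a 4KB synthetic test reports a far larger IOPS number than a realistic 32KB or 64KB mixed workload.

```python
# Back-of-the-envelope sketch: how block size inflates a headline IOPS number.
# The bandwidth figure below is an assumption for illustration only.

BANDWIDTH_BYTES_PER_SEC = 1 * 1024**3  # assume 1 GiB/s of usable bandwidth

def iops_at_block_size(block_size_bytes: int) -> float:
    """IOPS achievable when bandwidth, not the controller, is the limit."""
    return BANDWIDTH_BYTES_PER_SEC / block_size_bytes

for label, bs in [("4 KB benchmark", 4 * 1024),
                  ("8 KB benchmark", 8 * 1024),
                  ("32 KB real workload", 32 * 1024),
                  ("64 KB real workload", 64 * 1024)]:
    print(f"{label:>20}: {iops_at_block_size(bs):>10,.0f} IOPS")

# Same 1 GiB/s of data moved, but ~262,000 IOPS at 4 KB versus ~16,000 IOPS
# at 64 KB: the "hero number" depends heavily on the block size chosen.
```

The same arithmetic works in reverse: quoting an IOPS figure without stating the block size (and the read/write mix) says very little about how much real work an array can actually do.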

Consequently, synthetic benchmarks may provide a theoretical number, but this figure often fails to reflect actual performance in customer environments.

And what about latency?

Can you use the latency measured with these benchmark tools to evaluate and compare storage systems? Not really.

Even if we ignore the fact that these tools tend to report average latencies and miss outliers (a single IO taking longer than the others in the same transaction can stall the whole transaction), the latency of an IO also varies with block size. Since the block sizes used in these benchmarks aren’t realistic, the latency measured alongside them is pretty much useless too.
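To illustrate how an average hides the outlier that actually stalls a transaction, here is a small Python sketch. The latency samples are invented for illustration and don’t come from any benchmark tool or array:

```python
import statistics

# Hypothetical per-IO latencies (ms) for a transaction made of 100 serial IOs:
# 99 fast IOs and one slow outlier. The numbers are invented for illustration.
latencies_ms = [0.5] * 99 + [50.0]

average_ms = statistics.mean(latencies_ms)   # what many tools report
worst_ms = max(latencies_ms)                 # the IO users actually feel
transaction_ms = sum(latencies_ms)           # serial IOs: every IO adds up

print(f"average IO latency : {average_ms:.2f} ms")   # ~1.0 ms, looks fine
print(f"worst IO latency   : {worst_ms:.1f} ms")     # 50 ms, the real problem
print(f"transaction time   : {transaction_ms:.1f} ms")
```

The average looks healthy, yet the user-visible transaction time is dominated by the single slow IO the average never shows.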

While IOPS undoubtedly offers valuable insights into how quickly a storage device can handle read and write operations, it doesn’t provide a holistic view of real-world performance. The obsession with achieving higher IOPS figures can lead to a tunnel vision that overlooks other crucial factors such as latency, throughput, and overall system architecture. Modern applications and workloads are becoming increasingly complex, demanding a more nuanced approach to evaluating storage performance beyond a single metric.

So if neither IOPS nor latency are a good measure of the performance of a storage system, what is then?


Run the app, not a benchmark tool

The only real way to understand how fast an application will run on a given storage system is to run the application on this storage system. Period.

When you run a synthetic benchmark tool such as Iometer, the only application you’ll measure is Iometer.

Ideally, move your production applications to the storage system you’re evaluating. If you can’t move the app itself, move a copy of it, or your test/dev servers and instances, and run the exact same tasks your users would run against the production app.

Then, measure how this app behaves with your real data and your real workload.

Measure application metrics, not storage metrics

What’s the point of measuring IOPS and latency anyway? After all, these are metrics that are relevant only to the storage admin.

Will your application owner and end users understand what IOPS means to them? Does your CIO care about storage latency?

No. Outside of the storage team, these metrics are useless. The metrics that application owners and users actually care about are the ones that relate to their apps, so it’s application and user metrics that should be measured (a simple way to time one of them is sketched after the list below). For example:

  • How long does this critical daily task take to execute in the application?
  • How fast can your BI make data available to decision makers?
  • How often can you refresh the test and dev instances from the production database?
  • How long does it take to provision all of these virtual servers the dev team needs every day?
  • How many users can you run concurrently without them complaining about performance issues?
  • How quickly can this OLAP cube be rebuilt? Can it now be rebuilt every day instead of every week?
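During a proof of concept, one straightforward way to capture this kind of metric is to time the task end to end from the application’s point of view. The sketch below is a minimal illustration: run_nightly_batch() is a hypothetical placeholder for whatever critical task matters to you (a report, a test/dev refresh, a cube rebuild), and the target is an assumed business threshold, not a storage figure.

```python
import time

def run_nightly_batch() -> None:
    """Hypothetical placeholder: invoke the real application task here
    (generate the report, refresh the test/dev copy, rebuild the cube...)."""
    time.sleep(2)  # stand-in for the real work

TARGET_SECONDS = 4 * 3600  # assumed business target: finish within 4 hours

start = time.perf_counter()
run_nightly_batch()
elapsed = time.perf_counter() - start

print(f"Critical task finished in {elapsed:.1f} s "
      f"({'meets' if elapsed <= TARGET_SECONDS else 'misses'} the target)")
```

Run the same measurement on your current system and on the system under evaluation; the difference in task time, not the difference in IOPS, is what your application owners and users will actually notice.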

Take the time to test properly and measure what really matters

Testing a storage system in your environment with your applications is the only responsible way of evaluating it. Don’t just believe spec sheets or vendor claims. Test a real system in a proof of concept, in your environment, with your data.

But just as important is measuring the correct metrics. At the end of the day, a storage system’s job is to serve data to applications. It’s the impact on these applications that should be measured.

If you want to evaluate a great all-flash array, contact your Pure representative today.

We’d love to show you that our systems do what we say they do and to work with you to understand what really matters to your users, application owners, and ultimately, your business.

For years, IOPS has been considered a key metric in evaluating the efficiency and speed of storage systems. However, it’s time to challenge the conventional wisdom and question whether IOPS is truly the be-all and end-all indicator of storage performance.

In the pursuit of optimal storage solutions, it’s essential to recognize that IOPS alone may not accurately represent the user experience or efficiency of a system. Factors like response time and overall throughput are equally critical, if not more so, in determining how well a storage system can handle diverse workloads in real-world scenarios.

Looking ahead, the future of storage undoubtedly lies in the realm of flash storage technology. Flash storage, characterized by its speed, reliability, and low latency, has revolutionized the way data is stored and accessed. Unlike traditional hard disk drives (HDDs), flash storage relies on solid-state technology, eliminating the mechanical components that can lead to slower access times and higher failure rates. As the cost of flash storage continues to decline, it becomes increasingly evident that the days of HDDs are numbered.

While HDDs have served as the backbone of data storage for decades, their limitations are becoming more apparent in today’s data-driven world. The inherent mechanical nature of HDDs introduces points of failure, increased power consumption, and slower read/write speeds compared to their flash counterparts. As technology evolves, the advantages of flash storage in terms of speed, efficiency, and durability make it the clear frontrunner in the storage race.

In conclusion, IOPS, once hailed as the ultimate metric in storage performance, should not be the sole determinant when evaluating storage solutions. A comprehensive understanding of latency, throughput, and overall system architecture is crucial for making informed decisions. As we navigate the ever-changing landscape of technology, it’s evident that flash storage is not just a trend but the future of efficient and high-performance storage, leaving traditional hard disk storage in its wake.
