This design introduces new constructs in the areas of failure domains, stateless storage controllers, and data protection that represent a significant advancement over traditional storage architectures.

Availability in Traditional Storage Arrays

Traditional active/active and scale-out storage array architectures can access all CPU, memory and disk/flash resources in order to provide maximum performance. This model is common throughout the storage industry and supports storage operations well under normal conditions. Challenges arise with planned and unplanned events, because the failure domains in these models are designed for availability at the cost of performance. These systems 'share the fate' of their running state when hardware goes offline: in failure scenarios, the remaining resources are tasked with serving a workload greater than they can deliver at normal levels, and storage performance suffers.

Lose a storage controller and an array actually loses a percentage of its CPU, memory, IO ports and flash write buffer, which is often battery-backed NVRAM or UPS-backed DRAM. Lose a drive from a RAID set or a plex in a replica, and available IOPS are consumed by IO-intensive data rebuilds or large-capacity volume replica operations.

Both of these conditions can have a significant impact on performance. The impact of a controller loss is a function of the number of failures within the availability fault domain. For example, losing one controller in an active/active array can mean a 50% performance loss, while losing one controller in a two-node scale-out array (comprised of HA controller pairs, four controllers in total) can mean a performance loss peaking at 25%.
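The arithmetic behind those figures can be sketched with a simplified model, assuming load is spread evenly across controllers and every controller runs at full utilization:

```python
def post_failure_capacity(total_controllers: int, failed: int) -> float:
    """Fraction of aggregate controller resources remaining after failures,
    assuming each controller contributes equally and runs fully utilized."""
    return (total_controllers - failed) / total_controllers

# Active/active array: 2 controllers, both serving IO at full utilization.
loss_active_active = 1 - post_failure_capacity(2, 1)   # 0.50 -> up to 50% loss

# Two-node scale-out array built from HA pairs: 4 controllers in total.
loss_scale_out = 1 - post_failure_capacity(4, 1)       # 0.25 -> up to 25% loss

print(f"active/active loss: {loss_active_active:.0%}")  # active/active loss: 50%
print(f"scale-out loss: {loss_scale_out:.0%}")          # scale-out loss: 25%
```

The model is deliberately naive, but it captures the core problem: when every controller's resources are committed during normal operation, any controller loss translates directly into lost performance.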

The Modern Reality of Performance Loss

Providing hundreds of thousands of IOPS is an order of magnitude greater responsibility than providing tens of thousands. Say your traditional array provides 30,000 IOPS at less than 15ms of latency. What's the impact on users of any application that must operate during a planned maintenance window where storage performance drops to 15,000 IOPS at 30ms?

This is a rhetorical question. Evening and weekend maintenance windows exist exclusively to mitigate the impact of reduced levels of service. With this said, maintenance windows are facing an extinction event as IT departments support a greater number of applications and services that have an always-available requirement. This shift in enterprise IT is the direct byproduct of our increasing expectations as consumers, as over the years we have come to demand an always-available online experience.

Reinventing Storage Availability

When you migrate an application or consolidate a set of applications onto an all-flash storage array, there is no room for a drop in service level. When servicing hundreds of thousands of IOs with sub-millisecond latency, a 25% or 50% loss in performance could be catastrophic to a business.

What good is the performance of an all-flash storage array if the vendor cannot ensure that performance is delivered 100% of the time? What matters is not the ratio of the decrease but the magnitude of the loss when hundreds of thousands of IOPS are no longer available to a set of applications that require them to meet a committed service level.

The engineering team at Pure Storage designed an all-flash array storage architecture with multiple failure domains that ensure a uniform performance level in 100% of operating conditions. The key elements that enable this capability include the following:

1. The Foundation: A Stateless Architecture

The Pure Storage FlashArray builds upon the common two-node controller model with one significant architectural difference: the NVRAM used to acknowledge writes with low latency and to enable services like adaptive data reduction is located in the first two storage shelves, not in the storage controllers. By relocating NVRAM to the persistent storage layer (the flash SSDs), Pure Storage engineers have created a stateless storage controller architecture.

With storage controllers designed solely to provide CPU, memory and IO ports, the system supports a set of non-disruptive storage operations, including SSD capacity expansion, upgrades of the Purity Operating Environment software, and upgrades of the FlashArray storage controllers from one generation to the next.
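Why statelessness enables non-disruptive controller swaps can be illustrated with a toy model (my own sketch, not Purity internals — the class names and methods here are hypothetical):

```python
class Shelf:
    """Persistent storage layer: in this model, the NVRAM write log
    lives here, not in any controller."""
    def __init__(self):
        self.nvram_log = []

class Controller:
    """Stateless controller: contributes only CPU, memory and IO ports,
    and holds no durable state of its own."""
    def __init__(self, shelf: Shelf):
        self.shelf = shelf

    def write(self, block: str) -> None:
        # A write is acknowledged once it is durable in the shelf's NVRAM.
        self.shelf.nvram_log.append(block)

shelf = Shelf()
old_controller = Controller(shelf)
old_controller.write("block-A")

# Swap in a new (or upgraded) controller: it attaches to the same shelf,
# so no acknowledged write is lost in the transition.
new_controller = Controller(shelf)
assert new_controller.shelf.nvram_log == ["block-A"]
```

Because the state a controller needs already lives in the shelves, replacing the controller replaces only compute, which is what makes software upgrades and generation-to-generation hardware swaps non-disruptive.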

2. Ensure CPU & Memory Resource Availability

During normal operating conditions, the FlashArray limits CPU and memory utilization to 50% of the physical CPU and memory capacity. The architecture is elegant in its simplicity: during normal operations both controllers provide symmetric active/active access to hosts, yet only one controller processes IO to the SSDs. The controller not accessing the SSDs acts as an IO passthrough for the FC and iSCSI IO ports it owns.

During system maintenance or in the event of a controller failure, the FlashArray operates with 100% of the CPU and memory resources of the available controller. This provides the same level of controller resources to service IO as is available during normal operating conditions. This design provides a failure domain that avoids the 'availability at a loss' of shared-fate storage architectures, ensuring consistent delivery of flash storage performance during normal, maintenance and failure operating conditions.
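The reason a controller loss is performance-neutral falls out of the arithmetic, which can be sketched as a toy model (my own illustration of the 50% cap described above, not Purity internals):

```python
def io_serving_capacity(controllers_online: int, normal_cap: float = 0.5) -> float:
    """Resources available to serve IO, in units of one controller's full
    capacity. With both controllers online, each is capped at 50% utilization
    (2 x 0.5 = 1.0); with one controller, the cap is lifted and the survivor
    runs at 100% (1 x 1.0 = 1.0)."""
    if controllers_online >= 2:
        return 2 * normal_cap
    return 1.0

# The failure domain is sized so a controller loss changes nothing:
assert io_serving_capacity(2) == io_serving_capacity(1) == 1.0
```

In other words, the array never promises more performance than a single controller can deliver, so losing a controller removes headroom rather than serving capacity.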

In addition, Purity adapts to the workload conditions. Should the load spike to a point where the FlashArray requires 100% of the CPU and memory resources then Purity will prioritize the serving of IO over secondary processes like redundant elements within the data reduction engine. I cover this topic and the reasoning for the design decision in greater depth in the post “Adaptive Data Reduction”.

To ensure a consistent customer experience, these settings are non-configurable; they are in effect in all performance benchmarks and customer deployments. By guaranteeing the same level of CPU and memory resources under all conditions, Pure Storage can provide 100% of IO throughput at consistent latency, and can truly deliver on the non-disruptive capabilities provided by the stateless architecture.

3. Zero-Impact RAID Rebuilds

CPU and memory aren't the only components that can impact performance. While flash is orders of magnitude faster than disk, it can still impact performance in failure scenarios.

Our engineers developed RAID-HA to provide data protection in the FlashArray. RAID-HA is an adaptive form of RAID that provides a minimum of dual-parity protection, self-heals when an SSD fails or is removed, automatically increases the parity protection on highly reduced data sets, and performs priority-based RAID reconstructions that begin with the least-protected data in the RAID set.

Single-parity RAID technologies expose datasets to data loss during failure and rebuild events. This risk forces arrays with such implementations to prioritize array resources for the reconstruction of data over the serving of IO. This antiquated design can impact performance and exposes customers to complete data loss should a bad cell be encountered during the rebuild process. The dual-parity protection of RAID-HA ensures that data remains protected for the duration of a failure. Again, Purity adapts to workloads, prioritizing the serving of IO should load spike during the rebuild process.
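The priority-based reconstruction described above can be sketched as follows (a hypothetical model, not Purity's actual scheduler): stripes with the least remaining redundancy are rebuilt first, so the most exposed data regains protection soonest.

```python
from dataclasses import dataclass

@dataclass
class Stripe:
    stripe_id: int
    parity_shards: int   # parity shards configured for this stripe
    failed_shards: int   # shards lost to the failed/removed SSD(s)

    @property
    def remaining_redundancy(self) -> int:
        # How many further failures this stripe can absorb before data loss.
        return self.parity_shards - self.failed_shards

def rebuild_order(stripes):
    """Rebuild the least-protected stripes first."""
    return sorted(stripes, key=lambda s: s.remaining_redundancy)

stripes = [
    Stripe(0, parity_shards=2, failed_shards=0),  # untouched: 2 failures to spare
    Stripe(1, parity_shards=2, failed_shards=2),  # zero redundancy left: most urgent
    Stripe(2, parity_shards=3, failed_shards=2),  # extra parity on reduced data: 1 left
]
print([s.stripe_id for s in rebuild_order(stripes)])  # [1, 2, 0]
```

Ordering the rebuild this way shrinks the window in which any data sits one failure away from loss, which is what lets the array keep serving IO at normal priority instead of racing through a full-speed rebuild.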

I'll cover more on data protection in the upcoming post, "RAID-HA: Adaptive Data Protection".

Together, the stateless controller architecture, the ability to ensure CPU and memory resources, and the zero-impact benefits of RAID-HA produce a non-disruptive storage architecture that enables customers to adopt new software capabilities, address bugs, and expand capacity and performance without scheduling maintenance windows.

Is the FlashArray an Active/Active or Active/Passive Architecture?

The Pure Storage FlashArray is a symmetric active/active array architecture that allows all FC and iSCSI IO ports to be accessed simultaneously, which simplifies storage networking operations like connectivity, multipathing and troubleshooting. With that said, Pure Storage engineers govern CPU and memory resource utilization with a multitude of software and hardware mechanisms. While both controllers service host IO requests, only one processes IO operations to the SSDs. At the end of the day, this active/active architecture operates similarly to a traditional storage array under normal operations and is truly unique in failure and maintenance scenarios.

Active/active or active/passive may be debatable; either way, Pure has delivered a new form of failure domain: a better architectural design that provides non-disruptive storage operations and delivers consistent and predictable performance through planned and unplanned outages.

Enabling the Flash Datacenter


One of the advantages of developing a storage platform from the ground up is hindsight: the ability to design an architecture optimized for the modern needs of an always-on operational model. Pure Storage has changed the concept of non-disruptive storage operations for all-flash storage, with availability previously seen only in virtual infrastructures! Applications and their users never experience drops in performance, and storage administrators rejoice in regaining their evenings and weekends. Flash is changing the datacenter, and I hope I have helped explain how Pure Storage is making this a reality.

The post appeared first on The Virtual Storage Guy.