Many storage users are familiar with ALUA, or Asymmetric Logical Unit Access. ALUA describes storage where some paths are unavailable or deliver lower performance because of standby controllers, volume-to-controller affinity, or other architectural reasons. The Pure Storage FlashArray provides symmetric access to storage — any IO to any volume on any port always gets the same performance.
The FlashArray has two controllers for high availability, and providing fast access to the same storage through both controllers requires a high-performance connection between them. Since the array can serve multiple gigabytes per second to initiators, the back-end interconnect needs similarly high throughput to keep up. And because the array needs to handle hundreds of thousands of IOs per second, each with sub-millisecond latency, the back-end also needs to handle high message rates with low latency.
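A quick back-of-envelope calculation shows why the interconnect's throughput and message-rate requirements go hand in hand. The IOPS and block-size figures below are illustrative assumptions, not FlashArray specifications:

```python
# Back-of-envelope interconnect sizing. The numbers are illustrative
# assumptions, not FlashArray specs.

iops = 300_000           # assumed IO operations per second
block_size = 8 * 1024    # assumed 8 KiB per IO, in bytes

# Bandwidth the back-end must sustain if every IO crosses it.
bandwidth_bytes_per_sec = iops * block_size
print(f"{bandwidth_bytes_per_sec / 1e9:.2f} GB/s")  # prints "2.46 GB/s"

# Per-IO latency budget implied by sub-millisecond service time: the
# interconnect hop must consume only a small fraction of 1 ms.
budget_us = 1_000  # 1 millisecond, in microseconds
print(f"{budget_us} us total budget per IO")
```

Even at these modest assumed numbers, the back end must move multiple gigabytes per second while keeping its per-message cost to a small slice of the millisecond budget.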
In fact, this back-end interconnect is one area where we’ve made some interesting improvements in FlashArray//m. To understand this, let’s first look at the back-end interconnect in the familiar FA-400 series arrays. The controllers are separate boxes, which talk to each other over InfiniBand, something like this:
InfiniBand has three important characteristics that make it great as a back-end connection between storage controllers:
In the FlashArray//m, the controllers share an enclosure and talk to each other over a passive midplane. We’ve simplified the architecture to look like this:
We don’t use InfiniBand adapters, and instead connect the PCI Express ports on our processors directly, using a feature called Non-Transparent Bridging, or NTB for short. NTB lets each controller expose a subset of its memory to the other controller efficiently.
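NTB itself is a PCIe hardware feature, but the programming model it enables resembles ordinary shared memory: one controller writes into an exposed window, and the peer reads it directly, with no network stack in between. As a rough software analogy (not the FlashArray implementation), this sketch uses POSIX shared memory between two processes to stand in for the NTB window:

```python
# Software analogy for an NTB memory window: two processes (standing in
# for two controllers) share a small buffer. One writes a request; the
# peer reads it directly and writes a reply into the same window.
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory


def peer(name: str) -> None:
    # "Peer controller": attach to the exposed window and copy the
    # request bytes into the reply slot.
    shm = SharedMemory(name=name)
    shm.buf[4:8] = bytes(shm.buf[0:4])
    shm.close()


def main() -> None:
    # "Local controller": expose an 8-byte window and write a request
    # into the first 4 bytes.
    shm = SharedMemory(create=True, size=8)
    shm.buf[0:4] = b"ping"
    p = Process(target=peer, args=(shm.name,))
    p.start()
    p.join()
    print(bytes(shm.buf[4:8]).decode())  # prints "ping"
    shm.close()
    shm.unlink()


if __name__ == "__main__":
    main()
```

The key property this illustrates is that the "message" never traverses a transport protocol: the peer simply loads and stores memory, which is why a PCIe NTB link can deliver both high message rates and low latency.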
The hardware is simpler, but we maintained and improved the same three key characteristics of our interconnect:
Connecting controllers natively via PCI Express is just one way that the FlashArray//m hardware was purpose-built to enhance and extend the architecture of the FlashArray. Of course, we support a complete non-disruptive upgrade from existing systems to FlashArray//m by temporarily using an InfiniBand connection to the old controllers and seamlessly switching over to PCI Express between the new FlashArray//m controllers.