Recently I shared how EMC’s entry into the All-Flash Array market brought clarity to the market and would likely accelerate the adoption of All-Flash Arrays (AFAs). Most readers found the post accurate and a fair representation of how the market is divided between performance-focused and storage-efficient AFA architectures.
The performance capabilities and datacenter/environmental benefits of All-Flash Arrays are widely understood. As such, I felt it appropriate to classify and clarify some of the key differences between performance-focused and storage-efficient AFA architectures. My efforts are reflected in the chart below.
(Replication for Pure Storage FlashArrays will be available in beta for customers on Purity O.E. 3.3)
With the ability to compare and contrast the AFA market, it is easy to identify the performance-focused and storage-efficient platforms. What may be more interesting are the areas where these platforms align and diverge in capabilities. For example…
- MLC-based NAND is used in all of the platforms.
- As one would expect, the storage-efficient arrays provide significant increases in usable storage capacity.
- Most of the AFAs are locked hardware configurations and can only tolerate single SSD failures.
- There’s little market consistency in areas of data and operational management features.
- Half of the systems require external infrastructure elements like UPS and InfiniBand networks.
I would like to be very clear: this is my attempt to have a substantive conversation around the capabilities inherent in a number of All-Flash Arrays. I do not claim to be an expert on any AFA outside of the Pure Storage FlashArray. The information in this chart was obtained from publicly available content provided by Violin Memory, IBM, EMC and Pure Storage. I will update and/or correct any misinformation as long as the revised data was produced by the AFA vendor and can be publicly referenced.
Note: I had to guesstimate what is possible with each array. Admittedly, this is likely less than what any vendor would prefer to see published. My apologies.
This post first appeared on The Virtual Storage Guy.