The launch of DirectFlash™ Fabric is an exciting milestone in Pure’s journey with NVMe. At Pure, we recognized early that NVMe was key to unleashing the full potential of flash, and we designed support for it into the first FlashArray//M. FlashArray//X and DirectFlash technology then proved the revolutionary advantages of NVMe over SAS and SATA SSDs.
As I explained in this blog post, the clear next step is to let servers use native NVMe over Fabrics (NVMe-oF) to access FlashArray™, driving even more storage performance and efficiency. With Purity 5.2, NVMe-oF over RoCE is available today on FlashArray//X. This brings FlashArray to customers who are using cloud architectures to build SaaS or other online applications. Previously, these applications might have been forced to use direct-attached storage (DAS) because:
Unfortunately, using DAS means giving up the benefits of external shared storage:
25G and 100G Ethernet switches, along with a crop of NICs from Broadcom, Cisco, Marvell, and Mellanox that support hardware-offloaded RDMA over Converged Ethernet (RoCE), have come together to enable high-performance, enterprise-class storage on Ethernet fabrics. NVMe-oF runs efficiently on RoCE, bringing the proven performance of NVMe to a standard the storage industry can agree on. We’ve seen incredible performance advantages for applications when comparing NVMe/RoCE to iSCSI on the same fabric. For the applications we’ve tested, we see performance comparable to DAS, but with enterprise data services!
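If you want to run this kind of comparison yourself, a simple approach is an fio job against the block device each protocol presents to the host. The sketch below is a minimal, latency-oriented job file, not the workload we used for our measurements; the device path is a placeholder for whatever device your iSCSI or NVMe/RoCE LUN appears as.

```
[global]
ioengine=libaio    ; async I/O so iodepth is honored
direct=1           ; bypass the page cache to measure the fabric, not RAM
rw=randread
bs=4k
iodepth=32
runtime=60
time_based=1

[fabric-lun]
filename=/dev/nvme0n1   ; placeholder: substitute your device under test
```

Running the same job file against the same volume over iSCSI and over NVMe/RoCE isolates the transport as the only variable.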
The applications we’re accelerating are usually built on Linux, with both NoSQL and SQL databases. The NVMe standards body has collaborated effectively with the Linux community, bringing open-source driver support to the Linux kernel on the same day that the standard was released (with a little hardening help from Pure, for example this Linux kernel commit).
I have worked on the Linux RDMA stack since before git was invented (this is one of the earliest commits I could find, with an amusing changelog), and I’ve enjoyed seeing the foundations we laid enable a cutting-edge technology like NVMe/RoCE to get Linux support so quickly. Even enterprise Linux distributions such as Red Hat Enterprise Linux and SUSE Linux Enterprise Server ship the NVMe/RoCE drivers.
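On a Linux host with these in-box drivers, attaching to an NVMe/RoCE target is a short nvme-cli session. This is an illustrative sketch rather than FlashArray-specific documentation; the address and NQN below are placeholders you would replace with your array’s values, and 4420 is the standard NVMe-oF port.

```
# Load the NVMe over RDMA transport (shipped in-box with RHEL and SLES)
modprobe nvme-rdma

# Ask the target which subsystems it exposes (placeholder address)
nvme discover -t rdma -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder shown)
nvme connect -t rdma -a 192.0.2.10 -s 4420 -n nqn.2010-06.com.example:subsystem1

# The remote namespaces now appear as ordinary /dev/nvmeXnY block devices
nvme list
```

From the application’s point of view, the resulting devices look just like local NVMe drives, which is what makes the DAS-to-shared-storage transition transparent.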
NVMe-oF supports multiple transports – in addition to RoCE, there are standards for NVMe over both Fibre Channel and TCP. I’m often asked why Pure chose to implement NVMe/RoCE. It’s important to say that Pure chose to implement NVMe/RoCE first – we definitely plan to implement NVMe/FC and NVMe/TCP in the future. We chose to focus our efforts on delivering our RoCE product first because it allows FlashArray to solve problems for customers who otherwise couldn’t consider FlashArray. It’s also the protocol where operating systems and applications are ready now.
However, we’re definitely not trying to declare that “Fibre Channel is dead.” The differences between NVMe/FC and SCSI-FCP are not as extreme as those between NVMe/RoCE and iSCSI, because FC HBAs already offload the data movement for SCSI-FCP, but NVMe still has the advantages of a lighter stack and multiple queues. Still, Fibre Channel is a more conservative market, with requirements for broad application support and for operating systems beyond Linux. For example, VMware has not released NVMe-oF drivers yet (although we did demonstrate NVMe/RoCE with VMware at VMworld in 2018).
For anyone looking for Fibre Channel storage, I firmly believe FlashArray//X is the best solution available, and I want to make sure it stays that way. The FlashArray//X hardware we’re shipping now is ready for NVMe/FC (in fact, we’re running NVMe/FC prototypes in our engineering lab), so when Purity ships with NVMe/FC support, enabling it will require only a software upgrade. While we believe it is still early for NVMe/FC, we are hard at work on it, and I’m really looking forward to writing a blog post telling you more.