NVMe-oF Support is now Released!

Today Pure Storage officially released NVMe-oF support on FlashArray, another important step forward with NVMe.

Prior to this, we made a few incremental, but important, improvements in the product to add full end-to-end support of NVMe:

  1. Converting our NVRAM devices to use NVMe
  2. Moving off of SSDs and adding custom NVMe-based flash modules in the FlashArray//x chassis
  3. Adding support for drive shelves that are connected via NVMe-oF
  4. Officially releasing NVMe-oF support for front-end workloads

This has been a multi-step process, and we put a great deal of effort into making sure we fully took advantage of what NVMe has to offer.

First off, for those new to the concept of NVMe, here are a few places to get more info:

Craig Waters’ vBrownbag on NVMe:

Or some other useful documents:

https://nvmexpress.org/wp-content/uploads/NVMe_Over_Fabrics.pdf

https://www.purestorage.com/resources.htmltype-a/nvme-of-for-noobs.html

https://support.purestorage.com/FlashArray/FlashArray_Hardware/94_FlashArray_X/FlashArray_X_Product_Information/NVMe-oF_Overview

What we are initially supporting is RoCEv2, which stands for RDMA (Remote Direct Memory Access) over Converged Ethernet, version 2. That is a mouthful, but in short it means we are supporting NVMe-oF over Ethernet. We are certainly planning to work on the other transports, Fibre Channel and TCP, but we led with RoCEv2. For more insight into that choice, see Roland’s post here:

Pure Delivers DirectFlash Fabric: NVMe-oF for FlashArray
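
To make this concrete, here is a minimal sketch of what connecting a Linux host to an NVMe-oF target over RoCEv2 can look like, driving the standard nvme-cli tool from Python. The IP address and NQN below are placeholders for illustration, not real FlashArray values, and this assumes the nvme-rdma kernel module and nvme-cli are installed:

```python
import subprocess

TARGET_IP = "192.168.1.100"  # placeholder: the array's RoCEv2-capable port
TARGET_NQN = "nqn.2010-06.com.purestorage:flasharray.example"  # placeholder NQN

# Load the RDMA transport for the host-side NVMe stack.
subprocess.run(["modprobe", "nvme-rdma"], check=True)

# Discover what the target exposes, then connect to the subsystem
# (4420 is the standard NVMe-oF service port).
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", TARGET_IP, "-s", "4420"], check=True)
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                "-a", TARGET_IP, "-s", "4420"], check=True)

# The namespaces now show up as regular /dev/nvmeXnY block devices.
subprocess.run(["nvme", "list"], check=True)
```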

What about VMware?

Many of you probably came to this blog wondering what this means for your VMware environment today.

Well, today: nothing. VMware currently does not support NVMe-oF, in any form.

But they ARE working on it. As you may or may not have seen, we demoed support for this with ESXi at VMworld US in 2018. See the video here:

But you might say, “But I’ve seen NVMe stuff in vSphere today!” This is true, but it is not end-to-end support. In this post:

https://blogs.vmware.com/virtualblocks/2018/08/20/nvme-readiness-part-three/

VMware talks about their virtual NVMe adapter.

But this only presents a virtual NVMe adapter to the guest; under the covers, ESXi translates those NVMe commands back to SCSI for VMFS operations.
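
To make “virtual NVMe adapter” concrete, here is a hedged pyVmomi sketch that adds one to an existing VM. It assumes you already have a connected ServiceInstance and a vim.VirtualMachine object named vm (on a recent enough virtual hardware version); the bus number and temporary device key are arbitrary illustration values:

```python
from pyVmomi import vim

def add_vnvme_controller(vm, bus_number=0):
    """Attach a virtual NVMe controller to an existing VM.

    The guest sees an NVMe controller, but ESXi still speaks SCSI
    to VMFS underneath, which is exactly the gap described above.
    """
    ctrl = vim.vm.device.VirtualNVMEController()
    ctrl.busNumber = bus_number
    ctrl.key = -101  # temporary negative key; vCenter assigns the real one

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.device = ctrl

    spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
    return vm.ReconfigVM_Task(spec=spec)  # a vim.Task you can wait on
```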

Search engine sleuths might also find this:

https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-E4ECDD76-75D6-4974-A225-04D5D117A9CF.html

Wow! A VM RoCE v2 adapter! Perfect! Maybe not VMFS, but I can leverage RoCE v2 with my VMs for some high-performance workloads!

Sorry, not quite, either. This is actually only supported for what RDMA originally meant: accessing the memory of another VM. VMware engineering has confirmed this will not work with external storage at this time.

What about in-guest initiators, much like the in-guest iSCSI that I know some of you out there run? Well, maybe. This is not something we have put much time into testing, and frankly, I have not tested it at all. But I suppose it is possibly an option for the eager.
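
If you do experiment with an in-guest initiator, a quick sanity check is whether the guest’s NVMe controllers actually arrived over a fabric transport rather than virtual PCIe. Here is a small sketch, assuming a Linux guest with the standard NVMe sysfs layout and nothing Pure- or VMware-specific:

```python
from pathlib import Path

# Each controller under /sys/class/nvme exposes its transport
# ("pcie", "rdma", or "tcp") and subsystem NQN as plain text files.
for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    transport = (ctrl / "transport").read_text().strip()
    subsysnqn = (ctrl / "subsysnqn").read_text().strip()
    kind = "fabric-attached" if transport != "pcie" else "local"
    print(f"{ctrl.name}: transport={transport} ({kind}) nqn={subsysnqn}")
```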

In short, we need VMware to add driver support for VMFS, as shown in the earlier video. VMware is certainly working on this, as evidenced by all of the above-mentioned work they have done around NVMe. And Pure is certainly working closely with VMware on this.

So keep an eye on my blog! As soon as it is ready, I will absolutely shout it from the proverbial rooftops.

What about Now?

Of course, there are a lot of solutions you can leverage with this today. Check out some of our other blog posts for more info:

  • https://blog.purestorage.com/analyzing-the-possibilities-of-mariadb-and-directflash-fabric
  • https://blog.purestorage.com/oracle-18c-runs-faster-with-directflash-fabric-nvmeof-roce
  • https://blog.purestorage.com/directflash-fabric-vs-iscsi-with-epic-workload