The past year at Pure has been all about redefining the all-flash array market, delivering our cloud-era all-flash arrays: FlashBlade™ for unstructured data and the 100% NVMe FlashArray//X for structured data workloads.
With these platforms in place to drive innovation, our engineers now get the chance to flex their software muscles as we deliver the largest software launch in Pure’s history, bringing an unprecedented 25+ new features to market. With it, we redefine Tier 1 storage for modern cloud workloads, deliver native cloud integration, dramatically extend FlashBlade scale and performance, pioneer a new world of fast object, and deliver a new vision for self-driving storage!
The cornerstone of this launch is major updates to our flagship software, Purity, delivering Purity for FlashArray 5.0, and Purity for FlashBlade 2.0:
There’s a lot to share here, a ton of innovation delivered to Pure customers in our customary Evergreen fashion. This blog is the first in a 10-part series, giving you a broad overview of what’s new and the opportunity to click deeper into each new feature to learn more. In the weeks following Pure//Accelerate, we’ll publish deeper dive blogs from our engineers and architects exploring the 25+ new features in more technical depth. Enjoy!
Cloud-Era Flash: “The Year of Software” Launch Blog Series
Click, read, and watch…there’s a lot to learn and enjoy!
Pure’s vision is to help customers put their data to work, by delivering an end-to-end data platform that provides the effortless and scalable block, file, and object storage services necessary to run classic applications, test/dev, big data analytics, and modern webscale apps – all with the speed and efficiency of flash.
Our data platform is powered by software, Purity, delivered by our cloud-era flash arrays, FlashArray//X, FlashBlade, and our converged offering FlashStack, and is seamlessly managed by Pure1®, our SaaS-based management and support suite, made continuously smarter by Pure1 META AI.
In today’s announcements, we’re going to extend our vision in three important areas. We’ll start by defining a new vision for Tier 1 storage, delivering the highest reliability, but rethinking Tier 1 for the cloud era and ending the compromise between storage reliability and innovation. Second, we’ll lay out a vision for how Big Data transforms to Big Intelligence, as we all learn to harness real-time analytics, AI, and machine learning to gain insight from our massive data stores. And finally, Pure’s vision is to extend our simplicity to deliver truly self-driving storage, using advanced AI to take effortless to a whole new level.
Traditional Tier 1 storage has long been the “high ground” of reliability delivering 99.9999% availability and sophisticated (and complex) integrated options for metro and global replication and disaster recovery. Unfortunately, it’s also been expensive, inefficient, complex, and not necessarily the poster child for fast innovation. On the flip side, modern all-flash arrays have driven cloud-era storage innovation for the past five years, bringing data reduction, simplicity, performance, and more recently cloud integration and NVMe flash innovation. If you look at most of our competitors – their product lines make you choose between the highest reliability and modern innovation:
At Pure – we believe this is a false choice, and today we’re introducing a set of fully-integrated features which end the compromise and raise the bar on Tier 1 storage.
Tier 1 storage starts with reliability, so it is fitting that this blog post does too. We just passed the two-year anniversary of introducing FlashArray//M, and we’re proud to announce that FlashArray//M has now achieved two years of 99.9999% availability.
What’s more – FlashArray is Evergreen™, meaning that this availability is delivered not only through normal operations, but across software, hardware, and generational upgrades like the move to the new 100% NVMe FlashArray//X.
Synchronous replication and metro clustering remain the pinnacle of storage reliability, but unfortunately they are often also the pinnacle of expense and complexity. With signature Pure simplicity, today Pure is introducing ActiveCluster, a true active/active stretch cluster solution, fully integrated into Purity, and available at no additional cost as an Evergreen upgrade!
ActiveCluster redefines simplicity for metro disaster recovery, leveraging the Pure1 SaaS-delivered cloud mediator to remove the need for a third-site witness. With only 4 steps to configure and 1 new command added to Purity, this is a whole new level of easy! Learn more about ActiveCluster’s simplicity, management, and three-datacenter options in our deep dive blog.
Tier 1 arrays are designed for consolidation, and with FlashArray’s efficiency, we continue to see customers collapse multiple tiers of storage into a single, dense, efficient FlashArray. But not all workloads are created equal, and sometimes you want to define which workloads should get the highest I/O performance in times of contention. QoS has existed in the storage world for over a decade to solve this problem, but it’s been plagued with complexity; misconfiguring QoS can often cause more harm than good! In our previous Purity release we introduced Always-on QoS to protect against noisy neighbors, and in this release we extend QoS further with rich policy features:
Policy-Based QoS now adds the ability to configure performance classes, easily allowing for differentiation between Bronze, Silver, and Gold workloads. Policy limits also enable fine-grained control of performance on a per-volume basis, ideal for service provider and multi-tenant cloud deployments. Learn more about Policy QoS in our deep-dive blog.
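As an illustration of what a performance-class model looks like, here’s a minimal sketch in Python. The tier names echo the Bronze/Silver/Gold classes above, but the limits, field names, and fallback behavior are our own assumptions – this is not Purity’s API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QoSPolicy:
    name: str
    max_iops: int        # per-volume IOPS ceiling (invented values)
    max_bw_mbps: int     # per-volume bandwidth ceiling, MB/s (invented)

# Hypothetical tiers echoing the Bronze/Silver/Gold idea.
POLICIES = {
    "gold":   QoSPolicy("gold",   100_000, 2_000),
    "silver": QoSPolicy("silver",  50_000, 1_000),
    "bronze": QoSPolicy("bronze",  10_000,   250),
}

_volume_policy: dict[str, str] = {}   # volume name -> policy name

def assign(volume: str, policy: str) -> None:
    """Attach a QoS class to a volume (sketch of policy-based QoS)."""
    if policy not in POLICIES:
        raise ValueError(f"unknown QoS class: {policy!r}")
    _volume_policy[volume] = policy

def limits_for(volume: str) -> QoSPolicy:
    # In this sketch, unassigned volumes fall back to the lowest tier.
    return POLICIES[_volume_policy.get(volume, "bronze")]

assign("prod-db", "gold")
print(limits_for("prod-db").max_iops)   # 100000: gold ceiling applies
print(limits_for("scratch").name)       # bronze: unassigned fallback
```

The point of the sketch is the policy indirection: volumes reference a class, so re-tiering a workload is a one-line reassignment rather than per-volume limit surgery.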
VVols and VMware’s storage policy-based management (SPBM) promise a great leap forward in managing storage for vSphere-based cloud environments. VVols enable storage arrays like FlashArray to have per-VM visibility, enabling our powerful snapshot, replication, QoS, and migration technologies to work on a per-VM level of granularity. But despite the clear value, it’s safe to say that industry adoption of VVols has been slow, due mostly (in our opinion) to the poor and complex support for VVols in the storage industry. It’s time to experience VVols on Pure.
Once again, it’s all about simplicity. Purity now embeds the VASA Provider to enable VVols right on the array, making it natively highly available. VVols can be easily implemented in under 5 minutes, FlashArray makes VMFS → VVol migration nearly instant, and VVols opens up some really exciting options for making VVol data accessible in and out of vSphere. Learn more and see a demo on Cody Hosterman’s deep dive blog.
This one’s a biggie! Purity already has super-powerful, space-efficient snapshots, which customers make heavy use of for backup, recovery, and test/dev use cases. And Purity’s built-in replication enables snapshots to be used for disaster recovery as well. But customers have increasingly asked for the ability to extend Purity’s snapshots for lower-cost offsite retention – so now Purity snaps extend seamlessly to a host of new off-FlashArray destinations, all managed with Purity’s protection policies.
To make this magic happen, Purity Snap now includes a new Portable Snapshot format, which is a snapshot that embeds its recovery metadata. Snaps can be moved to FlashBlade, as well as any generic NFS target, in case you have an old NetApp filer or Data Domain target you want to get some more use out of. Seamless management, simple recovery – it just works. Additionally, and perhaps most excitingly, we’re also introducing CloudSnap – the ability to take these snapshots to the Public Cloud to enable backup, DR, migration, and development use cases. And finally, we’ve created an open DeltaSnap API enabling a rich ecosystem of best-of-breed heterogeneous data protection vendors to integrate natively with Purity to manage and move our snapshots while preserving space efficiency. There’s SO MUCH goodness here that we can’t nearly do it justice in this blog – read our deep dive to learn more.
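The key idea of the Portable Snapshot format – recovery metadata that travels with the snapshot – can be sketched in a few lines. To be clear, the layout and field names below are our own illustration, not Purity’s actual on-disk format:

```python
import hashlib
import json
import time

# Sketch of the Portable Snapshot *idea*: the snapshot carries its own
# recovery metadata, so it can sit on FlashBlade, a generic NFS target,
# or cloud object storage, and still be interpreted without the source
# array. Layout and field names are illustrative.
def package_snapshot(volume: str, data: bytes) -> bytes:
    meta = {
        "volume": volume,
        "taken_at": time.time(),
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    header = json.dumps(meta).encode()
    # 4-byte length prefix, then metadata, then the snapshot payload.
    return len(header).to_bytes(4, "big") + header + data

def recover_snapshot(blob: bytes) -> tuple[dict, bytes]:
    hlen = int.from_bytes(blob[:4], "big")
    meta = json.loads(blob[4:4 + hlen])
    data = blob[4 + hlen:]
    if hashlib.sha256(data).hexdigest() != meta["sha256"]:
        raise ValueError("snapshot payload corrupted in transit")
    return meta, data

blob = package_snapshot("vol1", b"block data")
meta, data = recover_snapshot(blob)
print(meta["volume"], len(data))   # vol1 10
```

Because everything needed for recovery is inside the blob, the target can be a dumb file or object store – which is exactly what makes NFS and cloud destinations possible.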
We’re NOT going to call it hyper-converged…but you might. Purity Run is all about creating an open development platform – enabling our customers to integrate FlashArray into their webscale architectures in new and exciting ways – we can’t wait to see what you do with it!
What if you wanted to implement a custom storage/messaging protocol so your application could talk to FlashArray in an optimized fashion? What if your Edge device wanted to analyze IoT data locally before transferring it home? What if you wanted to execute the analytics micro-services in your data pipeline right next to storage for better performance? What if your storage array could snap-in directly to your Docker/Kubernetes environment to host containers itself? All this and more is possible with Purity Run. Run delivers a dedicated set of CPU and Memory resources, with complete security and performance isolation, to run your custom VMs and containers right on FlashArray in a highly-available architecture. Developers, FlashArray is now open to you! Learn more about Purity Run in our deep dive blog.
Purity Run allows both you and us to extend FlashArray in interesting ways. A frequent request we get is for file consolidation onto FlashArray where often customers have a primarily-block deployment, but have some file data that they’d like to bring onto the platform. As we surveyed the market, the most robust, feature-rich and simplest SMB implementation out there was Windows File Server, so in partnership with Microsoft we brought WFS to FlashArray.
Let’s be clear about this – our large-scale NFS offering is FlashBlade – and you can read more about exciting FlashBlade developments below. But for consolidated/unified deployments, WFS adds mature file services to FlashArray, and snaps right into the Microsoft management ecosystem. WFS for Purity is fully-supported by Pure, and is an ideal solution for file sharing, home directories, collaboration, and VDI user files, and enables you to leverage your existing Microsoft Windows Server license agreement. Learn more about WFS for Purity in our deep dive blog.
We made a splash last month, introducing FlashArray//X and our software-defined DirectFlash Modules, bringing to market the industry’s first 100%-NVMe AFA for mainstream deployments. The excitement around FlashArray//X has been overwhelming, as it not only ups the bar for performance in Tier 1 enterprise workloads, but also enables targeting webscale DAS flash workloads with top-of-rack deployments. As we launched //X, we got a few persistent questions: how do you expand an //X beyond the chassis, and what about using NVMe/F to connect to hosts? Well, today we’re providing those answers.
First, we’re announcing the DirectFlash Shelf (DFS). DFS extends our NVMe architecture to expansion shelves, which are connected to the //X chassis over native NVMe/F 50 Gb/s Ethernet. DFS delivers 28 DirectFlash Modules in a 3U expansion chassis, providing up to 512 TBs of raw flash (1.5 PBs usable) in 3U when using our 18.3TB DirectFlash Module. Now that’s dense! DFS will be available in Q4, but you can start the NVMe journey with FlashArray//X today!
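For the math-checkers, the quoted density figures line up; a quick arithmetic check on the numbers above (the 1.5 PB usable figure additionally assumes Purity’s data reduction, which simple multiplication doesn’t model):

```python
# Back-of-envelope check on the DirectFlash Shelf figures above.
modules = 28                 # DirectFlash Modules per 3U shelf
module_tb = 18.3             # largest DirectFlash Module, TB raw
raw_tb = modules * module_tb
print(round(raw_tb, 1))      # 512.4 -> the "up to 512 TB raw" figure
print(round(raw_tb / 3, 1))  # 170.8 TB raw per rack unit
# The 1.5 PB "usable" figure additionally assumes Purity's data
# reduction (roughly 3:1 on this raw capacity), not modeled here.
```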
And at //Accelerate we’re also doing a joint demo with Cisco, showing end-to-end NVMe, connecting a Cisco UCS Server via NVMe-over-Fabrics leveraging RoCEv2 at 40 Gb/s to a FlashArray//X. This provides an all-NVMe path from server to shared flash chips.
NVMe/F is still a relatively young standard, so we haven’t yet announced timelines for GA productization of this solution, but you can see it’s working, so stay tuned.
We’ve seen a ton of excitement from our cloud service provider and SaaS customers about this combination…removing the pain and expense of server-DAS flash and graduating to top-of-rack storage. Imagine building your cloud by stamping out a rack with this level of density, providing all the scalable block, file, and object services your applications need…we have customers doing this today:
So there you have it – that’s our vision for the new Tier 1 – all the innovation you need to build tomorrow’s cloud-scale applications, and modernize your classic applications, while taking reliability and simplicity to a new level.
We’re delivering all of this as a set of Evergreen upgrades to our FlashArray customers in 2017, and we’re including it at no additional cost. See below for detailed information on delivery timelines, as these features will roll out across a set of Purity//FA 5.x releases.
And that’s only the first third of what we have to share…read on for FlashBlade and Pure1!
We’re in the midst of one of the most exciting times in the world of data. We’re just beginning to learn how to take advantage of data at scale and to make analytics real-time, and out of nowhere AI and ML-based approaches now promise to stretch what’s possible beyond the levels of human cognition. An exciting time, and a time where the insights that are possible with Big Intelligence are inextricably tied to storage (storage size, speed, and efficiency).
What’s universal about this new world of big data and big intelligence is that massively parallel is the new norm.
These new data applications are built on a massively parallel architecture, taking advantage of multi-core CPUs and GPUs, but they’re being starved by a previous generation of storage. THIS is why we built FlashBlade.
We first introduced FlashBlade at //Accelerate last year, and what an incredible year it’s been, a year of changing what’s possible with data. FlashBlade now powers leading AI supercomputers, enables the design of the fastest race cars, underpins massive webscale architectures, and helps design, simulate, and run next-generation planes, trains, automobiles, and rockets.
What’s been universal about these use cases – they need more…more performance, more scale, and more flexibility. So today we’re announcing a major software update to FlashBlade, Purity for FlashBlade 2.0, significantly exceeding even our own expectations.
FlashBlade’s rapid adoption surprised us…both in terms of the exciting use cases and the speed with which customers asked us for more. Our 1.0 FlashBlade release supported true linear scale-out to 15 blades; this release delivers 5X that!
The magic of FlashBlade is its true linear scale – adding blades improves capacity, I/O performance, metadata performance, and host connectivity bandwidth. So in this release the stats go from impressive to unbelievable: 8PBs in a single namespace, 75 GB/s read, 25 GB/s write, and 7.5M IOPS.
If you remember last year’s launch where we promised 1M NFS Ops, you’ll see that I/O performance has actually grown more than 5X, as we’ve both scaled-out and exceeded our initial performance expectations via software improvements. A new 100 Gb/s integrated and fully-managed software-defined fabric delivers this new level of scale, and don’t worry, in case you don’t have super-wide racks, it’s available in standard rack-mount deployments as 5 unified 15-blade chassis:
And remember, FlashBlade is truly simple scale-out…you can expand one blade at a time, starting with 7 blades and growing seamlessly to 75. Learn more about 75-blade scale in our deep dive blog.
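To put the scale-out claims in concrete terms, here’s the back-of-envelope arithmetic on the published figures above:

```python
# The arithmetic behind the Purity//FB 2.0 scale claims above.
blades_v1, blades_v2 = 15, 75
print(blades_v2 // blades_v1)        # 5 -> the "5X that" scale-out claim

# Aggregate figures quoted for a full 75-blade system:
read_gbps, write_gbps, iops = 75, 25, 7_500_000
print(read_gbps / blades_v2)         # 1.0 GB/s of read per blade
print(iops // blades_v2)             # 100000 IOPS per blade

# Versus last year's 1M-Op promise:
print(iops / 1_000_000)              # 7.5 -> the "more than 5X" I/O growth
```

Linear scale is the point: the per-blade figures stay constant as blades are added, so the aggregate numbers are simply blade count times per-blade performance.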
Cloud storage is primarily object storage; we’re teaching our next generation of developers to think “object first” when designing anything. But to date, object storage has also been really slow….optimized for cheap, deep, and slow use cases. No more…FlashBlade is now introducing Fast Object – ready for a whole new set of high-performance data and cloud use cases.
When we say fast, we mean really fast. It’s over 10X faster than cloud object stores, and one of our webscale beta customers found that it was 100X faster than their on-prem disk-based object store at indexing 1M of their image objects.
Part of what makes FlashBlade Object fast is that FlashBlade was born for object, literally…FlashBlade is built on an ultra-scalable back-end object store itself, and now both our File (NFS/SMB) and Object protocols are peers sharing the native object store, not nested on top of one another as a retrofit.
Developers – this one’s for you. Learn more about FlashBlade Object in our deep dive blog.
In addition to the exciting large-scale simulation, analytics, webscale object, and AI/ML use cases we profiled above, we’ve continued to see FlashBlade adoption across a wide set of more mainstream use cases, including healthcare PACS imaging, large-scale software development, financial analysis, data warehousing, media/game production, and even police body cameras!
Key to broadening FlashBlade’s use cases is adding more traditional “enterprise” features that IT admins expect. In the Purity//FB 2.0 release, we’ve introduced a whole set of these features:
Native SMB and HTTP support extends protocol support, IPv6 enables modern networking, NLM enables NFS access for clustered applications, LDAP simplifies secure administration, and most importantly, snapshots enable quick backup/recovery of file systems.
5X scale, 5X performance, Fast Object, and Enterprise feature maturation…that’s Purity for FlashBlade 2.0!
We’re experiencing rapid technology advances in our personal lives as automobiles become more autonomous, and we think there’s an exciting parallel vision for how AI technology can come to the world of storage.
If you look at the journey to building a self-driving car, there are three important things to realize:
The parallels to storage here are clear. First – we’re already well down the path of autonomous storage…and at Pure we started on this journey from day 1 by simplifying everything, removing user interactions, and having our arrays make decisions on their own. Second, the hard part of being a storage admin isn’t managing the array (at least if you’re a Pure customer!), it’s dealing with the world around the array in terms of changing and unpredictable workloads. And third, Pure, as a global storage vendor, has the ability to see and understand workloads across our entire user base…can we learn from everyone’s workloads to help better understand and optimize your workload? We think so.
It’s with that backdrop that today we’re introducing Pure1 META – Pure’s AI engine, providing global predictive intelligence to help better manage, automate, and support storage.
Pure1 META starts with, and is built on, data. For years now, Pure has taken our role as an IoT company seriously. We’ve built a global network of 1,000s of connected arrays that have been delivering performance and operational metadata back to Pure for years, and we’ve been hard at work making that data useful to customers, and to our Engineering team to build better products. Pure1 META now collects over 1 trillion data points/day, constantly feeding our data lake.
Last year we introduced Predictive Support, using Issue Fingerprints to constantly scan our global user base for issues in real time.
Our goal was to find problems upstream before they became real issues, and to automatically detect changes to the environment which could introduce problems. Since launch, we’ve detected and prevented over 500 potential Sev1 incidents, and our fingerprints continue to get smarter and smarter. Today, these fingerprints are created by humans and are based on known issues. That’s today. Pure1 META’s AI engine opens up the next part of this journey, enabling ML intelligence to find problems and create fingerprints based upon issue and data correlations that may not be perceptible to humans…that’s a little bit of what’s coming tomorrow.
But the really exciting part of Pure1 META is the opportunity to better understand workloads, both in terms of capacity and performance. For years, performance sizing has been a black art. Actually, that may overstate it…it’s been more like a finger in the wind.
Performance sizing is nearly impossible, as there are tons of variables to understand (IOPS, bandwidth, latency, IO mix, locality, reducibility…just to name a few). It’s practically an impossible problem for a human, so humans have been left to do one of two things: dramatically over-size storage (the usual approach, leading to lots of wasted spending), or accidentally undersize storage, leading to performance contention or even downtime.
So if I have an array and I want to know how many workloads of a certain type will “fit” on it…how do I do that? This turns out to be a PERFECT problem for machine learning. We turned Pure1 META loose on our database of 100,000s of individual workloads, looking at all the performance data we collect within Pure1 – more than 1,000 performance measures and variables. Would IOPS be most important? Bandwidth? Latency? Pure1 META then created a workload performance signature using all these variables, a concept that we call Workload DNA:
Workload DNA can be used to understand “fit” of various workloads on various storage arrays…both today, and over time as they grow. How will the workloads on my array grow? Will I run out of performance or capacity? Will a new workload fit on the array? These are all questions that Pure1 META can answer, today.
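To make the idea concrete, here’s a toy sketch of comparing workloads by their performance signatures. To be clear: the feature names, numbers, and similarity metric below are all our illustrative assumptions – Pure1 META’s actual model and its >1,000 variables are not public:

```python
import math

# Toy sketch of the "Workload DNA" idea: reduce each workload to a
# vector of performance features, then compare workloads by similarity.
# Feature names and values are invented for illustration.
FEATURES = ["iops", "bandwidth", "latency_ms", "read_pct", "reducibility"]

def signature(metrics: dict) -> list[float]:
    return [float(metrics[k]) for k in FEATURES]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

oltp = signature({"iops": 90_000, "bandwidth": 700, "latency_ms": 0.4,
                  "read_pct": 70, "reducibility": 4.2})
oltp2 = signature({"iops": 80_000, "bandwidth": 650, "latency_ms": 0.5,
                   "read_pct": 68, "reducibility": 4.0})
backup = signature({"iops": 2_000, "bandwidth": 2_500, "latency_ms": 8.0,
                    "read_pct": 5, "reducibility": 1.3})

# Similar workloads cluster: the two OLTP signatures are far closer to
# each other than either is to the streaming backup workload.
print(cosine(oltp, oltp2) > cosine(oltp, backup))   # True
```

With signatures like these over a global database, “will this workload fit on that array” becomes a lookup against how similar workloads actually behaved – which is the fit question Workload DNA is built to answer.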
We’re busy plumbing Pure1 META into the fabric of how Purity operates, and you’ll see us take advantage of it in many ways. The first is a new Pure1 tool called Workload Planner:
Don’t be fooled by Workload Planner’s deceptively simple UI – there’s a lot going on underneath. Workload Planner lets you predict your workload’s growth over time by simply dragging a slider. Underneath, Pure1 META is using Workload DNA to understand each of your workloads, using information from our global database to make the most accurate projections possible on performance and capacity growth. And every day with every new workload it understands, it gets smarter.
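At its core, the “will it fit, and for how long” calculation can be sketched in a few lines. Here a simple straight-line growth model stands in for META’s Workload DNA-driven projections, which are far richer:

```python
# Minimal sketch of a capacity-headroom projection, with a linear
# growth model standing in for Pure1 META's predictions.
def months_until_full(used_tb: float, growth_tb_per_month: float,
                      capacity_tb: float) -> float:
    """Months of headroom left at the current growth rate."""
    if growth_tb_per_month <= 0:
        return float("inf")   # flat or shrinking workload never fills up
    return (capacity_tb - used_tb) / growth_tb_per_month

# A 50 TB array, 30 TB used, growing 2.5 TB/month (invented numbers):
print(months_until_full(used_tb=30, growth_tb_per_month=2.5,
                        capacity_tb=50))   # 8.0 months of headroom
```

Dragging Workload Planner’s slider is effectively varying the growth input to a projection like this one, with META supplying the growth curve instead of a constant.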
Pure1 META is the beginning of a very exciting journey towards self-driving storage. By truly understanding workloads we can predict, we can take action, we can optimize…we can make storage more reliable and truly effortless.
Pure continues to strive to make Pure1 data useful for customers in exciting ways. As our user base grows, customers now have 10s to even 100s of Pure arrays. Today we’re announcing the new Global Dashboard, which aggregates capacity and performance information across your entire all-flash fleet for global visibility.
Global Dashboard, like all of Pure1, is SaaS delivered…so it’ll soon just be there when you log in to Pure1.
So hopefully by now you get a sense for why this is our largest software release ever at Pure, by a long shot. Today we’re announcing features that are shipping over the next six months as part of our Purity//FA 5.x and Purity//FB 2.x release trains, which means that features will roll-out throughout the year. Here’s a comprehensive list of all the features, and when delivery is expected:
As always, feature timing and specifications are subject to change, and some features will be released as DA (Directed Availability) prior to full GA release.
Thanks to all of our customers who join us on this journey – we look forward to working with you on these features in 2017 and beyond, continuing to deliver rapid innovation in an Evergreen fashion!