
Having had the privilege of serving as CEO for seven of the first 10 fabulous years, I wanted to take the opportunity to once again offer my heartfelt thanks to the Pure team, our customers, our partners, our analysts, and our investors: we could never have come so far, so fast, or so well without you! Working together, we have launched rockets into space, improved the efficacy of cancer treatment, delivered the compelling customer experience of Software-as-a-Service, brought the promise of autonomous driving to life, and much more. But just as importantly, today I want to offer some thoughts about optimizing IT for the road ahead, as well as share why Pure’s best years are still in front of us.

A decade ago, the Puritan team came together to fix storage. We were convinced then (and even more so now) that the legacy storage alternatives were antiquated: too slow, too complex, too fragile, and too expensive. After all, most of their storage product lines today are still encumbered by having been originally designed for mainframe- or client/server-era computing and for mechanical devices, rather than for the hybrid cloud and flash. By rethinking and modernizing the storage platform AND the business model, we were convinced Pure could empower customers to get more value from their data. Do that, plus grow the business one happy customer and one happy partner at a time, and we knew we could build the best storage company in the industry.

It is immensely gratifying that 10 years in, Pure has moved into the top five in worldwide market share, giving Pure one of the most auspicious first decades in tech industry history. Since we are growing 10X faster than any of our major competitors (most of whom are losing share), Pure is poised to rapidly increase our footprint. This makes us not only the innovative choice, but also the safest bet for enterprises looking to future-proof their data platform. In fact, we are redefining “safe bet” away from vendors perceived as viable simply because they have been around for a long time, and toward a vendor that is viable because it both operates at scale and invests to meet its customers’ future needs.

The road ahead promises to be even more exciting! There is no doubt that this is the most disruptive time in my nearly thirty years in the tech industry, affording great opportunities both for enterprises to accelerate innovation and for partners to leapfrog traditional competitors. However, finding an optimal path forward is complicated by so many choices. For your consideration below, I have crafted (no surprise) a top ten list of essential sea changes in IT that enterprises should bear in mind as they plan for the next decade. Naturally, I believe Pure’s customers and partners are uniquely well-positioned to take advantage of these trends.

Here we go: 

#10 Artificial Intelligence. With deep learning, programs write themselves from carefully curated datasets. The arbitrarily sophisticated pattern matching that results is a powerful new tool in the predictive analytics toolkit. Such neural networks are already trading stocks, finding bugs in software, increasing yields in agribusiness, and improving medical diagnosis (doctors, after all, cannot train on a million images). 

While I would love to say Pure anticipated the AI revival, the reality was more serendipitous: Pure’s architecture uniquely affords the high bandwidth, massive parallelism, and easy, elastic scaling demanded by deep learning. That is why today Pure FlashBlade™ supports some of the largest AI infrastructures in the world, often in partnership with NVIDIA. At Pure, we are ourselves deeply invested in machine learning with Pure1 META™, the artificial intelligence that powers Pure1®, our cloud-automated management platform.

If there is one lesson we have learned in supporting our customers’ AI journeys as well as our own thus far, it is that success depends upon being highly systematic about data life cycle, transformation, and labeling. Best practice in deep learning demands continuous improvement through automated iteration, and the learning that comes out is capped by the quality of the data going in.

(As an aside, congratulations to Geoff Hinton on his Turing Award win for helping build the foundation for deep learning. I had the good fortune to be a student in Geoff’s Intro to AI class many moons ago. He has been tenaciously pursuing a research agenda in neural nets for thirty-plus years in the face of plenty of naysayers, myself included, and it is wonderful to see his work vindicated.)

#9 Cloud. AWS, Azure, Google Cloud, et al. can be seen as the next-generation hardware platform, the successor to the mainframe, client/server, and Internet/Web waves of computing. In our view, the killer features of the cloud are ease of use and elasticity: the ability to quickly and cost-effectively scale up and down as needed. But cloud is not just the big three public clouds. Let’s not forget the long list of consumer and SaaS clouds (Apple, Facebook, ServiceNow, Workday), the Service Providers supporting broad customer needs, and the specialists in AI and ML (Core Scientific, Element AI). Public clouds, private clouds, and data centers are being put to work together in a multi-cloud approach to deliver service levels and economics never before possible.

With each new generation of computing, applications have been rewritten to take advantage of a new architecture. Yet application reengineering should only be undertaken for business benefit and never solely to re-platform. Application development is expensive and risky, and it takes time; witness how many enterprises continue to depend on their mainframes. The corollary is that, other than for specific use cases (see below), “lift and shift” of existing non-cloud apps to the public cloud tends to yield lower return on investment than native cloud development; in fact, quality of service can drop while costs increase.

Of the storage vendors at scale, Pure is by far the most aligned with the cloud: 

  • Nearly a third of our revenues are derived from cloud-native organizations: the larger consumer Internet and SaaS vendors who have found that specializing their own next-generation data centers to their applications’ specific requirements yields infrastructure that is faster, more reliable, and more economical than the public cloud. (Public cloud, then, often complements their in-house efforts for elastic use cases, or to cover geographies where their own data centers are not cost-effective.) Pure started by bringing a cloud-like automation, management, and consumption experience to the data center, empowering these customers to rapidly and cost-effectively scale mission-critical applications while simultaneously innovating in data.
  • Pure also provides software in the cloud. For example, Pure’s Cloud Block Store (now available through the AWS Marketplace and directly from Pure) provides high-performance block storage on flash in the public cloud with enterprise-grade availability and reliability. For many workloads, Cloud Block Store can even pay for itself, due to the much lower overhead of RAID (relative to mirroring) and data reduction (Pure averages nearly 5X); see the capacity sketch below. Last month at Accelerate, the Pure user conference, we were thrilled to have our partner AWS on the main stage, where they shared this: “With Pure’s Cloud Block Storage, Pure has done all the heavy lifting for you. You get to extend the amazing customer experience you’ve been having with Pure on-prem, right into the cloud.”
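To make that capacity math concrete, here is a minimal back-of-the-envelope sketch. The mirroring factor and RAID overhead below are illustrative assumptions (only the roughly 5:1 data-reduction figure comes from the text above); actual numbers will vary by workload.

```python
# Back-of-the-envelope: raw capacity consumed per unit of application data.
# The protection factors are illustrative assumptions, not measured values.

MIRROR_COPIES = 2.0    # assumed: mirrored block storage keeps 2 full copies
RAID_OVERHEAD = 1.25   # assumed: parity RAID adds ~25% capacity overhead
DATA_REDUCTION = 5.0   # ~5:1 average data reduction, per the figure above

def raw_tb_needed(logical_tb: float, protection: float, reduction: float = 1.0) -> float:
    """Raw TB consumed to store `logical_tb` of application data."""
    return logical_tb * protection / reduction

logical = 100.0  # TB of application data
mirrored = raw_tb_needed(logical, MIRROR_COPIES)                 # 200 TB raw
reduced = raw_tb_needed(logical, RAID_OVERHEAD, DATA_REDUCTION)  # 25 TB raw

print(f"Mirroring, no reduction: {mirrored:.0f} TB raw")
print(f"RAID + 5:1 reduction:    {reduced:.0f} TB raw")
print(f"Capacity advantage:      {mirrored / reduced:.0f}x")
```

Under these assumptions, the same 100 TB of application data consumes roughly 8x less raw capacity, which is where the “pays for itself” claim comes from.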

As our Chief Architect, Rob Lee, said during his Accelerate keynote: “Pure’s goal is both to make the enterprise data center more cloudy, and the cloud more enterprisey.”

#8 Disaggregation of compute & storage. In the early days of the cloud, hard drives were co-located with compute within the server chassis (the so-called DAS model). As compute and networking scaled out, shared storage, hampered largely by the constraints of mechanical disk, fell woefully behind. As a result, the cloud pioneers packed eight or even sixteen hard drives into each server chassis.

Today, thanks to innovations in cloud architecture and by companies like Pure, in combination with flash memory, dedicated networked storage is once again supplanting DAS (just as it did in the mainframe and client/server generations). By disaggregating compute and storage, each can be scaled and refreshed independently. AI, with its ever-evolving mix of traditional CPU (for model execution, data ingest/egress, and transformation) and highly parallel processing (GPU or equivalent for model training), has become the poster child for disaggregation!

Conversely, hyper-converged infrastructure (HCI), which grew out of the DAS model, finds itself increasingly mismatched with modern clouds: it trails disaggregated infrastructure in independent scaling, performance, CPU utilization, reliability, and cost. No doubt HCI will continue to have a sweet spot in remote and branch offices and other smaller deployments, but specialization wins at scale.

#7 Fast local area networking. Faster networks are making disaggregation possible. Ethernet, in its 47th year with no end in sight, now affords latency comparable to, and bandwidth greater than, the PCIe bus within a server chassis! The effect is to turn data center architecture inside out, with racks and pods replacing server chassis as the building blocks, enabling all compute and all storage to be equidistant from one another across the local network. As we have remarked before, finally the network is the computer.

Pure’s end-to-end NVMe takes full advantage of fast networking. Compared to DAS, perhaps the biggest benefit of NVMe over Fabrics is that offloading storage processing from hosts frees more of the CPU for applications, making the constrained server tier substantially more efficient. Moreover, networked storage can more than pay for itself via richer data services (consistent snapshots & replication groups) at lower cost (higher utilization, better compression, broader deduplication, and RAID replacing mirroring). This architecture gets even more compelling with Pure’s new DirectMemory™ shared storage-class memory (SCM) caching. Instead of installing SCM within each server, why not share it across all servers at NVMe speeds?

#6 Edge computing. While local area networks have gotten dramatically faster, wide area networks have not kept pace. Just as the cloud drives data center consolidation, the explosive growth in data relative to the slower growth in WAN capacity is forcing more compute and storage to the ‘edge’. By Cisco’s accounting, the world will produce roughly 50 zettabytes (ZB) of new data next year, perhaps twenty times more than can flow across the Internet. Data’s increasing gravity will ensure that most data produced in the cloud stays in the cloud, but also that most produced on the edge will stay on the edge, with only fractional subsets able to be sent to the cloud.
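As a quick sanity check of that ratio: the 50 ZB figure is the one cited above, while the Internet-capacity estimate below is an illustrative assumption on the order of the global IP-traffic forecasts of that era.

```python
# Sanity-checking the data-versus-network gap.
# The annual Internet traffic figure is an illustrative assumption.

ZB = 10**21  # bytes per zettabyte

data_created_per_year = 50 * ZB       # ~50 ZB of new data, per the figure above
internet_traffic_per_year = 2.5 * ZB  # assumed: a few ZB/year of global IP traffic

ratio = data_created_per_year / internet_traffic_per_year
print(f"New data is ~{ratio:.0f}x annual network capacity")  # ~20x
```

Whatever the exact figures, the gap is wide enough that most data must be stored and processed close to where it is created.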

Co-processing between cloud and edge or on-premises data centers, then, will be the norm for the foreseeable future, and the winning storage platforms will facilitate data sharing in hybrid clouds. To this end, with CloudSnap™, Pure was among the first to provide for the auto-migration of on-premises data to the cloud (on AWS and Microsoft Azure).

#5 Flash and Moore’s Law. With TLC and QLC flash (three and four bits per cell, respectively) combined with Pure’s best-in-class data reduction, flash memory is now poised to supplant slow, capacity-oriented disk, just as it supplanted performance hard drives before it: first in TCO (happening now) and then in procurement cost; see the rough model below. Flash provides a 10x advantage in reliability, power efficiency, density (racks down to rack units), and performance. Eventually, I believe flash will compete with the cost of tape (which, in my humble opinion, is likely to live longer in the data center than mechanical disk).
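The TCO-before-procurement-cost progression can be made concrete with a rough model. Every number below is a hypothetical placeholder (media prices, power draw, reduction ratios), chosen only to illustrate how flash can win on total cost of ownership even while raw flash still costs more per terabyte than raw disk.

```python
# Rough TCO sketch: flash vs. capacity disk over a five-year service life.
# All figures are hypothetical placeholders for illustration only.

def tco_per_usable_tb(raw_price_per_tb: float, data_reduction: float,
                      watts_per_tb: float, years: float = 5.0,
                      dollars_per_watt_year: float = 2.0) -> float:
    """Media cost plus power/cooling per usable TB over the service life."""
    raw_tb = 1.0 / data_reduction  # raw capacity needed per usable TB
    media = raw_price_per_tb * raw_tb
    power = watts_per_tb * raw_tb * dollars_per_watt_year * years
    return media + power

# Hypothetical inputs: disk is cheaper per raw TB but reduces poorly and
# draws more power; QLC flash is pricier raw but reduces ~5:1 and sips power.
disk = tco_per_usable_tb(raw_price_per_tb=25, data_reduction=1.2, watts_per_tb=10)
flash = tco_per_usable_tb(raw_price_per_tb=80, data_reduction=5.0, watts_per_tb=2)

print(f"Disk  TCO per usable TB: ${disk:.0f}")   # ~$104 under these inputs
print(f"Flash TCO per usable TB: ${flash:.0f}")  # ~$20 under these inputs
```

Under these placeholder numbers, flash wins on TCO by roughly 5x despite the higher raw media price, which is exactly the “first in TCO, then in procurement cost” progression described above.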

Again, Pure is at the forefront of these trends. Only Pure’s software was designed from inception to take advantage of the massive parallelism within each flash device. Several years ago, we began enabling our Purity software to speak directly to raw NAND, an architecture called DirectFlash™, achieved by engineering our own NVMe flash modules and moving flash management into software that operates globally instead of inside each SSD. DirectFlash is unique in the industry, and has proven to deliver higher parallelism, lower latency, more consistent performance, and greater reliability, all at lower cost (eliminating the 20-30% overprovisioning of a typical SSD). Pure’s innovations have enabled us to embrace new media far more rapidly (NVMe, TLC, and QLC, as well as innovations to come), increasing capacity and density while further cutting costs, all with the same proven Purity software.

#4 High availability without service interruption. If internet services from Amazon, Google, Facebook, Microsoft, or Apple go offline, it can make front-page news. SaaS and other enterprise applications are also increasingly (and justifiably) expected to be always on.

Pure has consistently delivered greater than 99.9999% FlashArray™ uptime to the applications we serve (that is less than 32 seconds of service interruption per year on average), and, unique to Pure, that is accomplished without maintenance windows. Pure is serviced and upgraded nondisruptively, for both software and hardware, and while it is fully loaded! We do not ask our customers to take their data offline for maintenance windows (and we think that vendors who do should count that downtime against their uptime; what good is six nines of availability if a vendor can demand multi-hour maintenance windows?)
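The 32-second figure follows directly from the definition of six nines; here is the quick math:

```python
# Six nines of availability, converted to allowable downtime per year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million seconds

availability = 0.999999                 # "six nines"
downtime_sec = SECONDS_PER_YEAR * (1 - availability)

print(f"Allowed downtime: {downtime_sec:.1f} seconds per year")  # ~31.6
```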

Crucial to delivering that quality of service is our Pure1 cloud automation, which dramatically reduces the complexity of managing tens or even hundreds of petabytes of storage. Pure1 utilizes predictive analytics and AI, derived from telemetry data on over 15,000 arrays monitored globally, to correlate, anticipate, and fix issues before they happen, preventing dozens of severity-one issues annually. With Pure1, customers are approaching a self-optimizing, managed service within their own data center, albeit one in which no configuration changes happen without their blessing.

#3 Evergreen™ subscriptions. Software-as-a-Service and public cloud are, of course, evergreen, in that, provided a customer maintains its subscription, the vendor is responsible for upgrading hardware and software, adding new features, and increasing performance, all without any disruption in service. Our belief is that in the future all on-premises infrastructure will be as evergreen as the cloud and SaaS are today.

Looking back on our first decade, Evergreen storage is one of the most disruptive (or perhaps non-disruptive, depending on your point of view ;-)) things that Pure has done thus far. It is also something that has proved nearly impossible for our competitors to copy. Vendors simply must protect customer investment, engineering in forward compatibility in perpetuity, so customers need never again face escalating maintenance fees for technology approaching obsolescence. And never again are customers forced to repurchase the same technology every few years and disruptively migrate their data; after all, storage should be subordinate to data, not the other way around. By avoiding those fight-or-flight decisions, happy Pure customers stay customers: Pure’s evergreen subscription covers new software and new hardware, all for flat and fair subscription fees. In fact, we have found that Evergreen increases existing customers’ willingness to invest in increased capacity and performance for existing configurations, knowing that they will never be dead-ended again. And it is this combination of technology and business model innovation, enshrined in our Evergreen program, that has enabled Pure to provide Storage-as-a-Service: 100% subscription pricing, across all of our products (cloud and on-premises), for those customers favoring cloud-like OpEx over CapEx.

#2 Self-driving operations. In the 21st century, no customer should be buying technology that comes with boxes of manuals requiring months of training before they are proficient. Public cloud has rightly reset the bar for enterprises’ tolerance of complexity in their own data centers. Such complexity is not just an impediment to innovation and an added cost; it imparts fragility and unintended consequences that lead to failures. The future no longer belongs to endless configuration options and tuning knobs, but rather to transparent capabilities that tune themselves to meet the enterprise’s needs.

Such simplicity needs to be engineered in at the start and very carefully safeguarded as a platform matures. The corollary is that it is nearly impossible to take a product that has grown complex over decades and make it simple. Thanks especially to our founder Coz, Pure has maintained our focus on making complex things supremely simple (in the early days, Coz’s young children were drafted to test-drive Pure installation and configuration with no manuals).

My guess is that Pure’s simplicity is the primary contributor to Pure’s industry-leading customer satisfaction. Since our early days, Pure has invested in third-party auditing of customer satisfaction via Net Promoter Score (NPS). Pure has consistently increased its certified NPS over the past four years, to an 86.6 rating today. This is a particularly satisfying response to some of our competitors, who have said things along the lines of “Pure is small and smart, but when they get big they will suck too.” Instead, Pure is only getting better.

I believe this simplicity is also the primary justification for my single favorite thing to hear from our customers: that Pure is quite simply the best technology they have ever brought into their data centers.

#1 Delivering a Modern Data Experience. The unit of deployment of application code used to be fairly large and, together with the data it utilized, was deployed in relatively monolithic silos. Today, the data used by applications has grown substantially larger, making it more expensive to store multiple copies for different applications. And as businesses have become more real-time, they need to access the most current data rather than a potentially stale copy. While data has grown in size, the unit of application deployment has shrunk: with the Internet and SaaS, finer-grained updates to the application logic can be deployed daily or even hourly. As a result, the picture is being turned inside out, with application logic increasingly subordinate to ever-larger databases. As Pure’s CEO Charlie Giancarlo is fond of saying, we call them “data centers” for a reason.

No doubt this shift in architecture places new demands on the underlying infrastructure: storage and compute must scale independently, requiring disaggregation (see above). The architecture and the pace of change also necessitate finer-grained application deployment models such as containers (Pure has embraced Kubernetes & OpenShift) and serverless models. Most of all, such data-centric architecture demands new database designs; as an entrepreneur, I have found it exciting to watch Snowflake’s early success competing directly with the cloud giants.

Ultimately, organizations are moving away from legacy storage solutions, where a multitude of specialty devices create copy-data sprawl and data silos. They are rejecting excessive system complexity and technical debt that hinder data infrastructure modernization efforts.

Pure turns fragmented and antiquated data storage into a unified, automated, multi-cloud data utility. Solutions are fast, flexible, multi-purpose, and easy to use, delivering a consistent experience no matter where or how data is utilized; this is what we call a modern data experience. Delivering that experience demands a new storage platform and a storage-as-a-service model, enabling organizations to extract more value from their data while reducing the complexity and expense of managing infrastructure.

Summing up. The center of gravity in the data center is shifting to data, and enterprises are in critical need of a far more effective data platform, particularly for predictive analytics including AI and deep learning. 

As we said at the outset of this blog, to date Pure has benefited hugely by leveraging these ten tech trends to deliver a more innovative data platform. But going forward, Pure now has the scale and global reach such that our innovations can help accelerate these transformations! (After all, having far more cash on hand than debt allows us to invest in research and development rather than paying interest on loans.) You can see this in Gartner’s most recent Magic Quadrant: since inception, we have set the bar for innovation, but during the last two years we have also led in ability to execute, and for 2019 that is across all storage, not just flash. Today, I am convinced Pure is both the most innovative and the safest choice in enterprise storage. Would you rather bet on a proven data platform whose best years are ahead of it, or one whose best years are well behind?

I am personally thrilled to have a front-row seat as Pure continues to execute on our mission to help enterprises maximize value from their data. I remain on the Pure board of directors, now as Vice Chairman, and continue to spend time with our customers, our partners, and most of all, Puritans worldwide, who are far and away the best in the industry. The next ten years and beyond hold great promise for all of us. It is an honor and a privilege to be part of your team.

Godspeed to the Orange!

-Dietz