
Ten years ago, the founders of Pure Storage launched our company, and from the start our mission has been clear: remove complexity and make storage simple, full stop. Throughout the decade, our Pure1® predictive, proactive, AI-driven platform has been a critical enabler of this simplicity, giving clients and partners a simple SaaS-based management toolset that is consistent across the entire portfolio.

Six weeks ago we “painted Austin orange” at //Accelerate 2019, and I am still feeling the buzz from the event! While I was down there, I had the privilege of taking the main stage and sharing some exciting announcements around Pure1®. Specifically, how it helps our customers simplify the management of their data estate, shed technical debt, and prepare for the next level of storage optimization in the multi-cloud world.

Double-clicking on what I shared at //Accelerate, I’d like to offer five key thoughts and observations on the opportunities that lie ahead as we help enterprises extend and optimize their applications, their infrastructure, and their data in the hybrid world. I hope you will find these helpful as you start to embrace and extend the people, processes, and technologies beyond the walls of your own data center.

1. We’ve come a long way from the days of proprietary pools of infrastructure and private data centers. I can still remember when I started my IT career in the mid-1990s: the data center was siloed into mainframe, mid-range UNIX, and the up-and-coming x86 server environments. Applications and their respective data were tightly coupled with their infrastructure, and any attempt to share or move them required custom development and dreaded “middleware.”

Over the next few decades, the rise of x86 was powered by software, including the Linux and Windows operating systems as well as innovative platforms like VMware’s ESX hypervisor, allowing organizations to finally bring their applications (and their data) together on a common infrastructure platform.

Fast forward to today. With the rise of the Internet and now the public cloud, we have more choices than ever for where to deploy our applications: in the data center, through MSPs, or in the cloud itself. We even have a full range of platforms to help move virtual and container-based workloads between clouds. The breadth of architectural options is inspiring but also intimidating…

2. With all of this choice comes complexity. In the early days of the cloud, the key questions were “Who will win the cloud wars?” and “Does this mean the end of the data center era?” Over the last few years the dust has cleared, and we are now on the cusp of the “multi-cloud era,” where data and applications will live in harmony from the Edge to the Core (data center) to the Cloud (managed and public). To support this, compute and network virtualization have come a long way in driving workload portability between the data center and the clouds, and modern application platforms have accelerated the rate of application development and innovation.

This “consumerization of IT” has driven not only an exponential increase in block data volumes tied to critical relational databases but also growth in file and object data, fueled by the rise of rich media, logs, and streaming analytics. The pace of innovation is accelerating!

3. Companies with the deepest customer insights will have a major advantage. With all of this, though, comes a need for traditional IT to extend the perimeter of its management tools beyond the data center and into the cloud. Specifically, data management tools that can accelerate deployments, predict failures, and optimize data placement across a wide range of platforms. This optimization must also run beyond the traditional vectors of price/performance to include geo-locality (to manage for network latency or compliance) and protection levels (to manage for RPO/RTO).
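To make those vectors concrete, here is a minimal sketch in Python of how a placement engine might weigh them. Every name, number, and weight below is a hypothetical illustration, not a Pure1 API or algorithm:

```python
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    cost_per_gb: float           # $/GB/month
    latency_ms: float            # network latency to the workload's users
    rpo_minutes: float           # achievable recovery point objective
    in_compliance_region: bool   # does this location satisfy data-residency rules?

def placement_score(loc: Location, max_latency_ms: float, target_rpo_minutes: float) -> float:
    """Lower is better. Hard constraints (compliance, latency) disqualify a
    location outright; soft constraints (cost, RPO slack) are weighed."""
    if not loc.in_compliance_region or loc.latency_ms > max_latency_ms:
        return float("inf")
    rpo_penalty = max(0.0, loc.rpo_minutes - target_rpo_minutes)
    return loc.cost_per_gb + 0.5 * rpo_penalty  # illustrative weighting only

candidates = [
    Location("on-prem-array", 0.08, 1.0, 5, True),
    Location("cloud-east",    0.04, 18.0, 15, True),
    Location("cloud-eu",      0.05, 90.0, 15, False),
]
best = min(candidates, key=lambda l: placement_score(l, max_latency_ms=25, target_rpo_minutes=15))
print(best.name)  # -> cloud-east: compliant, within latency budget, cheapest
```

The point of the sketch is simply that once latency, compliance, and protection levels are expressed as constraints alongside cost, data placement becomes an optimization problem rather than a gut call.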

The reality is that this cannot be accomplished with traditional siloed infrastructure management solutions. A new breed of management tools is required: tools that can collect, aggregate, and analyze metrics for all workloads and all data, whether on-premises or in any cloud, with no compromise.
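As an illustration only (the field names below are assumptions, not Pure1’s telemetry schema), the first requirement of such a tool is a common record format, so that an on-premises array and a cloud-hosted volume can be compared side by side:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MetricSample:
    """One normalized telemetry sample, identical regardless of whether the
    source is an on-premises array or a cloud-hosted volume."""
    workload: str        # e.g. "prod-oracle" (hypothetical name)
    source: str          # "on-prem" | "aws" | "azure" | ...
    timestamp: datetime
    iops: float
    latency_ms: float
    capacity_used_gb: float

samples = [
    MetricSample("prod-oracle", "on-prem", datetime.now(timezone.utc), 42_000, 0.4, 18_500),
    MetricSample("prod-oracle", "aws",     datetime.now(timezone.utc),  9_500, 1.8,  4_200),
]

# With one schema, cross-cloud aggregation is a plain group-by
# rather than N vendor-specific parsers.
total_iops = sum(s.iops for s in samples if s.workload == "prod-oracle")
print(total_iops)
```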

4. AI and machine learning will be critical enablers and will augment, not replace, human abilities. Of course, none of this will be possible without a powerful AI engine that can comb through reams of telemetry data, spotting trends and driving actionable recommendations. This AI engine must thrive and grow as its data set expands, delivering deeper and more relevant insights to infrastructure architects who embrace their newly acquired “superpower.”
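As a toy example of what “spotting trends and driving actionable recommendations” can look like, here is a naive linear capacity forecast with made-up numbers; a real engine would use far richer models, but the shape of the output, a recommendation rather than a raw metric, is the same:

```python
from statistics import mean

def capacity_forecast(used_gb: list[float], capacity_gb: float, horizon_days: int = 90) -> str:
    """Fit a naive linear trend over daily capacity samples and flag arrays
    projected to fill within the horizon. A stand-in for real modeling."""
    daily_growth = mean(b - a for a, b in zip(used_gb, used_gb[1:]))
    projected = used_gb[-1] + daily_growth * horizon_days
    if projected >= capacity_gb:
        return f"recommend expansion: projected {projected:.0f} GB vs {capacity_gb:.0f} GB capacity"
    return "no action needed"

# 30 days of samples growing ~10 GB/day on a hypothetical 10 TB array
history = [9_000 + 10 * d for d in range(30)]
print(capacity_forecast(history, capacity_gb=10_000))
# -> recommend expansion: projected 10190 GB vs 10000 GB capacity
```

This is the “augment, not replace” model in miniature: the engine surfaces the recommendation, and the architect decides what to do with it.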

5. Optimization will drive the next level of efficiency and power the next era of investment. Although we are still early in this cloud journey, we are seeing a strong desire by both businesses and cloud providers to optimize infrastructure costs, allowing for even greater agility and increased investment. Believe me, it won’t be long before “data arbitrage” becomes the next hot topic!

At //Accelerate 2019, we shared our vision and strategy for our core management platform, Pure1®, powered by Pure1 Meta™, our AI engine for storage optimization. Over the last 10 years, we have extended this platform to our client base with great success: today we analyze over a trillion data points per day across 15,000 systems! Our newest announcements, VM Analytics Pro and Workload Planner, allow us to go even deeper, examining specific workloads, surfacing insights, and making optimization recommendations. The future is here, let’s embrace it!