Part 1: Getting More From the Data Center

Many vendors claim their products are the key to a more efficient data center, but achieving that takes more than just implementing new technologies.


Summary

In the quest to create a more efficient, modern data center, there are several elements enterprises should consider when trying to evolve their IT operations.


It’s a tale as old as time: a technology vendor declares, with a lot of supporting slides and spreadsheets, that its portfolio is the only secret weapon needed to evolve a data center to peak efficiency and align it with ever-changing business needs. These claims, however, don’t paint a full picture; there are other elements that must be considered when evolving to a more efficient operation.

What Is Data Center Efficiency?

Modern data center operations often have terms like “efficient” and “agile” spoken in the same breath. What do those terms mean? Different people will have their own interpretations:

  • Management equates efficiency and agility to saving money.
  • Many vendors claim implementing their products will make things better/more efficient/more agile without actually explaining what that means.
  • Administrators need to be more operationally efficient, since most wear multiple hats of responsibility.

All of the above are competing goals that are not easily conquered with a silver bullet. Achieving operational efficiency requires a broader organizational commitment involving three variables: people, process, and technology. There are always trade-offs for each that must be made to achieve a more streamlined operation. 


A Bit of History

Roughly 70 years ago, computing served a single purpose: mainframes did the number crunching and statistical calculations for important things like getting astronauts to the Moon and back, and enterprise storage management was relegated to managing reel-to-reel tape libraries for data production and retrieval.

Disk technology’s introduction and storage media’s evolution revolutionized how data could be managed and accessed. The capacity, speed, and reliability of data storage rapidly improved from floppy disks to hard disks to today’s modern flash drives. In parallel, storage systems evolved from direct-attached storage (DAS) to storage area networks (SANs), network-attached storage (NAS), hyperconverged infrastructure, and what is available in the public cloud.


In most cases, this evolution made systems more complicated to manage as they scaled, with legacy storage solutions initially designed for a single purpose becoming increasingly difficult to deploy and maintain. Supporting ever-shifting workload demands on these systems became complicated as well, despite legacy vendors repeatedly claiming that every iteration of their portfolio is “improved.” Their claims often proved dubious: how can data center operations become more efficient when they are built on core storage products designed 30 to 40 years ago? Modern data center operations have evolved. Planned and unplanned outages have a direct impact on business operations, so maintenance windows have become razor-thin and unexpected outages are not tolerated. Legacy technology was never designed to keep up with these modern expectations.

Since most of what is currently operated and managed in the data center is based on technology and concepts that are decades old, how can you pivot to create a more efficient, modern data center?

Data Center Tenets

Fixing data center inefficiencies is complicated, but the problem can be framed along three independent axes: data mobility, consolidation, and autonomic operations.


Data Mobility

Data Center Efficiency Maxim #1: Workload and data proximity matter.

The core function of an IT organization is to support production workloads. And, as more organizations adopt a hybrid cloud approach to their infrastructure, workloads may not all be contained to a single geographic location. The same is true for data: it can live locally, in an S3 bucket, in a file on a file system, or somewhere else. Macro-level management is key to providing global oversight across the enterprise: knowing where the data is, what workloads it services, and where it should be located for maximum effectiveness.

This is where Pure1® excels. Its AIOps storage management platform provides recommendations and forecasting by analyzing data usage and storage configurations. And while some of the data migration tools in FlashArray™ and FlashBlade® are more manual in nature today, our vision is for a Pure1 AIOps-discovered event to trigger ActiveWorkload, ActiveCluster™, or ActiveDR™ to move data to a more appropriate location.
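
To make that vision concrete, here is a minimal sketch of what such an event-driven flow could look like. Every name in it (the PlacementRecommendation payload, approved_by_admin, trigger_migration) is a hypothetical stand-in, not a real Pure1 or FlashArray API.

```python
# Hypothetical sketch only: PlacementRecommendation, approved_by_admin,
# and trigger_migration are illustrative stand-ins, not real Pure1,
# ActiveWorkload, or FlashArray APIs.
from dataclasses import dataclass

@dataclass
class PlacementRecommendation:
    volume: str        # volume whose data/workload proximity looks suboptimal
    source_array: str  # where the data lives today
    target_array: str  # where the analysis suggests it should live
    reason: str        # e.g., "workload latency exceeds threshold"

def approved_by_admin(rec: PlacementRecommendation) -> bool:
    # Stand-in for a real approval workflow (ticketing, chat-ops, etc.).
    answer = input(f"Move {rec.volume} from {rec.source_array} "
                   f"to {rec.target_array}? ({rec.reason}) [y/N] ")
    return answer.strip().lower() == "y"

def trigger_migration(rec: PlacementRecommendation) -> None:
    # Stand-in for kicking off an ActiveWorkload-style non-disruptive move.
    print(f"Migrating {rec.volume} to {rec.target_array}...")

def handle_recommendation(rec: PlacementRecommendation) -> None:
    """React to an AIOps-discovered placement event with a human in the loop."""
    if approved_by_admin(rec):
        trigger_migration(rec)

handle_recommendation(PlacementRecommendation(
    volume="db-vol-01",
    source_array="array-east",
    target_array="array-west",
    reason="workload moved to the west region",
))
```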

Consolidation

Data Center Efficiency Maxim #2: Doing more with less (hardware, energy, cost) is possible with the right integrated storage ecosystem.

Consolidation is not a new concept. Server virtualization’s ability to collapse numerous physical servers to virtual instances is similar to SANs consolidating multiple DAS-based workloads to a single array, while NAS enabled a similar consolidation for file-sharing needs. And, while most legacy vendors have merged SAN and NAS to a single platform, their efforts were more federated, meaning they adjusted the management plane to be able to address two separate systems. In other words, their consolidation was an optical illusion.

Unifying Data Access Systems

Do you still need dedicated systems if you run both block- and file-based workloads? FlashArray’s Purity//FA operating system allows block and file to coexist on the same array with no compromises: truly unified storage. It’s one thing to provide both SAN and NAS from a single storage endpoint, but the array must also perform well for both services in tandem and be able to evolve with changing requirements. Purity//FA enables this and represents a true unification of functions. Many legacy vendors instead offer a federated approach, providing a single interface that manages two independent, purpose-built systems.
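
As a sketch of what “one endpoint, two personalities” means in practice, the following illustrates provisioning a block volume and an NFS file system against the same array. The UnifiedArrayClient class and its methods are hypothetical stand-ins for illustration, not the actual Purity//FA API.

```python
# Hypothetical sketch: one array endpoint serving both block and file.
# UnifiedArrayClient and its methods are illustrative, not a real Pure API.

class UnifiedArrayClient:
    def __init__(self, endpoint: str, api_token: str):
        self.endpoint = endpoint
        self.api_token = api_token

    def create_volume(self, name: str, size: str) -> None:
        # Block personality: a volume exposed over FC, iSCSI, or NVMe-oF.
        print(f"[{self.endpoint}] block volume {name} ({size}) created")

    def create_file_system(self, name: str, protocol: str) -> None:
        # File personality: a file system exported over NFS or SMB.
        print(f"[{self.endpoint}] {protocol} file system {name} created")

# Same endpoint, same pool of flash, two access methods.
array = UnifiedArrayClient("array01.example.com", api_token="demo-token")
array.create_volume("oracle-data", size="2T")           # SAN-style workload
array.create_file_system("home-dirs", protocol="NFS")   # NAS-style workload
```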

Data Storage Media Efficiency

Pure Storage’s custom-designed DirectFlash® Module (DFM) not only outpaces traditional flash performance, it also packs far denser capacity than traditional 2.5” and 3.5” drive sizes. In 2023, Pure Storage’s trajectory for DFM size landed at 75TB per drive. We recently announced a 150TB DFM and will deliver a 300TB DFM soon. These numbers are key to Pure’s commitment to providing incredibly fast storage with large capacities in a small amount of rack space. Less physical space doesn’t just mean a smaller footprint; the power and cooling savings are also massive. This is a great benefit, considering some estimates put data storage at 25% of a data center’s power and cooling costs.
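
A rough back-of-the-envelope comparison shows why drive density matters. The figures below (a 1PB requirement, 15.36TB conventional SSDs, and the 150TB DFM mentioned above) are illustrative assumptions, not vendor specifications or measured results.

```python
# Back-of-the-envelope density comparison. All inputs are illustrative
# assumptions, not vendor specifications or measured results.
import math

usable_capacity_tb = 1000      # assume a 1PB capacity requirement
conventional_ssd_tb = 15.36    # a common high-capacity SSD size
dfm_tb = 150                   # the 150TB DFM mentioned above

ssds_needed = math.ceil(usable_capacity_tb / conventional_ssd_tb)
dfms_needed = math.ceil(usable_capacity_tb / dfm_tb)

print(f"Conventional SSDs needed: {ssds_needed}")   # 66
print(f"DFMs needed:              {dfms_needed}")   # 7

# Fewer drives means fewer slots, fewer chassis, and less power and
# cooling; exact savings depend on the specific hardware and environment.
```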

A DFM’s density is only one benefit. Purity views all installed DFMs as one universal pool, which allows it to control data distribution centrally and provide better flash-level wear leveling. This approach differs from mass-produced solid-state drives (SSDs), each of which has its own controller to independently handle data block management, which greatly increases their aggregate power consumption while lowering their long-term reliability.
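
The difference is easy to see in a toy model. The sketch below, a simplification under assumed parameters rather than how Purity actually works, contrasts writes balanced across one global pool with writes trapped inside per-drive silos under a skewed workload.

```python
# Toy model of wear distribution: global pool vs. per-drive silos.
# A simplification for intuition only, not how Purity is implemented.
import random

random.seed(42)
DRIVES, BLOCKS_PER_DRIVE, WRITES = 4, 100, 10_000

# Global pool: the controller can place any write on any drive,
# so it always picks the least-worn block in the whole system.
global_wear = [[0] * BLOCKS_PER_DRIVE for _ in range(DRIVES)]
for _ in range(WRITES):
    d = min(range(DRIVES), key=lambda i: sum(global_wear[i]))
    b = min(range(BLOCKS_PER_DRIVE), key=lambda j: global_wear[d][j])
    global_wear[d][b] += 1

# Per-drive silos: each SSD levels wear only within itself, and a
# skewed workload hammers one drive far more than the others.
silo_wear = [[0] * BLOCKS_PER_DRIVE for _ in range(DRIVES)]
for _ in range(WRITES):
    d = random.choices(range(DRIVES), weights=[70, 10, 10, 10])[0]
    b = min(range(BLOCKS_PER_DRIVE), key=lambda j: silo_wear[d][j])
    silo_wear[d][b] += 1

print("max block wear, global pool:", max(max(w) for w in global_wear))
print("max block wear, silos:      ", max(max(w) for w in silo_wear))
```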

Figure 1: The data center footprint reduction DFMs can enable compared to legacy storage.

Autonomic Operations

Data Center Efficiency Maxim #3: A more autonomic data center reduces the staff hours dedicated to manual processes and improves enterprise storage time to service.

The final efficiency variable is autonomic operations. “Autonomic” means administrators leverage automation for system management but stay involved, and can intervene, in orchestration processes; contrast this with a fully automated system with no humans involved. Automating routine processes through DevOps practices, scripting, and APIs is important, but ensuring admins can take action when needed enables more efficient and critical management oversight. Autonomic operations free up admin cycles to focus on other, more important tasks that drive bigger system improvements. This agility has a direct bearing on efficiency by moving IT operations closer to business outcomes.
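
One way to picture “autonomic, not just automated”: routine actions run unattended while higher-impact actions pause for a human decision. The policy, actions, and approval stub below are illustrative assumptions, not a real orchestration product.

```python
# Illustrative autonomic-operations loop: low-risk actions run
# unattended; high-risk actions wait for a human decision.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: str  # "low" or "high"

def execute(action: Action) -> None:
    print(f"executing: {action.name}")

def request_approval(action: Action) -> bool:
    # Stand-in for a ticket or chat-ops approval; auto-deny in this demo.
    print(f"queued for admin approval: {action.name}")
    return False

pending = [
    Action("expand volume by 10%", risk="low"),
    Action("rebalance workload to another array", risk="high"),
]

for action in pending:
    if action.risk == "low" or request_approval(action):
        execute(action)
```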

Pure Storage supports autonomic operations in two dimensions. First, Purity//FA and Purity//FB were designed, and have evolved, with an API-first strategy, meaning a new function gets an API before it’s worked into the management interface. This enables them to be seamlessly integrated into a wide variety of uses, from traditional infrastructure management all the way to DevOps’ insistence on “infrastructure as code.”
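
For a taste of what API-first enables, here is a minimal infrastructure-as-code sketch using the purestorage Python client for the FlashArray REST 1.x API. The array address, token, and volume names are placeholders, and exact client behavior varies by Purity version.

```python
# Minimal infrastructure-as-code sketch using the purestorage Python
# client (FlashArray REST 1.x). Address, token, and names are
# placeholders; exact API details vary by Purity version.
import purestorage

array = purestorage.FlashArray(
    "array01.example.com",        # placeholder management endpoint
    api_token="YOUR-API-TOKEN",   # placeholder credential
)

# Declare the volume we want; create it only if it doesn't exist yet.
desired = {"name": "ci-scratch", "size": "500G"}
existing = {vol["name"] for vol in array.list_volumes()}
if desired["name"] not in existing:
    array.create_volume(desired["name"], desired["size"])

array.invalidate_cookie()  # end the REST session
```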

The second dimension involves Pure Fusion™, the vision of an extension to the control plane that will become a universal option for administering and managing a fleet of different endpoints: FlashArray, FlashBlade, Pure Cloud Block Store™, and beyond. The goal is to take the guesswork out of manually matching storage needs to a workload. Instead, Pure Fusion, informed by Pure1 insights, could do it autonomically.

What Does All of This Mean?

The operational foundations of today’s data centers were created decades ago, with single-purpose storage supporting single-purpose needs. Because of this, many businesses with legacy storage systems are not streamlined and struggle to evolve their IT operations into ones that are efficient and agile. All of this applies to enterprise storage services, which, in our data-centric world, are crucial for business success. While there’s no easy way to transform organizations, the Pure Storage platform can be a cornerstone for starting that evolution.

Get to Know the Pure Storage Platform: The First of Its Kind

Pure Storage recently announced that our portfolio of products has become a platform: a single, integrated ecosystem of software and hardware dedicated to simplifying the procurement, deployment, and management of enterprise storage services to support the ever-shifting workloads and data needs that are the lifeline of today’s businesses.


Learn more about our vision and how we can accelerate efforts to make your data center hyper efficient.