
Delivering efficient protection and restoration of your critical data and applications is essential, and it's a prerequisite for deriving real business value from your protection data. But stringent service level agreements (SLAs) and ever-shrinking recovery time objectives (RTOs) create tremendous challenges when your data-center strategy spans multiple platforms and technologies. To meet these requirements, more organizations are moving to hybrid and data-intelligent infrastructures.

Data-protection technologies and processes mean nothing without clear objectives. Establish those objectives, align them with your business and IT goals, and then measure and fine-tune to improve over time. Understanding real-world data protection and availability SLAs is increasingly critical, not just for IT practitioners but also for the technology companies that provide the tools.

Enterprise Strategy Group (ESG) recently surveyed nearly 400 IT professionals responsible for or involved in data protection for US and Canadian organizations. The goal: to better understand their ability to meet application and workload SLAs. ESG sought to get a clearer picture of the state of end-user deployments, identify gaps, and highlight future expectations. The study also evaluated tolerance for downtime, downtime metrics, and real-world SLAs in the context of actual data loss against the backdrop of availability technologies and methods, including hybrid environments.

I recently discussed the study with ESG Senior Analyst Christophe Bertrand. I learned that most organizations experience an SLA gap, the delta between their stated SLAs and their actual ability to meet them, which greatly impacts their ability to meet RTOs. The impact reaches beyond IT operations to the business. Many participants who experienced application downtime also experienced:

  • Loss of revenue
  • Decline in customer confidence
  • Damage to brand integrity

Most organizations reported they couldn’t handle more than an hour of downtime for mission-critical apps. Yet the estimated mean time for recovery exceeded six hours.
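
To make the SLA gap concrete, here's a minimal sketch in Python that computes the delta between an application's stated RTO and its measured recovery time. The application names and figures are hypothetical; only the headline numbers (roughly one hour of tolerable downtime versus a six-plus-hour mean recovery) come from the survey.

```python
# Illustrative sketch (not from the ESG report): quantifying the "SLA gap"
# as the delta between an application's stated RTO and its measured
# recovery time. All names and per-app figures below are hypothetical.

from dataclasses import dataclass

@dataclass
class AppRecoveryRecord:
    name: str
    stated_rto_hours: float         # downtime tolerance promised in the SLA
    measured_recovery_hours: float  # actual time it took to restore service

def sla_gap_hours(record: AppRecoveryRecord) -> float:
    """Positive values mean recovery took longer than the SLA allows."""
    return record.measured_recovery_hours - record.stated_rto_hours

# Mirrors the survey's headline numbers: ~1 hour of tolerable downtime
# for mission-critical apps versus a mean recovery time above 6 hours.
apps = [
    AppRecoveryRecord("order-processing", stated_rto_hours=1.0,
                      measured_recovery_hours=6.5),
    AppRecoveryRecord("internal-wiki", stated_rto_hours=24.0,
                      measured_recovery_hours=4.0),
]

for app in apps:
    gap = sla_gap_hours(app)
    status = "MISSED SLA" if gap > 0 else "within SLA"
    print(f"{app.name}: gap = {gap:+.1f} h ({status})")
```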

The survey also revealed surprising trends around business-continuity and disaster-recovery testing. Only one in four organizations had implemented weekly testing; the majority tested monthly or less often. Given the frequency of cyberattacks, ransomware, and other infrastructure-crippling incidents, that's a dangerous gamble. More frequent testing lets you identify and fix issues quickly, refining your data-protection strategy as you go, before downtime becomes an issue. You certainly don't want your first test of that strategy to be a live event that damages your business.
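
One way to keep that testing cadence honest is a simple check that flags applications whose last successful recovery test has aged past the cadence your strategy calls for. The sketch below is a hypothetical illustration; the inventory, dates, and weekly threshold are assumptions, not part of the ESG study.

```python
# Illustrative sketch (hypothetical data, not from the ESG study): flag
# applications whose last successful disaster-recovery test is older
# than the cadence your data-protection strategy calls for.

from datetime import date, timedelta

# Hypothetical inventory: app name -> date of last successful DR test.
last_tested = {
    "order-processing": date(2020, 11, 2),
    "billing": date(2020, 7, 15),
    "internal-wiki": date(2020, 1, 20),
}

# Weekly cadence: the bar only one in four surveyed organizations met.
MAX_TEST_AGE = timedelta(weeks=1)

def overdue_apps(inventory, today, max_age):
    """Return apps whose most recent DR test exceeds the allowed age."""
    return sorted(
        name for name, tested in inventory.items()
        if today - tested > max_age
    )

for name in overdue_apps(last_tested, date(2020, 11, 9), MAX_TEST_AGE):
    print(f"DR test overdue: {name}")
```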

Learn more in the full report, including: 

  • How cloud-based applications challenge traditional on-premises deployments in terms of recoverability experience and perception
  • Recovery point objective (RPO) disparities among software-as-a-service applications
  • How data-protection, business-continuity, and disaster-recovery processes are becoming increasingly intertwined with the cloud

Pure Storage has been named a Leader in Gartner’s recently published 2020 Magic Quadrant for Primary Storage. Gartner positioned Pure highest and furthest in both ability to execute and completeness of vision. We could only achieve these milestones through the commitment of our customers, partners, and the Pure team.