So you’re a technology company – now what?
In the first installment of this series, The Digital Goldrush, we discussed the monumental shift to digital and what that shift means for the evolution of business. As digital and data team up to drive new business models, a fresh series of infrastructure challenges has emerged. Turning those challenges into opportunities for innovation requires a deeply analytical understanding of all the tools available today, and of when to use them.
For modern organizations, cloud should be at the center of technical strategy. The agility and accessibility of the cloud have permanently transformed customer expectations around storage infrastructure. Vendors in SaaS, public cloud, private cloud and traditional on-prem have risen to the challenge with a host of new interconnected tools and integrated functionality built for today’s increasingly complex workloads.
Public cloud captures most of the headlines today, yet 43% of IT decision makers say they have moved a workload from on-prem to public cloud and back again, citing a wide range of factors. We anticipate a multi-cloud future – one in which organizations strategically deploy specific toolsets and platforms to accommodate varied workloads. Getting there requires reviewing each workload against four key considerations: cost, compliance, security and performance.
Cost is a complex piece of the puzzle. For a small start-up, public cloud might be the most cost-effective option. Public cloud is agile, easy to manage and easy on CapEx. But as an organization grows and scales, public cloud cost becomes difficult to control and predict. Public cloud has spurred the creation of dedicated cloud-cost management startups and consulting firms – one such vendor starts with pricing at 3% of monitored cloud spend.
Conversely, a traditional on-prem solution is likely cost-prohibitive for that same small startup. But as the company grows and its workloads diversify, it makes fiscal sense to own the infrastructure behind some of those workloads while others remain in the public cloud. For example, many predictable workloads may cost less to “own” on-premises, while bursty workloads that need flexibility may “rent” for less in the public cloud.
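The own-versus-rent trade-off above can be sketched with simple monthly-cost arithmetic. All of the figures below – the CapEx, OpEx, hourly rate and utilization numbers – are hypothetical, chosen only to illustrate how a steady workload can favor ownership while a bursty one rents for less:

```python
def monthly_cost_on_prem(capex: float, monthly_opex: float, years: int = 5) -> float:
    """'Own': up-front hardware spend amortized over its lifetime, plus fixed OpEx."""
    return capex / (years * 12) + monthly_opex

def monthly_cost_cloud(rate_per_hour: float, utilization: float) -> float:
    """'Rent': pay-as-you-go, so cost scales with how much you actually run."""
    hours_per_month = 730  # average hours in a month
    return rate_per_hour * hours_per_month * utilization

# A predictable workload running flat-out tips toward ownership...
steady_cloud = monthly_cost_cloud(rate_per_hour=2.0, utilization=1.0)   # $1,460/mo
owned        = monthly_cost_on_prem(capex=60_000, monthly_opex=300)     # $1,300/mo

# ...while the same workload at 15% utilization rents for far less.
bursty_cloud = monthly_cost_cloud(rate_per_hour=2.0, utilization=0.15)  # $219/mo

print(f"steady in cloud: ${steady_cloud:,.0f}/mo, owned: ${owned:,.0f}/mo, "
      f"bursty in cloud: ${bursty_cloud:,.0f}/mo")
```

The crossover point moves with every input – amortization period, staffing, data-egress fees – which is exactly why each workload deserves its own analysis rather than a blanket policy.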
The common denominator in an effective compliance strategy is data. Data sovereignty, processing, storage, retention and availability are all factors to consider when thinking about how your application handles data. IT pros need to ensure their workloads and architecture are built to respond dynamically when compliance standards inevitably change. Data sovereignty and related requirements continue to evolve, and the effort required to redesign and then shift infrastructure toward each new regulation only grows.
Information security continues to be a challenge for businesses, with major breaches reported on a regular basis. The recent Equifax breach touched tens of millions of consumers and led to a complete internal leadership overhaul – an outcome that won’t go unnoticed by other C-level execs when making their own decisions about security.
The workload dilemma is not so much about where the data belongs – whether public cloud or on-prem – but about how the most critical and sensitive segments of that data are secured. Cyber security and evolving threat vectors continue to complicate application development, and areas such as identity management, vulnerability scanning, patching and security monitoring all require forethought and strategic implementation.
While the public cloud has made leaps and bounds in performance over the past 2-3 years, bleeding-edge on-prem solutions will always be several steps ahead. Ultimately, the public cloud runs on physical storage arrays, with network and virtualization layers stacked on top – so quite simply, it will never be as fast or as available as the arrays on which it runs.
Today, on-prem delivers 99.9999 percent availability, while the major public cloud providers are only on the hook for 99.99. Again, this may only matter for some workloads – if an R&D lab, for example, goes offline for a day, the employees go home and show back up for work tomorrow. Operations aren’t disrupted. But that same outage for a mission-critical, customer-facing application could result in the loss of money, customers and brand value.
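The gap between those nines is easy to underestimate until you convert it to clock time. A quick sketch of the arithmetic – assuming a plain 365-day year, which is how these figures are commonly quoted:

```python
def annual_downtime_seconds(availability: float) -> float:
    """Maximum time a service may be unavailable per 365-day year under a given SLA."""
    seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000 seconds
    return seconds_per_year * (1.0 - availability)

for label, sla in [("four nines (99.99%)", 0.9999),
                   ("six nines (99.9999%)", 0.999999)]:
    secs = annual_downtime_seconds(sla)
    print(f"{label}: about {secs / 60:.1f} minutes of downtime allowed per year")
```

Four nines permits roughly 52 minutes of downtime a year; six nines permits about half a minute. For an R&D lab that difference is noise; for a customer-facing revenue application it is the whole argument.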
Workload segmentation is poised to become increasingly important, particularly as data-intensive, next-generation workloads like artificial intelligence, deep analytics and machine learning are democratized.
In the next segment, we’ll discuss how to take back control of your environment, and the myriad ways in which organizations glean increasingly critical value from data. And if you’re a CIO with an interesting hybrid story or challenge, we’d love to hear from you.