We are heading into a summer of new product introductions from Big Storage (primarily via Las Vegas).
Anyone who has been in the storage infrastructure world understands that product transitions are tough, for vendors as well as for IT buyers. The true cost of these transitions goes beyond the wasted investment in obsolete products. With every new product introduction, customers are expected to upgrade to new technology; however, new storage platforms almost always require forklift upgrades due to architectural incompatibilities between generations. Similarly, some scale-out architectures can't add newer nodes (say, with higher capacities) to existing arrays or clusters, also triggering forklift upgrades.
STOP: If you’d rather watch an illustrative video that includes laser shooting forklift robots – then by all means click, otherwise keep reading…
Down the Slippery Slope
That brand-new Big Storage array starts out working as advertised, but eventually performance and scale limitations, as well as failure rates, take their toll. Of course, Big Storage knows this, and so they raise maintenance rates for years 4 and beyond. Any customer who wants to keep running that array faces a big penalty: maintenance extortion.
Surprise! The Forklift Upgrade
Big Storage gives customers a choice: "Don't purchase the expensive maintenance; instead, put the money toward a brand new, expensive array." Expensive is the common thread here, and buying the new array typically means repurchasing all of the hardware and software in the "old" array. There is some consolation for customers: they get new features, performance, and scale with their new equipment. Unfortunately, paying for the array is only the beginning; all of the data and applications need to be migrated from the old array to the new one.
New equipment will need to be rolled into a customer’s data center via forklift, where it needs to be racked and stacked, cabled, zoned, tested, poked, prodded, and cajoled to be ready to host the production workloads from the old array.
Next, the storage team needs to negotiate and plan with a wide range of infrastructure, application, and line-of-business stakeholders to find the least-bad times to actually perform the data migration for each application. There will be many questions from those stakeholders, and no easy answers. Storage teams have a tremendously tough and thankless job here.
The actual migration can be done non-disruptively using various software tools and techniques. However, while many applications allow for non-disruptive migration, array-based features like snapshots, replication, and clones that fuel business processes are not carried over by application-level migration technologies. Migrations are very complex, and given the risks of disruption and data loss, the safe practice is to shut down the affected application and incur scheduled downtime. And when does this typically happen? Not surprisingly, on nights and weekends, which is not so great for the storage team's family time. And migrating data is in many ways like rolling the dice: the result will probably be OK, but every now and then, disaster strikes.
Once the migration has finished, the hosts and applications are brought back online. The old array needs to be kept around to roll back any applications that have trouble on the new array. Effectively, the storage team now has two arrays to own and manage, not one. Eventually the old array can be fully decommissioned and wheeled out on another forklift.
We routinely hear migration horror stories, like a major telco that spends 18 months on migration planning and execution for every 2 years of production, or a global financial services firm with a team of 30 admins dedicated to data migrations 365 days a year. Sound painful and expensive? It is. But we're done, right?
The Forklift Upgrade Strikes Again
A new array resets the clock, and storage teams can go back to the day-to-day grind of disk-based storage management. But time moves quickly, and after another 2-3 years Big Storage comes calling again: time for another forklift upgrade!
That's right: repurchase the entire array, along with any new capacity installed along the way. Since most customers are growing their usable storage by more than 20% annually, the cost and risk of replacing the array are growing too. Those dice are looking scarier all the time.
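To see why each cycle hurts more than the last, here is a minimal sketch of the compounding at work. The 20% annual growth rate comes from the figure above; the 100 TB starting capacity and the 3-year upgrade cycle are hypothetical, illustrative assumptions:

```python
# Illustrative only: how the capacity that must be repurchased and
# migrated compounds across forklift upgrades.
# Assumptions (hypothetical): 100 TB starting capacity, 20% annual
# growth, a forklift upgrade every 3 years.
def capacity_at_upgrade(start_tb, annual_growth, cycle_years, upgrade_n):
    """Usable capacity (TB) at the time of the Nth forklift upgrade."""
    return start_tb * (1 + annual_growth) ** (cycle_years * upgrade_n)

for n in range(1, 4):
    tb = capacity_at_upgrade(100, 0.20, 3, n)
    print(f"Upgrade {n}: ~{tb:.0f} TB to repurchase and migrate")
# Upgrade 1: ~173 TB, Upgrade 2: ~299 TB, Upgrade 3: ~516 TB
```

Under these assumptions, by the third forklift upgrade the customer is repurchasing and migrating roughly five times the capacity they started with.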
A Cycle of Risk, Expense, and Waste…Forever
Is there a happy ending here? I wish I could say yes. Unfortunately, this process simply continues indefinitely, and it is absolutely, 100%, a colossal waste of time, money, and resources.
For CIOs: Imagine what those resources could have achieved if put to strategic use instead. Greater competitiveness? Faster time to market? New, innovative applications? More and happier end customers?
What If There Was a Better Way?
The picture I've painted is not pleasant, yet it is the way the storage industry has worked for decades. Big Storage held all the cards, customers simply had no alternative, and so on and on it went.
What if there really was an alternative? A way to escape the downward spiral and avoid the forklift upgrade gauntlet? How would that feel? What would it look like?
We invite you to visit us at Transformation2, and prepare to find out. Because once you do, and once you experience it, we're pretty sure you'll never want to go back. Ever.