Pure Storage recently attended the AI Summit in London. It was encouraging to see a huge turnout at an event that brought together some of the biggest influencers and sharpest minds in the AI space. I was delighted to present lessons learned from machine learning deployments to the financial services sector. Additionally, Pure was recognised for its ground-breaking AIRI™ solution at the event’s AIConics awards, picking up the accolade for Best Innovation for AI Hardware. AIRI was awarded for being the industry’s first AI-ready infrastructure, architected by Pure Storage and NVIDIA to enable data architects, scientists and business leaders to operationalize AI-at-scale for every enterprise, getting projects up and running fast.
While showcasing AIRI and our other innovations at the show, and talking with customers and partners, it truly felt that AI is coming of age. So why is AI taking off in a meaningful way now? After all, it’s important to remember that AI is not a new concept: we’ve been talking about it and experimenting with it since the 1950s, enduring numerous AI winters along the way. The simple answer is data. I’m reminded of a remark from Peter Norvig, an engineering director at Google, that we don’t have better algorithms, we just have more data. Data, and the underpinning technologies that allow us to store and analyse information in real time, are the keys to unlocking the benefits of AI.
Data itself isn’t enough, though: a number of best practices need to be applied in order to arrive at functioning, beneficial AI. Ensure you’re architecting for data acquisition, cleaning, exploration, training and model validation. Design infrastructure to scale with the sophistication of your data pipelines and models. Finally, serve models at scale using best-of-breed tools that support your operations. If you’re venturing into AI for the first time, start small; once you’ve proven your project or concept, it’s time to scale up. Organizations that do all of this, and pay close attention to data quality, provenance and labelling, will have a much higher success rate in completing their individual AI missions.
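To make those stages concrete, here is a minimal sketch of the pipeline shape described above: acquisition, cleaning, exploration, training and validation. Everything in it is an illustrative assumption for this post, not part of any Pure Storage or AIRI product: the toy data, the trivial mean-predictor "model" and all function names are placeholders for whatever tools your own stack uses.

```python
# Hypothetical five-stage pipeline sketch: acquire -> clean -> explore
# -> train -> validate. The data and "model" are deliberately trivial.

def acquire():
    # Stand-in for real data acquisition (sensors, logs, archives).
    return [("a", 1.0), ("b", None), ("c", 3.0), ("d", 5.0)]

def clean(rows):
    # Drop records with missing labels.
    return [(k, v) for k, v in rows if v is not None]

def explore(rows):
    # Basic summary statistics to sanity-check the data.
    values = [v for _, v in rows]
    return {"count": len(values), "mean": sum(values) / len(values)}

def train(rows):
    # Toy "model": always predict the mean of the training labels.
    values = [v for _, v in rows]
    mean = sum(values) / len(values)
    return lambda _x: mean

def validate(model, rows):
    # Mean absolute error of the model on held-out rows.
    errors = [abs(model(k) - v) for k, v in rows]
    return sum(errors) / len(errors)

raw = acquire()
data = clean(raw)
stats = explore(data)
model = train(data[:2])          # "train" on the first two records
mae = validate(model, data[2:])  # validate on the rest
```

The point is the separation of stages, not the arithmetic: each stage has a single responsibility, so any one of them can be scaled out or swapped for a production tool as the pipeline grows.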
At the current rate of advancement and innovation, I believe we’ll see general AI rather than narrow AI within the next 20 years. This would be a truly intelligent machine able to learn, create and adapt to any situation, essentially capable of all the cognitive functions of a human brain. We’re already seeing fantastic success right now with what we call narrow AI. Put simply, narrow AI is targeted at performing a single task, or a limited number of specific tasks, usually in a certain industry vertical. This could be anything from a self-driving car to machines capable of debating with a human.
An even better example would be in healthcare, where Paige.AI is pioneering the use of AI to fight cancer. With access to one of the world’s largest tumour pathology archives, Paige.AI is focused on revolutionizing clinical diagnosis and treatment in oncology. Today, pathological diagnoses rely on manual, subjective processes developed more than a century ago. Paige.AI aims to transform the pathology and diagnostics industry from highly qualitative to a more rigorous, quantitative discipline, using AI to guide pathologists, clinicians and researchers. The result should be faster, more accurate diagnoses, and better survival rates for patients.
The applications of AI are clearly limitless, and general AI will open up even further possibilities in the future. It’s staggering how far we’ve come in a relatively short space of time. If you had asked me 20 years ago whether I thought we would achieve this level of AI, I would have said yes, but not in my lifetime. As it stands today with the industry pulling in the same direction and solutions such as AIRI publicly available, real-world AI is truly here.