Summary
The European Commission’s 2024 AI Act seeks to ensure AI use cases and applications stay “trustworthy.” Leveraging technology that helps you optimize your AI investments in an ever-evolving regulatory environment can help you build a brighter future.
1984.
The year a Ridley Scott-directed Apple commercial, alluding to George Orwell’s noted dystopian 1949 novel Nineteen Eighty-Four, introduced the first Apple Macintosh personal computer. Also the year The Terminator hit the big screen with its alarming, machine-run vision of the year 2029.
Twenty years later, in 2004, the movie masterpiece I, Robot came out, built around Isaac Asimov’s “Three Laws of Robotics”:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Fiction? Yes. But also uncannily prophetic. And now, exactly 20 years after the release of I, Robot, the European Commission’s 2024 AI Act has arrived, looking to establish a somewhat similar set of guidelines to those laid out in the film.
So—what’s all the fuss about?
What is the EU AI Act?
The AI Act is an attempt to limit AI use cases and applications to ensure they stay “trustworthy” and respect “fundamental rights, safety, and ethical principles”—something that’s frankly hard enough with human intelligence, let alone artificial.
While the big-screen fictional examples I gave paint an overtly bleak picture of an AI-based Armageddon of rogue robots, it’s precisely this kind of vision that has many people inside and outside of the tech industry justifiably worried.
Elon Musk was one of tens of thousands of signatories to the much-publicized open letter calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
That was March 2023. There has been no such pause, and since then, Musk has gone on to tell the Wall Street Journal CEO Council Summit that AI has the potential to assume control of humanity: “It’s a small likelihood of annihilating humanity, but it’s not zero.”
The AI Act outlines four levels of “risk” posed by AI systems: unacceptable, high, limited, and minimal. Rightfully, the Act declares an outright ban on any system considered a clear threat to the safety, livelihoods, and rights of people.
Sounds reasonable. Nobody fancies Armageddon, thank you. But the Act then goes on to give some examples, one of which is “social scoring by governments,” which is probably a nod to some of the systems purported to be used in China that “focus primarily on economic activity in commerce, government affairs, social integrity, and judicial credibility” of citizens.
This is where it could all get a bit complicated. Many of these types of systems will be created for purposes other than an obvious danger to human life. Consider a bank’s use of AI in its credit rating system, used to determine (without human oversight) your suitability for a loan, or a supermarket targeting precise demographics of consumer behavior and personal attributes, such as size, weight, ethnicity, sexual orientation, political persuasion, and so on, to tailor product choices and discounts to specific individuals. Also think remote biometric identification (i.e., Mission: Impossible-style facial recognition systems). Under the Act, these are all considered at least high risk, and some can be prohibited outright.
The question is: Where does this leave businesses? Because it would seem that outside of a chatbot that helps your customers operate their new microwave oven or processes a product return because the shoes they ordered don’t fit, there are an awful lot of systems that could fall into what the EU Act defines as “high risk” and thus require registration, oversight, and regulation.
If you’re considering an AI system, or already deploying AI, you can run some details through the EU AI Act Compliance Checker tool to determine your risk rating.
Once you’re clear on the risk rating of your AI solution, your next task will be to build a business case around your AI investment, because the EU AI Act adds a compliance burden on top of an already ever-evolving regulatory environment.
While you’re building out your business case, consider the analysis by David Cahn of Sequoia Capital, who puts the gap between the revenue expectations implied by the AI infrastructure build-out and actual revenue growth in the AI ecosystem at around $500 billion.
That’s a pretty big gap.
Cahn’s analysis is based on the huge GPU CAPEX spend by the large hyperscalers and the minimal revenues generated so far from their AI services.
How Can You De-risk Your AI Investments?
You could use existing LLMs for your GenAI projects. This will save you a lot of model training cost and time, but an off-the-shelf model is of limited use if it doesn’t know your business or your customers. You may also want to consider deploying a retrieval-augmented generation (RAG) solution, which enhances an existing LLM with your own data and information sources. You’ll want to ensure your data infrastructure can feed the hunger of your RAG solution, yet investment in your storage platform now tends to play second fiddle to spend on “AI” projects.
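To make the RAG pattern concrete, here’s a minimal sketch in Python. It’s illustrative only: the toy bag-of-words “embedding” stands in for a real embedding model, and the final LLM call is left as a placeholder rather than any specific product’s API.

```python
# A minimal sketch of the RAG pattern, not a production pipeline.
# The bag-of-words "embedding" and the placeholder LLM call are
# illustrative assumptions, not any specific product's API.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a term-frequency vector. A real system would
    # call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if nor_a_and_b := (norm_a and norm_b) else 0.0

# 1. Index your own documents. This corpus, and its growth over time,
#    is what your data infrastructure ultimately has to feed.
corpus = [
    "Returns are accepted within 30 days of purchase with a receipt.",
    "The defrost mode on the MW-200 microwave runs at 30% power.",
]
index = [(doc, embed(doc)) for doc in corpus]

def retrieve(query: str, k: int = 1) -> list[str]:
    # 2. Pull the most relevant snippets for the user's question.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def answer(query: str) -> str:
    # 3. Augment the prompt with retrieved context before the LLM call.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # a real system would return llm.generate(prompt)

print(answer("How does the microwave defrost mode work?"))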
Pure Storage® Evergreen//One™ is the best way to satisfy that hunger. Why? Because it dramatically reduces your initial storage spend at a point when capacity, performance requirements, and growth are all still unknowns.
Evergreen//One lets you pay only for what you consume. It lets you vary your consumption and adjust your performance and SLAs, and ultimately, it will de-risk your AI project and improve your ROI. Since you will likely be deploying your AI apps and components via Kubernetes and containers, you’ll also want to deploy Portworx® to enable you to manage your container storage, orchestrate it, and ensure it’s protected across your on-prem and hyperscaler clouds.
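As a concrete, hedged illustration, here’s how an AI app might request Portworx-backed storage through the standard Kubernetes API using the official Python client. The “px-rag-data” StorageClass and “ai-apps” namespace are hypothetical placeholders; the real class would be defined by your Portworx installation.

```python
# A sketch of requesting Portworx-backed storage for an AI workload via
# the standard Kubernetes API, using the official Python client
# (pip install kubernetes). The "px-rag-data" StorageClass and
# "ai-apps" namespace are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "rag-vector-store"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "px-rag-data",  # assumed Portworx-backed class
        # Request only what the app needs today; grow with consumption.
        "resources": {"requests": {"storage": "200Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="ai-apps", body=pvc
)
```

Because the claim requests only what the app needs today, it pairs naturally with a consumption-based model like Evergreen//One: capacity follows actual usage rather than an up-front guess.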
Now all you need is to get your hands on some GPUs and you’ll be fully prepared to not just manage but optimize your AI investments in the age of AI regulation. That bleak picture painted by those movies may not come to pass in the end, thanks to technologies that allow us to control how AI controls us.