How the EU AI Act Could Change AI Investment Strategies

The new EU AI Act is the world’s first comprehensive legal framework on AI. Here’s a look at what it is, what it means for organizations, and how storage as a service can help them overcome compliance challenges and avoid large upfront investments.


AI promises to change the way we do everything. In fact, it’s already well on its way to fulfilling that promise. 

The problem is that it’s also opening a somewhat predictable Pandora’s box of challenges and pitfalls. Some of these demand much closer scrutiny from bodies powerful enough to prevent potentially catastrophic damage to companies and their customers, and to ensure AI delivers on its promise without simultaneously making things worse. 

Enter the world’s first comprehensive set of AI rules: the European Union’s new AI Act.

Read on to learn what the AI Act means for enterprises and how it might force them to rethink infrastructure strategies in order to stay ready for what’s next and avoid risky investments with uncertain ROI. 

What Is the EU AI Act?

The EU AI Act is the first-ever legal framework on AI, setting a benchmark for the rest of the world. The new rules will affect companies globally, not just in Europe. 

Its purpose is to give AI developers and deployers clear guidelines regarding specific AI uses while seeking to reduce administrative and financial burdens for businesses, especially small and medium-sized enterprises. 

The new rules aim to foster trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing the various risks of very powerful and impactful AI models. Although most AI systems pose little risk and solve many issues, certain systems can present unacceptable risks that need to be addressed from the ground up.

Per Belgian Digitization Minister Mathieu Michel: “Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR.”

How Will the EU AI Act Affect How AI Systems Are Developed and Deployed?

The Act aims to ensure all AI applications are safe and secure by subjecting higher risk applications to assessments, approvals, and ongoing monitoring—both before and after an application has been deployed in the market. This has broad implications for organizations investing in AI, from the additional manpower needed for oversight and documentation to the capabilities required of the underlying infrastructure and data sets.

First, AI systems are categorized by risk levels:

[Figure: pyramid showing the four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Source: European Commission]

Per the European Commission: “All AI systems considered a clear threat to the safety, livelihoods, and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.”

High-risk applications are defined as those used for:

  • Critical infrastructures (e.g., transportation)
  • Educational or vocational training
  • Safety components of products 
  • Employment or management of workers and access to self-employment 
  • Essential private and public services 
  • Law enforcement that may interfere with people’s fundamental rights 
  • Migration, asylum, and border control management 
  • Administration of justice and democratic processes 

Furthermore, per the Act, all AI systems considered “high risk” will be subject to strict assessments before they can be released to the market. To conform, these high-risk applications must have:

  • Adequate risk assessment and mitigation systems
  • High-quality data sets feeding the system
  • Activity logging for transparency and traceability 
  • Detailed documentation providing all information necessary on the system and its purpose
  • Clear and adequate information for the deployer
  • Appropriate human oversight measures 
  • High level of robustness, security, and accuracy
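As a sketch of what the activity-logging requirement might look like in practice, the snippet below wraps a hypothetical inference function so that every call is recorded with a timestamp and an input hash. The model name, log schema, and scoring logic are all illustrative; the Act mandates logging for traceability but does not prescribe a format.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(model_name):
    """Decorator that records each inference call for traceability.

    The log schema here is hypothetical; adapt it to your own
    documentation and oversight requirements.
    """
    def wrap(fn):
        def inner(payload):
            result = fn(payload)
            record = {
                "ts": datetime.now(timezone.utc).isoformat(),
                "model": model_name,
                # Hash the input so the log stays traceable without
                # duplicating potentially sensitive data.
                "input_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest(),
                "output": result,
            }
            audit_log.info(json.dumps(record))
            return result
        return inner
    return wrap

@audited("credit-scoring-v1")  # hypothetical high-risk model
def score(applicant):
    # Toy stand-in for a real model's decision logic.
    return {"approved": applicant.get("income", 0) > 30000}

print(score({"income": 45000}))  # {'approved': True}
```

Because the wrapper hashes inputs rather than storing them verbatim, the audit trail can support traceability without itself becoming a second copy of sensitive data.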

Then, there’s the process an organization must undergo to get clearance to deploy a “high-risk” AI application in the market. Note: The process must be repeated if significant modifications are made down the line.

[Figure: step-by-step process for the declaration of conformity. Source: European Commission]

Per the European Commission: “Once an AI system is on the market, authorities are in charge of market surveillance, deployers ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and deployers will also report serious incidents and malfunctioning.”

This translates to additional staff hours and even greater investment, a daunting reality when many are already weighing the immense costs of AI against its potential ROI.

What About Opt-Out Mechanisms?

The AI Act contains a provision equating AI and machine learning with “text and data mining” (TDM) under the EU Text and Data Mining Directive. Consequently, the use of “machine learning” is only allowed if:

  1. The person programming the machine-learning functionality has had lawful access to the content for the purpose of text and data extraction; and
  2. The owner of the copyright and related rights and/or the database owner have not expressly reserved the extraction of text and data—the so-called “opt-out” mechanism.
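One widely used reservation signal today is a robots.txt rule aimed at AI crawlers. The sketch below uses Python’s standard `urllib.robotparser` to check such a signal before mining a page; the crawler name and robots.txt content are hypothetical, and whether a robots.txt entry meets the Directive’s “expressly reserved” bar remains an open legal question.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt from a publisher that reserves its
# content against a specific AI crawler but allows everyone else.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

def may_mine(user_agent, url, robots_txt=ROBOTS_TXT):
    """Return True if robots.txt does not reserve this URL against
    the given crawler. This is one machine-readable opt-out signal;
    it is not, by itself, a legal determination."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

print(may_mine("ExampleAIBot", "https://example.com/articles/1"))  # False
print(may_mine("SomeOtherBot", "https://example.com/articles/1"))  # True
```

A training-data pipeline could run a check like this per source and log the result, so there is a record of which opt-out signals were honored at collection time.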

This opt-out function could significantly limit data used for training AI models. 

Although it’s not yet clear how the opt-out would work legally, and no case law or other authoritative document yet determines what counts as a sufficient reservation, the implementation of the EU AI Act may accelerate the trend of companies blocking the use of their data in AI models. 

Organizations will need to confirm that the data used in machine learning models carries no access restrictions. This could prove to be a prime use case for retrieval-augmented generation (RAG) pipelines, which allow LLMs to access specific, proprietary data sources rather than relying on swaths of public data.
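As a minimal illustration of that idea, the retrieval step below draws answers only from a small, rights-cleared corpus before composing the prompt handed to an LLM. The corpus, the overlap-based ranking, and the prompt format are purely illustrative; production RAG systems typically use vector embeddings and a real model API.

```python
# Hypothetical corpus the organization already has rights to use.
APPROVED_CORPUS = [
    {"id": "policy-001", "text": "Refunds are issued within 14 days of a return."},
    {"id": "policy-002", "text": "Warranty claims require proof of purchase."},
]

def retrieve(question, corpus, k=1):
    """Rank documents by naive word overlap with the question.
    A production system would use embeddings instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, corpus):
    """Compose the context-plus-question prompt passed to the LLM,
    so the model answers only from the approved documents."""
    context = "\n".join(d["text"] for d in retrieve(question, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How fast are refunds issued?", APPROVED_CORPUS))
```

Because the model only ever sees text retrieved from the approved corpus, the provenance question shifts from “what was in the training data?” to the much narrower “what is in this corpus?”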

Minimizing Risk of Investments with Evergreen//One/STaaS

For organizations already weighing AI’s costs against its potential for legitimate business use cases, the EU AI Act could give more reason to pause. For many, the clearest path forward is de-risking the AI investment as much as possible until the ROI becomes more apparent. From an infrastructure standpoint, adopting AI on a flexible, as-a-service platform could be the least risky approach compared with building out storage, networking, and compute in the data center only to find the risk wasn’t worth the reward.
