8 Challenges with AI for Data Security

In cybersecurity, AI can help us, but it can lend a hand to the bad guys, too. Here are eight things to consider when leveraging AI for security and how to address them.

Challenges with Applying AI to Data Security


As AI makes headlines and augments the way we work, it’s best to be cautiously optimistic when it comes to cybersecurity. Can it lend a hand? Certainly. 89% of security leaders have indicated that AI is important to improving their security posture by 2025, up from 79% last year. The majority of security leaders believe AI is the emerging innovation that offers the greatest potential to boost cybersecurity. Respondents were hopeful about AI’s ability to “self-react to threats,” “streamline risk evaluation,” “enhance efficiency and accuracy,” “enable prompt issue resolution,” and “leave more time and energy for our teams.”

But AI can lend a hand to the bad guys, too. It’s lowering the barrier to entry as threat actors leverage AI to identify vulnerabilities and exploit paths faster, helping them launch attacks or breach your defenses.

As a result, enterprises allocated 29% more budget to new, innovative, and experimental security solutions this year, raising this category from 11% to 15% of total budget. Security leaders expressed interest in the potential of emerging technologies, such as AI, zero trust network access, blockchain, and machine learning, to “stay ahead of threat actors.”

Then, there are heightened data privacy concerns with AI around consent, transparency, and control of personal information. It’s a double-edged sword, and as we learn to wield it, here are some things to consider along the way.

1. AI won’t eliminate the human element from cybersecurity.

AI can solve some problems faster than humans alone, but it’s not hands-off yet. As long as humans play a role in its implementation, the human element can introduce threats and vulnerabilities AI can’t solve for. (In some cases, such as model training, the models themselves can be skewed by human error.)

Worse: AI may even make managing the human element more challenging. As AI-based or AI-generated attacks increase, training people to identify them, and to respond appropriately, becomes more and more difficult.

2. There isn’t a silver bullet in security, and AI’s false sense of security can be risky.

Security is a never-ending, always-evolving landscape. Just like a security-savvy enterprise, attackers and malware developers make it their business to modernize, update, and evolve their tools every single day. While AI can be a tool in your toolbox, it’s also in theirs. 

Enterprises need to exercise the same vigilance, adopting new tools and technologies, and applying a many-layered approach to stay up to date. AI should be part, but not all, of that effort.

3. AI is currently more reactionary than proactive. 

The very nature of learning models and pattern recognition is reactionary—it’s a powerful tool when it comes to anomaly detection and spotting behavioral outliers that can indicate a breach or lurking attacker. However, its application is often labeled as proactive. 

Proactive technology and processes in your security stack help you build a stronger defensive posture, which is crucial when attackers are always learning new ways to evade reactive detection.

Read more: 5 Ways to Address Security Gaps Before an Attack >>

Reactive is important, but a security strategy has to cover the before, during, and after of a breach to be effective. That means good data hygiene, patch management, multifactor authentication, fast analytics of consistent logging, and plenty of training and tabletop exercises to ensure recovery objectives are thoroughly tested and can be met.

You don’t want to be only reactive, so let’s talk about being proactive. Our platform is designed to give customers the flexibility and agility to get ahead of threats rather than just respond to them.

ANNOUNCEMENT: To bring further peace of mind, our new Resilience SLA means we will collaborate with you to build and maintain a comprehensive data protection and cybersecurity strategy, with ongoing quarterly reviews to ensure best-practice adherence, ongoing vulnerability remediation, and more.

We will partner with you to make sure you have everything you need to minimize liabilities and exposure in your storage.

4. Pattern recognition that actually works requires models to change.

AI is advanced pattern recognition. But what happens when outliers trigger responses because the models don’t evolve?

Here’s the problem: Users on an enterprise network are generally quite predictable. Models learn this over time, but this means any irregularities can actually introduce problems into the security environment rather than solving them. (Say, a system that reacts and modifies firewall rules on the fly, locking legitimate users out of a system.) 

There still needs to be a solid degree of hands-on review (and experts to review it, and reams of data) to sift out false positives while also keeping an eye on the model over time.
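To make this concrete, here is a minimal, hypothetical sketch (all data and thresholds are illustrative, not from any real product) of outlier flagging that routes anomalies to a human review queue instead of acting on them automatically. It uses the median absolute deviation (MAD) rather than the mean, because a single extreme outlier inflates the mean and standard deviation and can mask itself from a plain z-score:

```python
# Illustrative sketch: flag statistical outliers for *human review*
# rather than automated response. Uses the median absolute deviation (MAD),
# which stays stable even when the outlier itself skews the average.
from statistics import median

def flag_for_review(values, threshold=3.5):
    """Return (index, value) pairs whose robust z-score exceeds threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # perfectly uniform history: nothing stands out
        return []
    return [
        (i, v)
        for i, v in enumerate(values)
        if abs(0.6745 * (v - med) / mad) > threshold
    ]

# A user who normally logs in ~10 times a day suddenly logs in 200 times.
daily_logins = [9, 11, 10, 8, 12, 10, 200]
review_queue = flag_for_review(daily_logins)  # -> [(6, 200)]
```

The key design choice is what happens next: the flagged events land in a queue for an analyst, not in an automated firewall change, which keeps false positives from locking out legitimate users.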

5. To avoid “automation without oversight,” AI at high levels of networking and security should always undergo “supervised learning.”

Putting too much trust in AI as part of a security system that can make changes on its own surfaces key concerns around AI and its tendency toward bias, hallucinations, and error. By design, these systems can make up new yes/no rules on the fly based on what they’re seeing as “patterns” or “outliers.” That means it could make changes you don’t intend, oversecure a system, or undersecure a system. (A slight anomaly in user behavior could cause a lockout—even out of your physical office building.)

These types of model-based adjustments could be even harder to detect, locate, or reverse than changes made by a human.

A bigger problem: If your permissions or access rules are complex, the AI might be unable to distinguish between exceptions and anomalies.
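One common safeguard against automation without oversight is an approval gate: automation may only propose a change, and a human must explicitly approve it before anything is enforced. A minimal sketch, with all class and rule names purely illustrative (not a real firewall API), might look like:

```python
# Hypothetical approval gate: automated detections can only *propose*
# changes; a human must approve before a rule is actually applied.
from dataclasses import dataclass, field

@dataclass
class ChangeGate:
    pending: list = field(default_factory=list)   # awaiting human review
    applied: list = field(default_factory=list)   # approved and enforced

    def propose(self, rule, reason):
        # Automation lands here; nothing is enforced yet.
        self.pending.append({"rule": rule, "reason": reason})

    def approve(self, index):
        # A human reviews the evidence and explicitly promotes the change.
        change = self.pending.pop(index)
        self.applied.append(change["rule"])
        return change["rule"]

gate = ChangeGate()
gate.propose("block 203.0.113.7", "anomalous login burst")
# gate.pending now holds the suggestion; nothing is blocked yet.
gate.approve(0)
# Only after explicit approval does the rule move to gate.applied.
```

Because every enforced rule passed through a human decision, the audit trail also answers the reversibility problem above: you know who approved what, and why.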

6. Human intervention will always be critical to instruct what is bad or good. (Or off limits.)

Supervised learning reintroduces the human element into the models, which takes us back to point #1: Humans need to be a part of the equation, interacting with the AI as users, and they remain one of the most important aspects of security. Users and AI in security have to go hand in hand.

What is a good result? What is a bad result? When should the AI take action? What data is off-limits? While it’s not wise to let AI run entirely on its own, the more you intervene, the more you risk reintroducing bias or error.

We can’t fully rely on AI or assume it’s giving us a complete and accurate picture. A human will have to make sense of the information to make the right decision, cautiously.

7. Anomaly detection at scale requires immense amounts of data to get right.

The data required to execute advanced pattern recognition is immense. To find that needle in the haystack, data must come from a multitude of sources. It’s not just the volume but also the breadth from which the data must be gathered and analyzed. 

Too many enterprises underestimate this, making AI as a security tool far from turnkey. Data platforms must offer incredible visibility, simplicity, and scale to meet these demands—without compromising privacy of sensitive information, be it personal or proprietary.

ANNOUNCEMENT: A new AI-Powered Security Assessment to enhance your operational security posture in storage management.


Our new Security Assessment provides deep visibility into fleet-level operational security risks. Leveraging aggregated intelligence from across 10,000 environments, the assessment presents a numerical security score from 0-5 to benchmark the security posture of the entire fleet. It then provides actionable recommendations and best practices that align with NIST 2.0 standards.


ANNOUNCEMENT: New AI-Powered Anomaly Detection

The new AI-Powered Anomaly Detection enhancement discovers threats such as ransomware attacks, unusual activity, malicious behavior, and denial-of-service attacks by identifying performance anomalies. Customers can then quickly pinpoint the last known good snapshot copy as a recovery point target to restore data, mitigating operational impact and reducing risk and guesswork.

This means you can detect issues quickly and be ready to respond if something happens.
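As a generic illustration of the recovery-point idea (not the product’s actual logic), choosing the “last known good” snapshot amounts to picking the most recent snapshot taken before the detected anomaly:

```python
# Illustrative only: select the most recent snapshot taken *before* the
# detected anomaly, i.e. the "last known good" recovery point.
def last_known_good(snapshot_times, anomaly_time):
    """Return the latest snapshot timestamp preceding the anomaly,
    or None if no clean snapshot exists."""
    candidates = [t for t in snapshot_times if t < anomaly_time]
    return max(candidates) if candidates else None

snapshots = [100, 200, 300, 400]          # snapshot timestamps (epoch hours)
recovery_point = last_known_good(snapshots, 350)  # -> 300
```

Anything after the anomaly timestamp is assumed tainted, so the function deliberately excludes it; the earlier the anomaly is detected, the less data sits between the recovery point and the attack.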

8. Data privacy policies will need to evolve.

Compromises to data privacy have been enterprises’ key concern with generative AI, and with good reason. If an AI system requires access to sensitive data to learn and make decisions, it can raise the risk of unintentional data exposure. AI can also inadvertently reveal patterns or insights about individuals that were not intended to be disclosed, leading to privacy violations.

The complexity of AI algorithms can also make it difficult for enterprises to fully understand or control how data is being used and processed. This opacity can hinder compliance with data protection regulations, like the GDPR or CCPA, which demand transparency in data processing activities. For enterprises, this will necessitate a robust framework for data governance, ethical AI use, and compliance with evolving data privacy laws.


AI Will Be Complex. Make Your Data Platform Simple.

While AI may not solve all data security concerns, a data storage platform designed for resiliency and visibility can—especially one that delivers on recovery before and after AI picks up the scent. 

Enterprises allocated 29% more budget toward new, innovative, and experimental security solutions this year. But it’s important not to overlook ground zero for protecting data, and Pure builds this degree of innovation and security into every aspect of its data storage platform.

“Better security solutions” were reported as one of the top five unaddressed challenges this year for enterprises to “save time, resources, and improve accuracy.”

Pure responds rapidly to threats. We power analytics and AI platforms for higher-volume ingest and correlation to arm threat hunters and forensics teams with speedy insights so you can contain attackers before they plant malware. We also secure data with full immutability, and protect data with strict access controls and granular, rapid data recovery.

Discover Pure Storage Disaster Recovery as-a-Service and our Ransomware Recovery SLA that guarantees shipment of clean arrays for recovery after an attack—just two ways you can ensure resiliency. And, if you are adopting and training AI models for security or otherwise, Pure Storage can also support you with storage optimized for AI workloads. As you integrate new data sources, new operational databases, new transformation workflows, and more, you may find you have storage requirements you didn’t account for. An agile storage system like Pure Storage can support such evolving demands, with strong integrations with hardware and software solutions across the AI ecosystem.
