7 Dangerous Assumptions About AI Security

7 Dangerous Assumptions About AI Security

Organizations worldwide are using RAG applications for internal use cases. The power of RAG applications stems from the use of proprietary data, which is inherently sensitive. To realize the benefits of AI investments, organizations have to invest in AI security. AI security is complex and it’s tempting to oversimplify it. Here are 7 dangerous assumptions that introduce security gaps and increase AI related risk:

  • Dangerous Assumption #1. Existing cybersecurity people, processes, and technology already cover or can easily cover AI models and applications without significant changes or new investments.
  • Dangerous Assumption #2. AI models and applications are “…just another app” that can be secured in the same way as existing on-premise, cloud, or SaaS applications.
  • Dangerous Assumption #3. AI model vendors and hosted model providers include sufficient security as a part of their services.
  • Dangerous Assumption #4. If there are any security gaps, they can be covered by adding new products to existing tooling. In other words, “…AI security is a product.”
  • Dangerous Assumption #5. There’s no need to worry about end-to-end security for internal RAG applications because “…post-training guardrails fix everything.”
  • Dangerous Assumption #6. For internal RAG applications, there’s nothing to worry about because “…the only users are employees.”
  • Dangerous Assumption #7. AI security is “a separate thing” that can be handled in isolation, without regard to any gaps in existing cybersecurity people, processes, and technology.

Dangerous Assumptions #1 & #2: It’s Already Covered; AI is Just Another App

AI technology is as different to previous IT systems as Newton’s laws and quantum mechanics. Historical algorithmic decision systems were largely deterministic (rule based), but deep learning models, including LLMs and GenAI, are intrinsically probabilistic and, to a degree, unpredictable. The OWASP Top 10 for LLM Applications, for example, illustrates entirely new types of security risks and attack surfaces that are not limited to prompt injection or jailbreaking. Ongoing research continues to reveal new issues, such as emergent misalignment, where model alignment breaks down across domains as a result of fine-tuning, or where reasoning models don’t accurately explain how they reached conclusions. Although not all of the issues are limited to security, all of them can be real-world business risks.

Dangerous Assumption #3: Service Provider Security is Good Enough

Everyone in the field of cybersecurity understands the cloud shared security model. Isn’t it the same for AI? Cloud providers offer a range of security features, but they are only building blocks. Except for the service provider’s own infrastructure, security is up to the customer. Similarly, hosted LLM service providers include default AI guardrails, which are certainly designed to address risks to their businesses, while providing a degree of assurance to customers. In fact, every IT organization has learned—many the hard way—that cloud and SaaS security are different to on-premise security. In 2024, 44% of organizations experienced a data breach (up from 39% in 2023) and 82% of the data breaches were in cloud environments. The two main causes were misconfigurations and over-privileged accounts. AI systems are no exception. The difference is that the quantity and sensitivity of data used for internal AI use cases is higher.

Dangerous Assumptions #4 & #5: AI Security is a Product; Guardrails Can Fix It

Vendors marketing post-training AI guardrails, such as Nvidia Nemo guardrails, Guardrails AI, and Google Model Armor, may give the impression that guardrails are sufficient to ensure data security. It’s important not to take vendor marketing statements at face value. Some guardrails are implemented using deep learning classifiers, and approaches like LLM-as-a-judge are certainly helpful, but most guardrails are based on regular expressions or hand-crafted logic. This gives attackers a clear advantage because there is no limit to the possible inputs that might bypass a given set of guardrails. There are many other AI security products on the market, underscoring the fact that AI security is not a product.

Dangerous Assumption #6: The Only Users are Trusted Employees

Employees may be subject to employment agreements and to applicable laws or regulations. However, there’s a difference between prescriptive controls—thou must or must not—and descriptive controls that can be enforced before the fact. Data theft by employees remains a top cybersecurity risk. In 2024, 7% of data breaches were directly caused by malicious insiders, e.g., data theft, and insider threats accounted for 15% of all data breaches

Dangerous Assumption #7: AI Security is a Separate Thing

The security of AI models and applications is only as good as an organization’s security posture. Limiting access to employees or to internal networks may reduce risk, but is meaningless without defense in depth (Cf. NIST SP 800-53 and ISO/IEC 27001). In 2024, 25% of data breaches were the result of successful system intrusions, including unauthorized network access. A substantial portion of data breaches are caused by stolen, compromised, or weak user credentials. Breaches can occur due to insiders abusing access and outside adversaries may recruit insiders. Adversaries use stolen credentials, which remain a top method access in data breaches, accounting for 16% of all data breaches in 2024. In addition to new, AI-specific vulnerabilities and attack surfaces, AI applications inherit the security strengths or weaknesses of their environment. AI security is only as good as the rest of security. 

In summary, AI technology is different to legacy applications and entails additional business risks. AI security is not automatically covered by existing cybersecurity tools or teams. Since the advent of deep learning, AI has quickly evolved into a fundamentally new technology, not just another app. AI model developers do not guarantee security. As with any other cloud service, cloud providers cannot guarantee the security of customer applications or data. Companies that fine-tune models or use RAG with sensitive data assume the associated risks. Using internal data in AI models, RAG applications, or AI agents increases insider threat risks and risks associated with compromised credentials. AI security is not a product. Post-training guardrails will not fix everything; they are necessary but not sufficient. AI security cannot be separated from an organizations’ overall security posture. AI security requirements start with the fundamentals but go beyond traditional controls.

Not to mention, I feel like people are creating products without enough security so then they can sell you security products to fix the security problems that they caused in the first place.

Like
Reply
Jourdan Clish

Technology & Business Polymath

3w

Points well taken, sir!

Like
Reply

To view or add a comment, sign in

More articles by Ron Herardian

  • Defense Against the Dark Arts: Distillation and Data Theft

    Knowledge distillation,[1] a technique that creates smaller, more efficient “student” models from larger, more complex…

    1 Comment
  • AI Models: You Break it, You Buy It

    Lawmakers across the globe have taken steps to ensure that AI models are secure, safe, trustworthy, fair, and…

    1 Comment
  • Open Source AI vs. AI Model Openness

    A single definition of open source AI is problematic because, unlike open source software, AI models include non source…

  • Can the AI Industry Regulate Itself?

    AI regulation in the United States lies at the intersection of existing federal laws, a growing patchwork of state…

  • Why Containers Will Replace VMs

    Sun Microsystems introduced containers more than ten years ago, but containers have recently become the hottest trend…

    25 Comments

Insights from the community

Others also viewed

Explore topics