7 Dangerous Assumptions About AI Security
Organizations worldwide are using RAG applications for internal use cases. The power of RAG applications stems from the use of proprietary data, which is inherently sensitive. To realize the benefits of AI investments, organizations have to invest in AI security. AI security is complex and it’s tempting to oversimplify it. Here are 7 dangerous assumptions that introduce security gaps and increase AI related risk:
Dangerous Assumptions #1 & #2: It’s Already Covered; AI is Just Another App
AI technology is as different to previous IT systems as Newton’s laws and quantum mechanics. Historical algorithmic decision systems were largely deterministic (rule based), but deep learning models, including LLMs and GenAI, are intrinsically probabilistic and, to a degree, unpredictable. The OWASP Top 10 for LLM Applications, for example, illustrates entirely new types of security risks and attack surfaces that are not limited to prompt injection or jailbreaking. Ongoing research continues to reveal new issues, such as emergent misalignment, where model alignment breaks down across domains as a result of fine-tuning, or where reasoning models don’t accurately explain how they reached conclusions. Although not all of the issues are limited to security, all of them can be real-world business risks.
Dangerous Assumption #3: Service Provider Security is Good Enough
Everyone in the field of cybersecurity understands the cloud shared security model. Isn’t it the same for AI? Cloud providers offer a range of security features, but they are only building blocks. Except for the service provider’s own infrastructure, security is up to the customer. Similarly, hosted LLM service providers include default AI guardrails, which are certainly designed to address risks to their businesses, while providing a degree of assurance to customers. In fact, every IT organization has learned—many the hard way—that cloud and SaaS security are different to on-premise security. In 2024, 44% of organizations experienced a data breach (up from 39% in 2023) and 82% of the data breaches were in cloud environments. The two main causes were misconfigurations and over-privileged accounts. AI systems are no exception. The difference is that the quantity and sensitivity of data used for internal AI use cases is higher.
Recommended by LinkedIn
Dangerous Assumptions #4 & #5: AI Security is a Product; Guardrails Can Fix It
Vendors marketing post-training AI guardrails, such as Nvidia Nemo guardrails, Guardrails AI, and Google Model Armor, may give the impression that guardrails are sufficient to ensure data security. It’s important not to take vendor marketing statements at face value. Some guardrails are implemented using deep learning classifiers, and approaches like LLM-as-a-judge are certainly helpful, but most guardrails are based on regular expressions or hand-crafted logic. This gives attackers a clear advantage because there is no limit to the possible inputs that might bypass a given set of guardrails. There are many other AI security products on the market, underscoring the fact that AI security is not a product.
Dangerous Assumption #6: The Only Users are Trusted Employees
Employees may be subject to employment agreements and to applicable laws or regulations. However, there’s a difference between prescriptive controls—thou must or must not—and descriptive controls that can be enforced before the fact. Data theft by employees remains a top cybersecurity risk. In 2024, 7% of data breaches were directly caused by malicious insiders, e.g., data theft, and insider threats accounted for 15% of all data breaches.
Dangerous Assumption #7: AI Security is a Separate Thing
The security of AI models and applications is only as good as an organization’s security posture. Limiting access to employees or to internal networks may reduce risk, but is meaningless without defense in depth (Cf. NIST SP 800-53 and ISO/IEC 27001). In 2024, 25% of data breaches were the result of successful system intrusions, including unauthorized network access. A substantial portion of data breaches are caused by stolen, compromised, or weak user credentials. Breaches can occur due to insiders abusing access and outside adversaries may recruit insiders. Adversaries use stolen credentials, which remain a top method access in data breaches, accounting for 16% of all data breaches in 2024. In addition to new, AI-specific vulnerabilities and attack surfaces, AI applications inherit the security strengths or weaknesses of their environment. AI security is only as good as the rest of security.
In summary, AI technology is different to legacy applications and entails additional business risks. AI security is not automatically covered by existing cybersecurity tools or teams. Since the advent of deep learning, AI has quickly evolved into a fundamentally new technology, not just another app. AI model developers do not guarantee security. As with any other cloud service, cloud providers cannot guarantee the security of customer applications or data. Companies that fine-tune models or use RAG with sensitive data assume the associated risks. Using internal data in AI models, RAG applications, or AI agents increases insider threat risks and risks associated with compromised credentials. AI security is not a product. Post-training guardrails will not fix everything; they are necessary but not sufficient. AI security cannot be separated from an organizations’ overall security posture. AI security requirements start with the fundamentals but go beyond traditional controls.
Cloud Fellow at Dito
3wNot to mention, I feel like people are creating products without enough security so then they can sell you security products to fix the security problems that they caused in the first place.
Technology & Business Polymath
3wPoints well taken, sir!