See It, Hack It, Sort It: Protecting AI with Open Source Security

The rapid advancement of AI brings immense opportunities—but also unprecedented security challenges. At the State of Open Conference 2025, Marcus Tenorio, Security Engineering Manager at ControlPlane, took the stage to explore how open-source security tools can protect AI infrastructure.

Why AI Security Matters More Than Ever

AI systems increasingly rely on GPUs and cloud-native architectures, exposing them to new forms of attack. Model poisoning, GPU exploits, and AI infrastructure vulnerabilities are evolving threats that demand urgent attention. How do we defend against them?

1. See It: Understanding AI’s Security Landscape

🔹 Model Poisoning – Attackers manipulate AI models, leading to biased or harmful outputs. In healthcare, this could mean misdiagnosing cancer, with life-threatening consequences.

🔹 GPU Exploits – GPUs are not just computing powerhouses—they’re also attack surfaces. Memory attacks and side-channel exploits can leak sensitive data.

🔹 Cloud-Native AI Risks – AI runs on Kubernetes, cloud platforms, and distributed systems. Attackers exploit weak configurations and unmonitored resource access.

2. Hack It: How AI Systems Get Compromised

To defend AI, we need an offensive mindset—understanding how attackers think.

🔸 Reconnaissance – Attackers scan AI systems for vulnerabilities, exploiting cloud misconfigurations.

🔸 Training Data Poisoning – Malicious actors alter datasets to manipulate AI decision-making.

🔸 GPU-Based Attacks – AI workloads introduce new security blind spots, making it essential to monitor GPU activity.
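One practical defense against training data poisoning is integrity-checking datasets against a trusted manifest of hashes before each training run. The sketch below is illustrative and not from the talk; the manifest format and function names are assumptions.

```python
import hashlib

def sha256_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(manifest: dict[str, str]) -> list[str]:
    """Return the paths whose current hash no longer matches the trusted manifest.

    `manifest` maps file path -> expected SHA-256 hex digest, recorded when the
    dataset was last audited. A non-empty result means possible tampering.
    """
    return [path for path, digest in manifest.items()
            if sha256_file(path) != digest]
```

In practice the manifest itself must be stored and signed outside the training environment, otherwise an attacker who can alter the data can alter the hashes too.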

3. Sort It: Defending AI with Open Source Security

Open-source security tools are essential for securing AI infrastructure. Projects like Falco and Cilium help detect and respond to AI threats.
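To make the Falco approach concrete, here is a minimal sketch of a custom Falco rule that alerts when a process outside an allowlist opens an NVIDIA GPU device file. The rule name, the process allowlist, and the reliance on Falco's default `open_read` macro are assumptions for illustration, not rules shipped with Falco or shown in the talk.

```yaml
- list: allowed_gpu_procs
  items: [python3, triton, nvidia-smi]

- rule: Unexpected GPU Device Access
  desc: A process outside the allowlist opened an NVIDIA GPU device file
  condition: >
    open_read and fd.name startswith /dev/nvidia
    and not proc.name in (allowed_gpu_procs)
  output: >
    Unexpected GPU device access
    (proc=%proc.name file=%fd.name user=%user.name)
  priority: WARNING
  tags: [ai, gpu]
```

A rule like this turns "monitor GPU activity" from a slogan into an auditable, version-controlled policy.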

🔹 Falco: Real-time threat detection for AI workloads—monitoring system calls, GPU activity, and anomalies.

🔹 Cilium: Kubernetes-native security that controls network access and prevents unauthorized AI model tampering.

🔹 Observability & Metrics: AI security starts with monitoring energy consumption, GPU heat levels, and workload patterns—unusual spikes can signal an attack.
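On the Cilium side, network access to a model server can be locked down with a `CiliumNetworkPolicy` so that only an approved gateway can reach it. The labels, namespace-free metadata, and port below are hypothetical placeholders, not taken from the talk:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: model-server-ingress
spec:
  # Applies to pods serving the model
  endpointSelector:
    matchLabels:
      app: model-server
  ingress:
    # Only the inference gateway may connect, and only on TCP 8080
    - fromEndpoints:
        - matchLabels:
            app: inference-gateway
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```

Default-deny policies like this shrink the blast radius if a workload in the cluster is compromised, since arbitrary pods can no longer probe or tamper with the model endpoint.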

The Future of AI Security is Open

As AI adoption grows, security cannot be an afterthought. Open-source security tools offer transparency, adaptability, and collective intelligence—essential for staying ahead of AI threats.

🚀 Join the conversation. Secure AI the open-source way.

#OpenSource #StateOfOpenCon #SOOCon25 #OpenUK

