JP Morgan Chase CISO's Security Wake-Up Call: Security should be baked in, not bolted on.
When the CISO of a bank that spends $17 billion annually on technology speaks, the industry listens. Pat Opet s recent open letter from JP Morgan Chase & Co isn't just another technology memo, it's a clarion call from within a financial giant that commands one of the largest technology budgets in the world.[1]
The modern ‘software as a service’ (SaaS) delivery model is quietly enabling cyber attackers and – as its adoption grows – is creating a substantial vulnerability that is weakening the global economic system. - Pat Opet, CISO, JPMC
He highlights three points:
After working alongside JPMC's security team during my FireEye and Mandiant days, I've seen firsthand how they identify emerging threats before they hit the mainstream. Their massive scale, supporting 57,000+ technologists across operations in over 100 countries, gives them unparalleled visibility into the security ecosystem. With $600 million dedicated specifically to cybersecurity and over 3,000 security professionals on staff, JPM's warnings carry extraordinary weight.[2][3]
What grabbed me about Pat's letter wasn't just his SaaS warnings, but how directly they apply to AI adoption. The security gaps he highlights become far more dangerous in AI environments.
Breaking Down Security Walls
Pat clearly identifies the core problem: modern integration patterns have torn down traditional security boundaries. These patterns use identity protocols like OAuth to create direct connections between third-party services and internal systems.
At JPMorganChase, we've seen the warning signs firsthand. Over the past three years, our third-party providers experienced a number of incidents within their environments. These incidents across our supply chain required us to act swiftly and decisively, including isolating certain compromised providers, and dedicating substantial resources to threat mitigation. - Pat Opet, CISO, JPMC
This creates serious risks for AI security. When authentication and authorization collapse into simplified interactions, AI systems gain unprecedented access to sensitive data. These weak integration points become perfect targets for prompt injection attacks, which have become one of the most exploited AI attack vectors in 2025.[4]
Opet gives a simple but powerful example: a calendar app can access corporate data through "read only roles" and "authentication tokens." Now imagine that same access path exploited by attackers to manipulate an AI system with access to financial data or security controls.
The Real-World Impact Is Already Here
The consequences aren't theoretical. Recent reports show prompt injection attacks have grown dramatically, enabling attackers to override model behavior, leak confidential data, and execute malicious instructions by manipulating input.[5]
In regulated industries like healthcare and finance, these vulnerabilities create not just security risks but compliance failures that can trigger significant data breach penalties.
Three Must-Have AI Security Elements
We need a comprehensive approach that aligns with the warnings in Opet's letter:
1. Visibility: Finding Hidden AI Tools
Shadow AI—unauthorized tools that employees use without IT oversight—creates significant risks. This is exactly what Opet means when discussing "firms' sensitive internal resources."
Organizations need to:
According to security experts, about 1 in 3 data breaches in 2024 involved shadow data—information outside company-controlled systems—making this visibility critical.[7]
2. Control: Safe AI Adoption
Traditional security boundaries have been dismantled, as Opet points out. For AI, we need new control mechanisms that:
Recommended by LinkedIn
3. Response: Protecting Models and Data
Opet warns that "attackers are now actively targeting trusted integration partners." This threat has become increasingly serious for AI systems, with recent research documenting sophisticated AI supply chain attacks targeting everything from training data to model weights.[8]
Companies need:
Build Security In From The Start
The core message in Opet's letter is clear: "Providers must urgently reprioritize security, placing it equal to or above launching new products."
For AI, this isn't optional—it's essential. Security must be part of AI systems from the beginning, not added later. This means:
Your Action Plan For Monday Morning
Don't wait for a major incident to act. Start with these three concrete steps:
The companies that thrive in the AI era won't be those that move fastest—they'll be those that build on secure foundations.
The Stakes Have Never Been Higher
We must establish new security principles and implement robust controls that enable the swift adoption of cloud services while protecting customers from their providers' vulnerabilities. Traditional measures like network segmentation, tiering, and protocol termination were durable in legacy principles but may no longer be viable today in a SaaS integration model. Instead, we need sophisticated authorization methods, advanced detection capabilities, and proactive measures to prevent the abuse of interconnected systems. - Pat Opet, CISO, JPMC
As Opet emphasizes, this is a challenge that "demands our collective immediate attention." The risks he identifies in SaaS models are amplified by AI's power and reach.
Organizations that build AI on secure foundations will thrive. Those that rush AI deployment without addressing the fundamental security flaws Opet identifies will face potentially catastrophic consequences.
The warning from JPMorgan Chase should serve as a catalyst for the entire security industry. We have a narrow window to get this right—to build AI systems where visibility, control, and response capabilities aren't afterthoughts but core components of the architecture.
As Opet concludes, responding to this challenge requires us to act "decisively, collaboratively, and immediately."
I'm curious: Has your organization taken concrete steps to address these AI security challenges? What's working? What's not? Let's share our experiences in the comments.
Sharat Ganesh is the Head of Product Marketing at WitnessAI. He previously held senior leadership roles in Product Management, Product Marketing and Engineering at Lacework, Google, Tanium and Mandiant/FireEye.
[1] JPMorgan, "Technology," 2025 [2] SecurityWeek, "With $600 Million Cybersecurity Budget, JPMorgan Chief Endorses AI and Cloud," January 2023 [3] TechTarget, "JPMorgan Chase CISO explains why he's an 'AI optimist,'" 2023 [4] NeuralTrust, "The 10 Most Critical AI Security Risks in 2025," March 2025 [5] IBM, "Security roundup: Top AI stories in 2024," April 2025 [6] Secureframe, "Latest Data Breach Statistics," January 2025 [7] Secureframe, "Latest Data Breach Statistics," January 2025 [8] SecurityWeek, "Cyber Insights 2025: Artificial Intelligence," January 2025
Founder @ SentraCoreAI™ Foundation (SCAiF) | SCAiF Academy™ + SentraEd™ | Auditing AI, CyberSec, Compliance, Autonomous Sys and Robotics — for public good, global safety, the future of humanity. | Political Strategist
2dThat’s exactly where SentraCoreAI™ Foundation (SCAiF™) steps in. SentraCoreAI™ is pro-AI, pro-business, and pro-human trust. We were early—investors didn’t get it. So instead of waiting, we built the oversight infrastructure the world now urgently needs—and we’re giving it away. The recent open letter from JPMorganChase confirms everything we’ve been warning about: the unchecked integration of SaaS, AI, and cloud platforms is eroding the very boundaries that once protected global systems. The failure isn’t just technical—it’s architectural, ethical, and systemic. We provide: – Free global audits – Forensic Black Reports™ – SentraLoop™ trust observability – Daily public education We don’t replace tools. We don’t sell products. We don’t refer services. We audit, score, and educate—so your systems remain yours, but accountable. In 2025, 42% of companies paused or reversed AI adoption due to legal risk, confusion, and lack of oversight. We estimate 4 out of 5 of those projects could’ve safely moved forward with the trust infrastructure we built. We’re especially focused on helping developing nations, small businesses, critical infrastructure, and public agencies. The future of AI trust will be audited, earned, and shared.