JP Morgan Chase CISO's Security Wake-Up Call: Security should be baked in, not bolted on.

JP Morgan Chase CISO's Security Wake-Up Call: Security should be baked in, not bolted on.

When the CISO of a bank that spends $17 billion annually on technology speaks, the industry listens. Pat Opet s recent open letter from JP Morgan Chase & Co isn't just another technology memo, it's a clarion call from within a financial giant that commands one of the largest technology budgets in the world.[1]

The modern ‘software as a service’ (SaaS) delivery model is quietly enabling cyber attackers and – as its adoption grows – is creating a substantial vulnerability that is weakening the global economic system. - Pat Opet, CISO, JPMC

He highlights three points:

  • Software providers must prioritize security over rushing features. Comprehensive security should be built in or enabled by default.
  • We must modernize security architecture to optimize SaaS integration and minimize risk.
  • Security practitioners must work collaboratively to prevent the abuse of interconnected systems.

After working alongside JPMC's security team during my FireEye and Mandiant days, I've seen firsthand how they identify emerging threats before they hit the mainstream. Their massive scale, supporting 57,000+ technologists across operations in over 100 countries, gives them unparalleled visibility into the security ecosystem. With $600 million dedicated specifically to cybersecurity and over 3,000 security professionals on staff, JPM's warnings carry extraordinary weight.[2][3]

What grabbed me about Pat's letter wasn't just his SaaS warnings, but how directly they apply to AI adoption. The security gaps he highlights become far more dangerous in AI environments.

Breaking Down Security Walls

Pat clearly identifies the core problem: modern integration patterns have torn down traditional security boundaries. These patterns use identity protocols like OAuth to create direct connections between third-party services and internal systems.

At JPMorganChase, we've seen the warning signs firsthand. Over the past three years, our third-party providers experienced a number of incidents within their environments. These incidents across our supply chain required us to act swiftly and decisively, including isolating certain compromised providers, and dedicating substantial resources to threat mitigation. - Pat Opet, CISO, JPMC

This creates serious risks for AI security. When authentication and authorization collapse into simplified interactions, AI systems gain unprecedented access to sensitive data. These weak integration points become perfect targets for prompt injection attacks, which have become one of the most exploited AI attack vectors in 2025.[4]

Opet gives a simple but powerful example: a calendar app can access corporate data through "read only roles" and "authentication tokens." Now imagine that same access path exploited by attackers to manipulate an AI system with access to financial data or security controls.

The Real-World Impact Is Already Here

The consequences aren't theoretical. Recent reports show prompt injection attacks have grown dramatically, enabling attackers to override model behavior, leak confidential data, and execute malicious instructions by manipulating input.[5]

In regulated industries like healthcare and finance, these vulnerabilities create not just security risks but compliance failures that can trigger significant data breach penalties.

Three Must-Have AI Security Elements

We need a comprehensive approach that aligns with the warnings in Opet's letter:

1. Visibility: Finding Hidden AI Tools

Shadow AI—unauthorized tools that employees use without IT oversight—creates significant risks. This is exactly what Opet means when discussing "firms' sensitive internal resources."

Organizations need to:

  • Discover all AI applications across the company
  • Monitor conversations between people and AI systems
  • Track what data is being shared with which models

According to security experts, about 1 in 3 data breaches in 2024 involved shadow data—information outside company-controlled systems—making this visibility critical.[7]

2. Control: Safe AI Adoption

Traditional security boundaries have been dismantled, as Opet points out. For AI, we need new control mechanisms that:

  • Understand the context of AI use, not just which tools are being used
  • Keep data boundaries clear even as information flows between systems
  • Send sensitive queries only to secure, approved models

3. Response: Protecting Models and Data

Opet warns that "attackers are now actively targeting trusted integration partners." This threat has become increasingly serious for AI systems, with recent research documenting sophisticated AI supply chain attacks targeting everything from training data to model weights.[8]

Companies need:

  • Strong defenses against prompt injection
  • Tools to ensure AI systems behave as intended
  • Monitoring for unusual AI behavior

Build Security In From The Start

The core message in Opet's letter is clear: "Providers must urgently reprioritize security, placing it equal to or above launching new products."

For AI, this isn't optional—it's essential. Security must be part of AI systems from the beginning, not added later. This means:

  • Including security teams from day one of AI projects
  • Testing for AI-specific threats
  • Creating guardrails that allow functionality while maintaining boundaries

Your Action Plan For Monday Morning

Don't wait for a major incident to act. Start with these three concrete steps:

  1. Conduct an AI census - Document every AI tool and model in use across your organization, including those deployed by individual teams without central IT approval.
  2. Implement basic guardrails - Even simple measures like detecting and redacting PII in prompts can dramatically reduce your exposure. Nearly half (46%) of all breaches involve customer personal identifiable information.[6]
  3. Create an AI security working group - Bring together security, data science, and business leaders to develop governance that enables innovation while addressing the risks Opet highlights.

The companies that thrive in the AI era won't be those that move fastest—they'll be those that build on secure foundations.

The Stakes Have Never Been Higher

We must establish new security principles and implement robust controls that enable the swift adoption of cloud services while protecting customers from their providers' vulnerabilities. Traditional measures like network segmentation, tiering, and protocol termination were durable in legacy principles but may no longer be viable today in a SaaS integration model. Instead, we need sophisticated authorization methods, advanced detection capabilities, and proactive measures to prevent the abuse of interconnected systems. - Pat Opet, CISO, JPMC

As Opet emphasizes, this is a challenge that "demands our collective immediate attention." The risks he identifies in SaaS models are amplified by AI's power and reach.

Organizations that build AI on secure foundations will thrive. Those that rush AI deployment without addressing the fundamental security flaws Opet identifies will face potentially catastrophic consequences.

The warning from JPMorgan Chase should serve as a catalyst for the entire security industry. We have a narrow window to get this right—to build AI systems where visibility, control, and response capabilities aren't afterthoughts but core components of the architecture.

As Opet concludes, responding to this challenge requires us to act "decisively, collaboratively, and immediately."

I'm curious: Has your organization taken concrete steps to address these AI security challenges? What's working? What's not? Let's share our experiences in the comments.


Sharat Ganesh is the Head of Product Marketing at WitnessAI. He previously held senior leadership roles in Product Management, Product Marketing and Engineering at Lacework, Google, Tanium and Mandiant/FireEye.


[1] JPMorgan, "Technology," 2025 [2] SecurityWeek, "With $600 Million Cybersecurity Budget, JPMorgan Chief Endorses AI and Cloud," January 2023 [3] TechTarget, "JPMorgan Chase CISO explains why he's an 'AI optimist,'" 2023 [4] NeuralTrust, "The 10 Most Critical AI Security Risks in 2025," March 2025 [5] IBM, "Security roundup: Top AI stories in 2024," April 2025 [6] Secureframe, "Latest Data Breach Statistics," January 2025 [7] Secureframe, "Latest Data Breach Statistics," January 2025 [8] SecurityWeek, "Cyber Insights 2025: Artificial Intelligence," January 2025

Frank Carrasco

Founder @ SentraCoreAI™ Foundation (SCAiF) | SCAiF Academy™ + SentraEd™ | Auditing AI, CyberSec, Compliance, Autonomous Sys and Robotics — for public good, global safety, the future of humanity. | Political Strategist

2d

That’s exactly where SentraCoreAI™ Foundation (SCAiF™) steps in. SentraCoreAI™ is pro-AI, pro-business, and pro-human trust. We were early—investors didn’t get it. So instead of waiting, we built the oversight infrastructure the world now urgently needs—and we’re giving it away. The recent open letter from JPMorganChase confirms everything we’ve been warning about: the unchecked integration of SaaS, AI, and cloud platforms is eroding the very boundaries that once protected global systems. The failure isn’t just technical—it’s architectural, ethical, and systemic. We provide: – Free global audits – Forensic Black Reports™ – SentraLoop™ trust observability – Daily public education We don’t replace tools. We don’t sell products. We don’t refer services. We audit, score, and educate—so your systems remain yours, but accountable. In 2025, 42% of companies paused or reversed AI adoption due to legal risk, confusion, and lack of oversight. We estimate 4 out of 5 of those projects could’ve safely moved forward with the trust infrastructure we built. We’re especially focused on helping developing nations, small businesses, critical infrastructure, and public agencies. The future of AI trust will be audited, earned, and shared.

Like
Reply

To view or add a comment, sign in

More articles by Sharat Ganesh

Insights from the community

Others also viewed

Explore topics