Process-Informed Security: Why the Traditional Security Triad Fails in OT

Introduction

The CIA triad (Confidentiality, Integrity, Availability) tends to generate strong opinions, yet it keeps surfacing in discussions on cyber-physical risk. From the perspective of the process industries, such as refining, chemical plants, pipelines, and offshore facilities, it deserves a careful re-examination.

The CIA triad was developed for enterprise IT in the 1980s, yet it continues to appear in industrial cybersecurity conversations, often promoted by engineers with an OT background who carried it over into process automation contexts. While the intent was to align with established security frameworks, the result is a mismatch between the model's original purpose and the realities of industrial operations.

Some suggest extending the triad with additional concepts such as Safety, Reliability, or Resilience. These suggestions acknowledge valid concerns, but they obscure the more fundamental issue: industrial environments depend on tightly coupled control between digital and physical domains. Effective security must therefore reflect how these systems actually operate in practice, where automated control logic, real-time feedback, and equipment coordination are directly linked to physical outcomes. Ensuring safety and reliability requires safeguarding the core operational functions: those that allow the system to detect relevant physical conditions, apply correct control responses, and maintain operation within its intended parameters.

Why the Triad Falls Short in Industrial Contexts

Traditionally, process safety has focused on random hardware or component failures, with risks mitigated through redundancy and probabilistic design. Cyber-physical attacks, however, introduce intentional and coordinated manipulations of automation components, which traditional safety models did not account for. This has led to a false separation between OT security and process safety. In reality, industrial automation is an integral part of the physical process, and failures of control and safety functions, whether due to malfunction or malicious interference, are inherently process safety concerns. Process safety and cybersecurity should therefore be addressed as an integrated discipline. A cyber-physical attack does not merely expose data, as in IT; it can trigger unsafe states, damage equipment, or halt production by deliberately bypassing controls or orchestrating multiple failures in a specific sequence.

Shifting from IT to OT Mindset

In enterprise IT, data confidentiality is often paramount, and risks are primarily information-centric. In contrast, industrial systems operate at the intersection of software and physical action. Here, the automation system doesn't just support the process — it is part of it. A compromise doesn’t just lead to data loss or system downtime; it can result in incorrect actuation, unsafe physical states, or damage to equipment. A breach of controllability, observability, or operability doesn't just degrade performance — it threatens the integrity and safety of the process itself. Recognizing this distinction is key to moving from an IT-centric model to a cyber-physical risk perspective.

Dr. Marina Krotofil’s Approach

In her 2015 research presentation "Damn Vulnerable Chemical Process," Dr. Marina Krotofil illustrated how attackers can exploit structural weaknesses in the controllability, observability, and operability of industrial systems. These aspects are not abstract properties — they define the practical ability to run, monitor, and influence a physical process within safe limits. Her work showed that when these functional dependencies are compromised, attackers can cause real-world damage without needing to break traditional IT security models.

Rather than extending the CIA triad, Krotofil's approach highlights the need to align cybersecurity with the core operational mechanisms of industrial systems. By focusing on controllability, observability, and operability, we move the conversation toward physical impact and process stability — ensuring protection of what ultimately matters: safety, continuity, and plant integrity.


The Origin and Purpose of the CIA Triad — and Its Misapplication in Industrial Contexts

The CIA triad was developed to guide security practices in enterprise IT by focusing on three fundamental properties:

  • Confidentiality: Preventing unauthorized access to information, ensuring that sensitive data remains private.
  • Integrity: Ensuring that data remains accurate, untampered, and trustworthy throughout its lifecycle.
  • Availability: Ensuring that data and systems are accessible to authorized users when needed.

Together, these principles aim to maintain trust in digital information and the systems that manage it — particularly in environments where the primary asset is data itself.

This model was developed for protecting information in enterprise environments, not for safeguarding control over physical processes. In operational technology (OT), the automation system is not merely an overlay — it is part of the process. Control functions operate valves, pumps, compressors, furnaces, and reactors in real time, translating software logic directly into physical actions.

In industrial settings, control systems depend on a combination of data, algorithms, and procedures that represent and influence the physical state of the process — including pressures, levels, temperatures, flows, and batch operations. Any deviation — whether due to malfunction, misconfiguration, or a cyber attack — can alter automated system behavior or mislead operators, resulting in unsafe or unintended actions.

Because control logic and sensor input are so tightly bound to physical reality, even subtle manipulations can drive the process out of its intended range. The separation between the digital and physical is nearly nonexistent; manipulating the information is effectively manipulating the system. The core objective remains the same: to maintain safe and stable operation within defined limits. In this context, process safety, operability, and equipment protection depend on the integrity and availability of the system’s control and information functions — not for their own sake, but because they anchor safe physical performance.

Applying the CIA model without adaptation can result in focusing on abstract properties rather than the functions that directly govern process safety and operational behavior.

IT vs. Industrial Systems: The Essential Difference

  • In IT: “Integrity” typically refers to ensuring that files, records, or communications have not been altered, deleted, or corrupted — whether accidentally or maliciously — and that they remain consistent with their original or authorized state.
  • In industrial systems: "Integrity" must also address whether physical parameters (e.g., pressure, temperature setpoints) remain valid and consistent with the intended design and expected performance. This includes adherence to physical constraints such as vessel volume, flow capacity, and material strength: not just the correctness of values, but whether those values remain within safe, enforceable limits. Operating conditions are part of this picture, but they are defined by those design and performance expectations rather than separate from them; in other words, integrity here means the system maintains operational integrity. The sketch after this list makes the distinction concrete.
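
To illustrate the difference, here is a minimal sketch in Python (the tag name and limits are hypothetical, not from the article): a setpoint write can be perfectly well-formed and untampered in the IT sense yet still violate integrity in the OT sense, because it falls outside the equipment's design envelope.

```python
# Minimal sketch: OT integrity includes enforceable design limits, not just
# untampered data. The tag name and limits are hypothetical assumptions.
DESIGN_LIMITS = {"reactor_pressure_bar": (0.0, 12.0)}  # vessel design range

def accept_setpoint(tag: str, value: float) -> bool:
    """Accept a setpoint only if it stays inside the design envelope."""
    low, high = DESIGN_LIMITS[tag]
    return low <= value <= high

print(accept_setpoint("reactor_pressure_bar", 8.0))   # True: within design
print(accept_setpoint("reactor_pressure_bar", 15.0))  # False: well-formed
                                                      # data, unsafe value
```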

This contrast highlights why information-centric security approaches can fail in cyber-physical systems: they ignore that data and physical action are inseparable. A corrupted value is not just incorrect information — it's a distorted view of the physical process, and often, a trigger for unsafe or unintended behavior.


Why Extending the Triad Doesn’t Help

To fix this mismatch, some propose adding new terms, like Safety, Reliability, or Resilience, to the triad. However, this introduces unnecessary complexity:

  • Safety is an outcome, not a security property. It depends on maintaining operational integrity and availability of the system.
  • Reliability is a performance metric, not a threat model element.
  • Resilience is a system characteristic — it describes how quickly a system recovers from disruption, rather than forming a foundational security layer.

All these concepts already hinge on core process control attributes: observability, controllability, and operability. Adding them to the CIA triad may appear helpful, but it risks diluting the focus needed to manage physical process risk effectively.


Reframing CIA Using Process Control Fundamentals

Building on a process-aware security view, we could map CIA to process-control equivalents:

Mapping each CIA property to a control system view:

  • Confidentiality: Protection of sensitive operational or business information from unauthorized access — relevant when disclosure has potential physical, financial, or regulatory consequences in specific sectors.
  • Integrity: Confidence that process data, control signals, and actuation faithfully represent and enforce the intended physical behavior.
  • Availability: Ability to observe and intervene in the process in real time.

However, it’s more effective to frame industrial security through the lens of cyber-physical risk:

  1. Controllability – Can we influence or adjust the process in real time?
  2. Observability – Can we see and interpret what’s happening in the process?
  3. Operability – Can we run the process within its designed operating limits and maintain expected performance?

These functional capabilities define whether the system can operate safely, respond to deviations, and recover within design constraints. In cyber-physical systems, there is virtually no separation between control information and physical behavior — a compromised input or instruction is effectively a compromised process action. Understanding how controllability, observability, and operability can be disrupted is essential for identifying how cyber intrusions can lead to real-world consequences: unsafe states, production loss, or equipment damage.

Framed this way, controllability, observability, and operability form a functional security triad for industrial systems — grounded not in information theory, but in physical process control:

  1. Controllability. Security question: Can we still influence the process as needed? Why it matters: If actuation is blocked, hijacked, or misrouted, the process can't be stabilized or returned to safe conditions. Security loss: Actuator override, logic manipulation, interlock bypass.
  2. Observability. Security question: Can we still trust what we see? Why it matters: Operators and control systems rely on sensor data to assess the state of the process; false data leads to false decisions. Security loss: Sensor spoofing, HMI blackout, display corruption.
  3. Operability. Security question: Can the process still run within its designed operating window and constraints? Why it matters: Operability is about whether the system remains within enforceable physical and design limits. This includes respecting constraints such as temperature, pressure, material strength, and fluid level, and ensuring that digital representations (e.g., parameter limits, sequence logic, or control algorithm parameters such as proportional gain, integral and derivative time, and sampling frequency) stay aligned with the physical dynamics of the process. For example, improper tuning, or sampling rates that violate the Nyquist criterion, can lead to instability, oscillations, or unsafe control behavior (see the sketch after this list). If these parameters no longer reflect the actual process dynamics and limitations, the system may continue to operate in unsafe or unstable ways, even if controllability and observability appear intact. Security loss: Parameter reconfiguration beyond safe bounds, violation of operating windows, misalignment between control logic and physical capability.
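
To make the sampling-rate point concrete, here is a minimal sketch in Python (a hypothetical first-order process with illustrative gains; the numbers are mine, not from the article or Krotofil's work). The same proportional controller that is stable at a fast sampling period drives the loop unstable when the period is reconfigured to exceed the process time constant:

```python
# Minimal sketch: zero-order-hold discretization of a first-order process
# dx/dt = (-x + gain * u) / tau under proportional control. All parameters
# are illustrative assumptions, not values from the article.
import math

def simulate(Ts, steps=40, tau=1.0, gain=1.0, Kp=2.0, setpoint=1.0):
    a = math.exp(-Ts / tau)      # discrete plant pole for sample period Ts
    b = gain * (1.0 - a)         # input coupling accumulated over one period
    x = 0.0
    for _ in range(steps):
        u = Kp * (setpoint - x)  # proportional control action
        x = a * x + b * u        # process response over one sample period
    return x

# Fast sampling (Ts = 0.1 * tau): the loop settles to a stable value
# (with the steady-state offset expected of proportional-only control).
print(f"Ts = 0.1*tau -> x = {simulate(Ts=0.1):+.3f}")
# Slow sampling (Ts = 2 * tau): same plant, same controller, but the
# discretized loop is unstable and the output diverges with oscillation.
print(f"Ts = 2.0*tau -> x = {simulate(Ts=2.0):+.3e}")
```

The attacker never touches the physical equipment: changing a single configuration value, the sampling period or a gain, is enough to push an otherwise healthy loop into oscillation.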

Each of these dimensions represents a critical functional dependency that supports safe and correct operation of industrial systems:

  • Each can be individually compromised to disrupt the system.
  • Each supports the other — for example, observability without controllability has limited value, and operability relies on both being intact.
  • All three must remain aligned to maintain process safety and avoid physical consequences.

As with CIA, these dimensions are interdependent. But unlike CIA, which is grounded in abstract information properties, the COO triad is based on how industrial processes actually function. It reflects how systems act, react, and stay within safe boundaries in the physical world.


Functional Losses as the Real Security Hazards

When viewed from a process control perspective, cyber-physical threats manifest as disruptions to the system’s ability to maintain controllability, operability, or observability — whether through manipulation of digital inputs or through effects that degrade the system’s physical function.

  • Loss of Controllability: Blocking or hijacking actuation signals, overriding safety functions, disabling emergency shutdowns, interfering with manual or automated operation of control algorithms, or manipulating interlocks and permissives that restrict or allow process actions.
  • Loss of Operability: Corrupting setpoints and logic, misaligning operator interfaces — or physical degradation such as failed pumps, valves, or actuators that prevent operation within intended ranges. This also includes performance issues like excessively slow or fast valve travel rates — which can result in ineffective control or physical damage such as hydraulic shock — as well as restricted control margins that compromise the system’s ability to respond effectively within design constraints.
  • Loss of Observability: Sensor spoofing, sensor failure, HMI blackout, alarm suppression, or loss of integrity of operator information displays that misrepresent the actual process state.

These failures directly lead to unsafe states, production loss, or equipment damage. They also resonate with operations and safety teams, who can more clearly see how a cyber intrusion translates to real-world risks.

Example Attack Scenario: Consider an attacker who gains remote access to a water treatment plant’s PLC. By spoofing sensor data (loss of observability), they hide the actual chemical concentrations while simulating underdosing. Meanwhile, they override the chemical dosing control logic (loss of controllability), causing excessive chemical addition. Operators, unaware of the real chemical state, continue normal operations. This compromises water quality, posing a direct public health risk to the community. In addition, prolonged overdosing may lead to corrosion and degradation of tanks and piping. The issue isn’t unauthorized access to sensitive information — it’s the use of that access to manipulate data and control functions, ultimately disrupting the physical process.
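
As a minimal sketch of this scenario (hypothetical values and a deliberately simplified dosing loop, not taken from any real plant), the following shows how the two losses combine: the controller and the operators act on a fabricated low reading while the real concentration climbs.

```python
# Minimal sketch: a dosing loop acting on a spoofed concentration reading.
# The display shows underdosing, so dosing continues while the real
# concentration climbs. All values are hypothetical.
def dosing_rate(measured_ppm, target_ppm=2.0, gain=0.5):
    """Proportional dosing: pump harder when the reading is low."""
    return max(0.0, gain * (target_ppm - measured_ppm))

actual_ppm = 2.0                      # process starts on target
for minute in range(5):
    spoofed_ppm = 0.5                 # attacker reports a low value
    rate = dosing_rate(spoofed_ppm)   # controller trusts the lie
    actual_ppm += rate                # real concentration keeps rising
    print(f"t={minute} min  displayed={spoofed_ppm:.1f} ppm  "
          f"actual={actual_ppm:.2f} ppm")
```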


Applying the COO Model to Industrial Risk Practice

1. Map Process Functions to COO Requirements

Action: Identify critical process functions that affect safety or production (e.g., reactor temperature control, compressor surge control, pipeline pressure regulation). For each function, outline your needs for:

  • Controllability: Which actuators or valves must remain reliable to maintain safe operating limits?
  • Observability: Which sensors or instrumentation detect unsafe conditions or key process variables?
  • Operability: What operating windows (pressures, temperatures, flows, etc.) must not be breached, and which control loops or system configurations enforce those bounds?

Benefit: Establishes a functional baseline for the system, making it easier to pinpoint vulnerabilities most likely to cause unsafe events.
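
As one way to capture such a baseline (a sketch only; the tags, instruments, and operating windows are hypothetical), each critical function can be recorded with its COO requirements so that gaps and single points of failure stand out:

```python
# Minimal sketch: a functional baseline recording, per critical process
# function, what must stay controllable, observable, and operable.
# All tags, instruments, and operating windows are hypothetical.
coo_baseline = {
    "reactor_temperature_control": {
        "controllability": ["TV-101 cooling valve", "emergency quench valve"],
        "observability": ["TT-101 reactor temperature", "TT-102 jacket temp"],
        "operability": {"temperature_degC": (20.0, 180.0)},  # design window
    },
    "pipeline_pressure_regulation": {
        "controllability": ["PCV-201 pressure control valve"],
        "observability": ["PT-201 line pressure"],
        "operability": {"pressure_bar": (5.0, 60.0)},
    },
}

# A review can then walk the baseline function by function and ask which
# cyber events would remove each capability.
for function, needs in coo_baseline.items():
    print(function, "->", ", ".join(needs["controllability"]))
```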

2. Integrate Cybersecurity with Process Hazards Analysis

Action: Merge cybersecurity considerations into safety assessments (e.g., HAZOP, LOPA, Bowtie). During a HAZOP review, ask:

  • Could an attacker (or fault) override or block the relevant actuator (Controllability)?
  • Could a sensor be spoofed or hidden (Observability)?
  • Could setpoints or control tuning be pushed beyond safe thresholds (Operability)?
  • Could operating ranges be reversed or reconfigured outside safe limits (Operability)?

Benefit: Ensures safety and security teams jointly assess avenues to unsafe conditions, including malicious manipulations that defeat traditional safeguards.
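
One lightweight way to keep these questions attached to the safety study (a sketch; the node, field names, and safeguards are hypothetical, not a standard HAZOP schema) is to carry them as fields on each deviation record, so safety and security reviewers work from the same row:

```python
# Minimal sketch: one HAZOP deviation carrying the COO cyber questions.
# The node, fields, and safeguards are hypothetical assumptions.
hazop_row = {
    "node": "Reactor R-101 cooling loop",
    "deviation": "Less cooling flow",
    "cyber_questions": {
        "controllability": "Can the cooling valve be overridden or blocked?",
        "observability": "Can the flow transmitter be spoofed or hidden?",
        "operability": "Can setpoints or tuning be pushed past safe limits?",
    },
    "existing_safeguards": ["high-temperature interlock", "operator alarm"],
    # Safeguards hosted in the attacked controller offer little protection:
    "cyber_defeatable": ["high-temperature interlock"],
}

for element, question in hazop_row["cyber_questions"].items():
    print(f"{element}: {question}")
```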


3. Perform a “COO-Focused” Threat Model

Action: Identify threats or attack vectors explicitly targeting each COO element:

  • Controllability: Unauthorized changes to control logic or setpoints in PLCs/DCS, hijacking signals to block or replace legitimate operator commands, or bypassing interlocks and permissives that ensure safe operation.
  • Observability: Sensor data injection at the field device level, alarm suppression, or tampering with historized data (e.g., process data used for trending and analysis).
  • Operability: Attacks that prevent the system from running within its design limits and expected performance. Examples might include:
      ◦ Modifying or disabling required process sequences (e.g., start-up, shutdown, or batch operations), causing unsafe transitions or blocking normal production.
      ◦ Reconfiguring equipment operating windows (pressure, temperature, level, and flow ranges) or mechanical constraints, pushing the process outside design boundaries and risking damage or unsafe states.
      ◦ Restricting essential utilities (cooling water, instrument air, etc.) or interfering with critical support systems, forcing partial operation or unplanned shutdowns.

  • Rank threats by their potential consequences (e.g., catastrophic release or equipment damage) and their likelihood (e.g., exposure via segmentation gaps or firmware vulnerabilities). Document them in a risk matrix or, preferably, in a Bowtie diagram; a minimal scoring sketch follows.
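
As a first-pass illustration of that ranking (the threats and the 1-to-5 scales are hypothetical; a Bowtie or risk register tool would normally hold this), a simple consequence-times-likelihood score can order COO-targeted threats before deeper analysis:

```python
# Minimal sketch: first-pass ranking of hypothetical COO-targeted threats
# by consequence x likelihood, both on illustrative 1-5 scales.
threats = [
    # (description, COO element, consequence, likelihood)
    ("Setpoint change beyond design pressure", "Controllability", 5, 2),
    ("Spoofed level transmitter on feed drum", "Observability", 4, 3),
    ("Disabled compressor start-up sequence", "Operability", 3, 3),
]

for desc, element, cons, like in sorted(
        threats, key=lambda t: t[2] * t[3], reverse=True):
    print(f"risk={cons * like:2d}  [{element}] {desc}")
```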

Benefit: Highlights the highest-risk targets threatening key process functions, guiding targeted mitigation efforts.


From CIA Triad Thinking to COO Triad Thinking

The CIA triad reflects priorities from enterprise IT, where data protection is paramount. In contrast, the COO triad (Controllability, Operability, and Observability) is anchored in the functional realities of process automation and cyber-physical risk. In industrial cybersecurity, the goal isn't just to prevent data breaches but to prevent unsafe process conditions and ensure the continuity of operations — core objectives for process safety and production integrity.

  • Controllability ensures the capacity for accurate, timely interventions. Without control, even a well-monitored system can drift into hazardous conditions.
  • Operability ensures the system runs within its physical design limits and constraints, reflecting real-world capabilities.
  • Observability offers correct situational awareness to both human operators and automated systems.

Rather than adding buzzwords to CIA, the COO triad re-centers security on the fundamental conditions for safe, stable operation of industrial processes. Each dimension can be compromised individually, yet all three must work together to maintain process safety, production continuity, and equipment integrity.

This framework clarifies why OT security and process safety must converge. Control systems are inseparable from physical operations, and a failure in control—whether accidental or malicious—can quickly become a process safety incident. Addressing cyber-physical risk thus requires treating OT security and process safety as a unified discipline.

Adopting the COO triad is more than just changing terminology; it represents a shift to align cybersecurity with the engineering principles that shape industrial process control and safety. By emphasizing controllability, operability, and observability, we protect systems according to the same logic used in their design—maintaining stable operation within intended limits and preventing failures that could endanger people, equipment, and the environment. In essence, securing cyber-physical systems means ensuring they function and perform exactly as designed.


Aaron W. Bayles

Senior Manager, OT Security at Accenture

3w

Dr. Marina Krotofil COO strikes again

Hans-Christian Mose Jehg

Cyber, Software and System Engineer in IIoT and OT - with a keen liking for processes, and great at embedded software development. Computational Thinker.

1mo

Absolutely. I like the fact that you make the distinction between Functional Safety, with its probabilistic nature, and Cyber Security, whose nature is more akin to sabotage... and as you say, the models have led to a false separation between Functional Safety and Cyber Security. I think cyber in physical systems should be treated with the same kind of formalism as Functional Safety, and that cyber can benefit greatly from the experience gained over 50 years (see BS5304). There is an argument that FS was formally born with EN954-1 in 1996; notwithstanding, there is a lot of ripe experience for cyber to learn from, or rather to combine with...
