Invisible Saboteurs in Networked Control Systems
In our increasingly connected world, many critical systems, from power grids to manufacturing plants, rely on networked communication between controllers and the machines they manage. Such systems are often modeled as discrete event systems (DES), which evolve by responding to sequences of events, like switches flipping or sensors triggering. But what happens when the communication channels themselves become battlegrounds for cyber attacks? Xiaojun Wang, Shaolong Shu, and Feng Lin, of the University of Shanghai for Science and Technology, Tongji University, and Wayne State University, have tackled this question head-on, revealing a subtle yet profound challenge: cyber attacks introduce so much uncertainty that the very language describing system behavior becomes nondeterministic, leaving the controller unable to pin down a single account of what the system is doing.
When the System’s Story Becomes a Choose-Your-Own-Adventure
Imagine a control system as a storyteller narrating a sequence of events—its “language.” Under normal conditions, this story is clear and deterministic: given a starting point and a set of rules, the system’s behavior unfolds predictably. But cyber attacks act like mischievous editors who can delete, insert, or rewrite parts of the story as it travels between the plant (the machine) and the supervisor (the controller). This meddling creates a nondeterministic narrative, where multiple possible storylines emerge from a single event sequence. The supervisor no longer knows exactly what’s happening or what commands are truly being executed.
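To make that nondeterminism concrete, here is a minimal Python sketch, not from the paper, that enumerates every observation string an attacker could produce from one true plant run by deleting, inserting, or replacing observable events. The event names and the "at most one edit" budget are illustrative assumptions.

```python
# Minimal sketch: how one true event sequence fans out into many possible
# observations when an attacker may delete, insert, or replace events.
# Event names and the single-edit budget are illustrative assumptions.

def corrupted_observations(run, alphabet, max_edits=1):
    """Return all strings the supervisor might observe for a given plant run,
    assuming the attacker applies up to `max_edits` single-event edits."""
    frontier = {tuple(run)}
    for _ in range(max_edits):
        next_frontier = set(frontier)  # the attacker may also do nothing
        for s in frontier:
            for i in range(len(s)):
                # deletion: remove the i-th observed event
                next_frontier.add(s[:i] + s[i + 1:])
                # replacement: overwrite the i-th event with another symbol
                for e in alphabet:
                    next_frontier.add(s[:i] + (e,) + s[i + 1:])
            for i in range(len(s) + 1):
                # insertion: splice a fake event in at position i
                for e in alphabet:
                    next_frontier.add(s[:i] + (e,) + s[i:])
        frontier = next_frontier
    return frontier

run = ("start", "heat", "stop")      # the true story told by the plant
alphabet = {"start", "heat", "stop"}
print(len(corrupted_observations(run, alphabet)))  # many possible storylines
```

Even with a single edit, the supervisor faces dozens of candidate storylines for one three-event run; this fan-out is exactly what makes the supervised behavior nondeterministic.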
Wang and colleagues describe this phenomenon using two conceptual boundaries: the “large language,” which represents the broadest set of possible behaviors the system might exhibit under attack, and the “small language,” the guaranteed core of behaviors the system will always perform despite interference. While previous research focused on keeping the system safe by constraining the large language, this new work zeroes in on the small language—ensuring that even under attack, the system can reliably accomplish essential tasks.
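The two boundaries can be pictured with finite sets of strings. In the sketch below, which is an illustration rather than the paper's formal definitions, each attack scenario induces one possible closed-loop language; the large language is their union and the small language is their intersection. The scenario languages are made up for the example.

```python
# Illustrative sketch: the "large" and "small" languages as the union and
# intersection of the closed-loop behaviors that different attack scenarios
# could induce. The scenario languages below are made-up finite examples.

scenario_languages = [
    {"", "a", "ab", "abc"},         # behavior if the attacker stays quiet
    {"", "a", "ab"},                # behavior if the attacker deletes "c"
    {"", "a", "ab", "abc", "abd"},  # behavior if the attacker inserts "d"
]

# Large language: everything the system MIGHT do under some attack.
large = set().union(*scenario_languages)

# Small language: what the system is GUARANTEED to do under every attack.
small = set.intersection(*scenario_languages)

print(sorted(large))  # ['', 'a', 'ab', 'abc', 'abd']
print(sorted(small))  # ['', 'a', 'ab']
```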
Redefining Control and Observation in a World of Deception
To navigate this fog of uncertainty, the researchers introduce two new concepts: CA-S-controllability and CA-S-observability. These extend classical ideas of controllability (the ability to guide the system’s behavior) and observability (the ability to accurately perceive the system’s state) into the treacherous terrain of cyber attacks.
CA-S-controllability ensures that the supervisor can still enforce control commands despite attackers potentially enabling or disabling certain controllable events. CA-S-observability guarantees that the supervisor’s observations—already muddled by attackers who can insert, delete, or replace sensor signals—are sufficient to make correct control decisions. Together, these conditions form a rigorous framework to determine whether a supervisor can be designed to maintain a desired baseline of system behavior, the small language, even when under siege.
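As rough intuition only, one can check whether a desired finite language survives the plant's uncontrollable continuations while treating any event the attacker can force as effectively uncontrollable. The helper below is the classical controllability test over finite, prefix-closed languages, not the paper's exact CA-S condition.

```python
# Rough intuition only: the classical controllability test over finite,
# prefix-closed languages, with attacker-forcible events folded into the
# uncontrollable set. The paper's CA-S conditions are more refined.

def is_controllable(K, L, uncontrollable, attacker_forcible):
    """K is controllable wrt plant language L if no uncontrollable event
    (including events the attacker can force) can push a K-string out of K
    while staying inside the plant's behavior."""
    effective_uc = uncontrollable | attacker_forcible
    for s in K:
        for e in effective_uc:
            if s + e in L and s + e not in K:
                return False  # the plant can escape K and we cannot stop it
    return True

L = {"", "a", "ab", "abc", "au"}   # plant behavior ("u" is uncontrollable)
K = {"", "a", "ab"}                # desired behavior
print(is_controllable(K, L, {"u"}, set()))  # False: "a"+"u" escapes K
print(is_controllable(K, L, set(), set()))  # True: every escape is blockable
```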
Why the Small Language Matters More Than Ever
In traditional supervisory control, if the desired language is controllable and observable with respect to the plant, a supervisor can be designed to achieve it exactly. But cyber attacks break this neat picture. Even the plant's own language may fail to be CA-S-controllable, meaning the guaranteed behavior of the supervised system can shrink dramatically. This is counterintuitive: the system you thought you controlled might be slipping through your fingers without you realizing it.
Wang and team show that by carving out the largest (supremal) sublanguage of the plant's behavior that remains CA-S-controllable, denoted Lna(G), they can restore a foothold for control. If the required tasks (the specification language) fit within this sublanguage, a supervisor can be synthesized that ensures these tasks are always performed, no matter how the attacker tries to rewrite the story, as the sketch below illustrates.
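A standard way to carve out such a sublanguage is iterative pruning to a fixpoint. The sketch below applies that classical recipe over finite, prefix-closed languages as an approximation of the idea behind Lna(G); the paper's actual construction for the cyber-attack setting may differ in its details.

```python
# Classical fixpoint pruning over finite, prefix-closed languages, shown as
# an approximation of the idea behind Lna(G); the paper's construction for
# the cyber-attack setting may differ in its details.

def supremal_controllable(K, L, effective_uc):
    """Iteratively remove strings of K that violate controllability (or that
    lost their prefix) until nothing changes; the fixpoint is the largest
    controllable sublanguage of K."""
    K = set(K)
    changed = True
    while changed:
        changed = False
        for s in sorted(K, key=len, reverse=True):
            # violation: an uncontrollable event leads out of K inside L
            escapes = any(s + e in L and s + e not in K
                          for e in effective_uc)
            # prefix-closure: drop strings whose parent was already removed
            orphaned = s != "" and s[:-1] not in K
            if escapes or orphaned:
                K.discard(s)
                changed = True
    return K

L = {"", "a", "ab", "au", "auu"}
K = {"", "a", "ab"}
print(sorted(supremal_controllable(K, L, {"u"})))  # ['']: only "" survives
```

Here the whole requirement collapses to the empty run because "a" can always be hijacked by the uncontrollable "u"; that collapse is the formal face of the "shrinking guaranteed behavior" described above.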
From Theory to Practice: Designing Resilient Supervisors
The paper doesn’t stop at theory. It provides constructive methods to build supervisors based on state estimates derived from the altered observations. These supervisors anticipate the worst-case manipulations by attackers and only enable events that remain safe and observable under all attack scenarios. The authors prove that if the required language is CA-S-controllable and CA-S-observable, their supervisor achieves exactly that language as the small language of the supervised system under attack.
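The flavor of such a supervisor can be sketched as conservative state estimation: track every plant state consistent with the possibly attacker-altered observations so far, and enable a controllable event only if it is safe from all of them. The toy automaton, the attack model (each observation may be genuine, inserted, or a replacement), and the safety predicate below are all illustrative assumptions, not the paper's construction.

```python
# Flavor sketch of state-estimate-based supervision under attack: keep every
# plant state consistent with observations that may have been tampered with,
# and enable an event only if it is safe from ALL candidate states.
# The automaton, attack model, and safety set are illustrative assumptions.

TRANSITIONS = {  # (state, event) -> next state of a toy plant
    ("idle", "start"): "running",
    ("running", "heat"): "hot",
    ("running", "stop"): "idle",
    ("hot", "stop"): "idle",
}
UNSAFE_STATES = {"hot"}

def update_estimate(estimate, observed_event):
    """Next state estimate, assuming each observation may be genuine,
    attacker-inserted (plant did not move), or a replacement (plant took
    some other event)."""
    new_estimate = set()
    for state in estimate:
        nxt = TRANSITIONS.get((state, observed_event))
        if nxt is not None:
            new_estimate.add(nxt)        # observation was genuine
        new_estimate.add(state)          # observation was inserted (fake)
        for (s, _e), t in TRANSITIONS.items():
            if s == state:
                new_estimate.add(t)      # observation replaced a real event
    return new_estimate

def enabled(estimate, event):
    """Enable `event` only if it cannot reach an unsafe state from ANY
    state the plant might currently be in."""
    for state in estimate:
        if TRANSITIONS.get((state, event)) in UNSAFE_STATES:
            return False
    return True

estimate = {"idle"}
estimate = update_estimate(estimate, "start")
print(enabled(estimate, "heat"))  # False: a candidate state could go hot
```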
Moreover, when the required language isn’t directly achievable, they identify the smallest superlanguage that is CA-S-controllable and CA-S-observable, offering a best-possible approximation. This nuanced approach balances safety and functionality, ensuring systems don’t just avoid disaster but continue to perform critical operations.
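The dual construction, growing the requirement just enough to make it achievable, can also be pictured as a fixpoint: keep adding the uncontrollable continuations the plant can force until the set closes. The finite-language sketch below illustrates only that classical idea; the paper's CA-S version additionally folds in observability constraints.

```python
# Dual fixpoint sketch over finite, prefix-closed languages: grow K just
# enough that every uncontrollable continuation the plant can force is
# already included. Classical idea only; the paper's CA-S version also
# accounts for observability.

def infimal_controllable(K, L, effective_uc):
    """Smallest controllable superlanguage of K within plant language L:
    add forced uncontrollable continuations until a fixpoint is reached."""
    K = set(K)
    changed = True
    while changed:
        changed = False
        for s in list(K):
            for e in effective_uc:
                if s + e in L and s + e not in K:
                    K.add(s + e)  # the plant can force this; we must admit it
                    changed = True
    return K

L = {"", "a", "au", "auu", "ab"}
K = {"", "a"}
print(sorted(infimal_controllable(K, L, {"u"})))  # ['', 'a', 'au', 'auu']
```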
Implications Beyond the Lab
This work shines a light on the fragile dance between control and security in cyber-physical systems. It reveals that cyber attacks don’t just threaten to disrupt or damage systems—they fundamentally alter the very fabric of how systems can be controlled and observed. By formalizing these effects and providing tools to design resilient supervisors, Wang, Shu, and Lin offer a roadmap for engineers to build systems that can withstand deception and manipulation.
In a world where cyber attacks are not a question of if but when, understanding and controlling the “small language” of system behavior could mean the difference between a system that fails silently and one that continues to serve its essential purpose, no matter the chaos around it.
As the authors look ahead, they plan to explore how to simultaneously guarantee safety (using the large language) and task performance (using the small language), pushing the frontier of resilient control even further.