In the messy world of cybersecurity, diagrams called Attack Trees have become a trusty compass. They place a hacker's goal at the top, then branch into subgoals and, finally, the leaf steps whose combinations a defender can read as possible attack paths. They promise qualitative insight (what must happen for a breach to succeed) and quantitative cues (how long or how costly it might be).
The new work from the University of Twente and collaborators asks a hard question: what if the language these trees speak isn’t anchored to a shared picture of reality? It’s not about more complex math; it’s about clearer concepts. The authors interrogate four kinds of ontological mischief in static attack trees: semantic overload, missing concepts, thin modeling guidance, and a fragmentation that keeps tools from talking to one another. They propose grounding risk modeling in a robust ontology — the Common Ontology of Value and Risk, built on UFO — and lay out a path to interoperability that could change how risk is modeled across teams and organizations. The study is led by Italo Oliveira of the University of Twente, with Stefano Nicoletti and Gal Engelberg as co-authors, and collaborators from the University of Haifa and Accenture.
To follow their argument is to watch a familiar toolbox try to stand on a more solid floor. Attack Trees have long been used to visualize attacker goals and to compute metrics like minimal time or cost to breach a system. But as the authors show, without a shared, explicit theory of what every symbol really means, those calculations can look precise while actually being flimsy in interpretation. The paper doesn’t trash Attack Trees; it invites a reformulation: make the underlying concepts explicit, make the relationships among concepts explicit, and connect the models to well-founded theories of risk and value. It’s a call for ontological clarity that could ripple through how organizations study risk, share data, and combine multiple assessment tools.
In short, the authors argue that risk modeling should be more than a neat diagram. It should be a narrative that can be read by machines and humans alike, with the same vocabulary, the same anchors, and the same rules of interpretation. If that sounds abstract, think of it as building a city on a shared map: you can navigate, compare neighborhoods, and plan infrastructure only if everyone agrees on what each block and landmark actually is. The paper speaks to this shared map through the lens of a specific, practical backbone: COVER, a common ontology of value and risk that sits atop UFO, the Unified Foundational Ontology. The result is a framework that not only explains “what” an attack tree shows, but also “why” the nodes and edges stand for what they do in the real world.
Ontological grounding for attack trees
At first glance, an Attack Tree is simple: a root node at the top, branching into subgoals, with leaves representing the smallest, indivisible steps an attacker might take. But the paper argues that simplicity can hide a serious problem. The same symbol can drift between meanings as soon as you start mapping it to a real risk scenario. A node labeled Get PIN might be read as a goal, a step, a type of event, or even a proposition. The edges and gates that connect nodes — the ANDs and ORs that supposedly encode how subgoals combine — carry a tangle of possible interpretations as well. In other words, the diagram is a kind of ambiguous shorthand unless it is anchored to a shared ontology that fixes what each symbol stands for in the world of risk.
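To make that basic structure concrete, here is a minimal sketch of an attack tree with AND/OR gates and a bottom-up check of whether a given set of completed leaf steps reaches the root goal. The node names and the tiny API are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    gate: str = "LEAF"          # "LEAF", "AND", or "OR"
    children: list = field(default_factory=list)

def achieved(node, done):
    """Bottom-up evaluation: does the set of completed leaf steps reach this node?"""
    if node.gate == "LEAF":
        return node.label in done
    results = [achieved(child, done) for child in node.children]
    return all(results) if node.gate == "AND" else any(results)

# "Steal money" succeeds if the attacker gets the PIN AND the card,
# where the PIN can come from shoulder surfing OR phishing.
tree = Node("Steal money", "AND", [
    Node("Get PIN", "OR", [Node("Shoulder surf"), Node("Phish victim")]),
    Node("Get card"),
])

print(achieved(tree, {"Phish victim", "Get card"}))  # True
print(achieved(tree, {"Shoulder surf"}))             # False
```

Even this toy version hints at the problem the authors raise: nothing in the data structure says whether a leaf denotes an event, an event type, or a goal; the code only fixes the boolean combinatorics.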
That is where COVER and UFO come in. COVER is a core ontology of value and risk, built on the idea that risk is relational and contextual: threats, vulnerabilities, assets, and the people who care about them all participate in a web of dispositions, events, and states. UFO, the foundational ontology, distinguishes types from individuals: a specific database instance or a particular hacker is an individual, while a vulnerability or a capability is modeled as a disposition that inheres in such individuals. Put together, they offer a vocabulary and a set of rules for how risk concepts relate to each other, and how to talk about them unambiguously across tools and datasets.
Oliveira and colleagues argue that this ontological backbone matters because it makes three promises Attack Trees alone cannot guarantee: semantic clarity, disciplined coverage of domain concepts (like assets, stakeholders, and vulnerabilities), and a clear path for interoperability among models and data sources. It’s the difference between a map that looks like a sketch and a map that actually encodes the terrain in a way that a computer can reason about and a human can audit. The authors describe risk as a relational phenomenon: threats and opportunities, losses and gains, and the contextual situations that tilt the odds of one outcome over another. A robust ontology makes those relationships explicit, which in turn makes any computed security metrics—minimal attack time, cost, or probability—more trustworthy and comparable across contexts.
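To see why interpretation matters for those metrics, here is a standard bottom-up propagation scheme for minimal attack cost (a common textbook scheme, not the paper's own formalization; the tree and cost figures are invented): OR takes the cheapest child, AND pays for all of them. The numeric answer is only as trustworthy as the reading fixed for each gate:

```python
# A tree is ("LEAF", label) or (gate, [children]) with gate "AND" or "OR".
def min_cost(tree, leaf_cost):
    """Minimal attack cost: OR picks the cheapest branch, AND pays for every branch."""
    kind = tree[0]
    if kind == "LEAF":
        return leaf_cost[tree[1]]
    costs = [min_cost(child, leaf_cost) for child in tree[1]]
    return sum(costs) if kind == "AND" else min(costs)

# Minimal cost to "steal money": the card, plus the cheaper route to the PIN.
tree = ("AND", [
    ("OR", [("LEAF", "shoulder surf"), ("LEAF", "phish victim")]),
    ("LEAF", "steal card"),
])
costs = {"shoulder surf": 50, "phish victim": 10, "steal card": 100}
print(min_cost(tree, costs))  # 110
```

If an AND gate were instead read as steps a single attacker performs with shared resources, or an OR as alternatives with different success odds, the "right" propagation rules would change, which is exactly the kind of hidden assumption the ontology is meant to surface.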
The four ontological blind spots
The core of the paper is a careful audit of four broad shortcomings in the standard Attack Tree language when viewed through the COVER/UFO lens. These aren’t nitpicks; they are structural gaps that affect how models can be built, read, and reused.
The first blind spot is semantic overload. In practice, a single AT node can be interpreted as a goal, a situation, an event, an event type, or a proposition. The gates that connect nodes — has, formedBy, and their logical cousins — can be read in any number of ways as well. The same formula n ∨ (t ∧ p) could be read as a simple boolean calculus, or as a story about an attacker’s intention and the events that propel it. Without explicit domain knowledge embedded in the model, the same diagram can justify different, even contradictory, readings in different organizations. That ambiguity cripples the ability to compare models or to reuse a diagram in a new data environment.
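That ambiguity is easy to make tangible. The sketch below (illustrative, not drawn from the paper) evaluates the same conjunction under two readings found in the attack-tree literature: an unordered one, where both child steps merely have to occur, and a sequential one (often written SAND), where they must occur in order. The same observed trace of events satisfies one reading and not the other:

```python
def and_unordered(trace, first, second):
    """Unordered reading: both steps appear somewhere in the trace."""
    return first in trace and second in trace

def and_sequential(trace, first, second):
    """Sequential reading (SAND-style): 'first' must occur before 'second'."""
    return (first in trace and second in trace
            and trace.index(first) < trace.index(second))

# Observed events: the attacker obtains the PIN, then installs a skimmer.
trace = ["obtain PIN", "install skimmer"]
print(and_unordered(trace, "install skimmer", "obtain PIN"))   # True
print(and_sequential(trace, "install skimmer", "obtain PIN"))  # False
```

Two analysts looking at the same diagram and the same log could thus reach opposite verdicts about whether an attack "happened," with nothing in the diagram itself to arbitrate.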
The second blind spot is ontological incompleteness. The AT language lacks essential domain concepts in a way that forces analysts to improvise. Consider the notions of vulnerability, asset, and stakeholder — core pieces in any real risk assessment. If you map an AT leaf to a “security measure,” you may be forced to juggle multiple interpretations of what that leaf actually refers to. The authors show with concrete illustrations that you can’t reason about risk properly if you don’t know who owns what, what could be exploited, and how those elements relate to the attacker’s opportunities and the defender’s objectives. This isn’t a pedantic quarrel about terminology; it’s about whether a model can actually support sound decisions when new data arrives from different sources or when you fuse multiple risk techniques together.
The third blind spot is limited modeling guidance. The Attack Tree language provides a way to decompose goals, but it gives little in the way of patterns or grammatical constraints that keep models aligned with a domain ontology. Without modeling guidance that ties leaves, gates, and root goals to well-defined concepts, analysts face a heavy cognitive burden and a high risk of drift—where a diagram that started out precise gradually becomes a patchwork of informal labels and ad hoc interpretations. Mapping an AT to established taxonomies like MITRE ATT&CK or CAPEC can help, but it does not on its own cure the deeper ontological misalignments. The authors emphasize that you need patterns grounded in risk theory itself, not just a catalog of techniques, to build meaningful models that can travel across teams and tools.
The fourth blind spot is a yawning gap in semantic interoperability. Data from different sources—internal logs, public taxonomies, enterprise risk assessments—need to talk to each other. If each source speaks a slightly different language, stitching them into a single, coherent risk picture becomes a manual, error-prone task. The lack of a common, ontologically sound foundation means two teams could produce visually similar Attack Trees that, in fact, encode different meanings. The result is not just awkward tools; it’s miscommunication at scale, a barrier to sharing insights and orchestrating coordinated responses across networks of partners and vendors.
From theory to practice: a path toward interoperable risk models
So what do Oliveira and colleagues actually propose to fix these holes? Their answer is both conceptual and practical: embed Attack Trees in a broader, ontologically grounded framework for risk modeling. The core idea is to treat Attack Trees as a service layered on top of a comprehensive, domain-aligned knowledge base rather than as an isolated DSL. They sketch a general approach that blends Conceptual Modeling, Applied Ontology, Formal Methods, and Semantic Web technologies to create an interoperable backbone for risk analysis.
Concretely, they advocate grounding the model in COVER and UFO, then representing the domain as an RDF graph populated with the concepts of risk: risk subjects, threat objects, assets, vulnerabilities, capabilities, contexts, and events. The Attack Tree would still exist as a construct for reasoning about scenarios, but its leaves, gates, and root would be interpreted through the meta-model supplied by the ontology. In other words, you map the AT into a well-understood ontological scaffold, run analyses on the data stored in a linked-data framework, and then translate results back into a form that security teams can act on. This preserves the practical utility of ATs while eliminating the semantic drift that has plagued them for decades.
One of the paper’s practical provocations is to envision an ecosystem where an Attack Tree model is generated or refined not in isolation but as a query over an underlying knowledge graph. Thanks to Semantic Web technologies, analysts could pose a SPARQL query to retrieve all relevant threat events and their relationships, then render an attack scenario in the Galileo-like notation used in fault trees. The authors are careful to note that their proposal is not a rejection of existing methods but a pathway to unify them under a single, coherent ontology. The goal is to enable seamless data exchange, reproducible analyses, and more robust reasoning across disparate risk management tools.
Crucially, the authors argue that this ontologically grounded approach need not replace current tools; it can augment them. Attack trees would become one of several services offered by a broader DSL, one grounded in a well-founded theory of risk propagation and value. This modularity matters in practice: organizations often rely on a patchwork of techniques (fault trees, Bayesian networks, risk matrices, and so on). If these techniques can be orchestrated through a shared ontology, teams could switch perspectives without losing context, leading to a more resilient security posture and more transparent governance of risk data.
Why this matters now
The push toward ontological rigor arrives at a moment when risk data flows are more diverse than ever. Enterprises accumulate logs and asset inventories from cloud providers, endpoints, identity systems, and third-party services. Public taxonomies like MITRE ATT&CK or CAPEC offer valuable catalogs of adversary techniques, but as this paper argues, relying on them alone risks embedding an unspoken, informal ontology into critical decisions. The authors’ framework invites risk managers to anchor these catalogs in a shared theory of risk, to distinguish between threats that are probable and those that simply exist in the imagination of a model, and to reason about how different pieces of information interact to produce a loss or a gain for different stakeholders.
There is more at stake than tidy diagrams. The proposed path could enable genuine interoperability across organizations, tools, and datasets. If a vendor’s risk calculator and a customer’s threat intelligence feed can be reconciled through a common ontology, then security recommendations become more consistent, audits become more reproducible, and security investments can be compared on a level playing field. It also opens the door to integrating multiple risk techniques—fault trees, Bayesian analyses, and even machine-assisted reasoning—without forcing practitioners to translate between incompatible languages. That is the promise of an ontology-guided DSL: a shared language for risk that remains flexible enough to accommodate diverse contexts and evolving threat landscapes.
There’s a human payoff as well. When a model is built atop a clear theory of risk, the explanations that accompany its conclusions become more credible. The same metrics that drive decisions—cost, time, likelihood, damage—can be traced back to explicit concepts and relationships, making it easier for teams to justify why a particular defense is recommended or why a risk severity score shifts after a new data source is added. And if the world does change, as it always does, updating a model becomes less error-prone because the new facts can be reconciled against a well-defined ontology rather than a collection of loosely connected labels.
The paper emerges from a collaboration spearheaded by the Semantics, Cybersecurity, & Services group at the University of Twente in the Netherlands, with contributions from the University of Haifa and Accenture’s Center of Advanced AI, among others. The lead author, Italo Oliveira, and his co-authors Stefano Nicoletti and Gal Engelberg lay out a concrete research program: ground risk modeling in COVER/UFO, build interoperable data representations in RDF/OWL, and treat Attack Trees as a service of a broader, ontology-driven DSL. It’s a thoughtful blend of theory and pragmatism, a reminder that in security as in science, precision means little if you don’t first agree on the language you’re using to describe the world.
As the field moves forward, the debate that Oliveira and colleagues ignite will likely become louder. Will organizations adopt ontology-based risk modeling as a standard, and if so, how long will it take to migrate legacy tools and datasets? The authors acknowledge that this is not a trivial transition. It requires cultural shifts as much as technical ones, and it demands careful attention to data governance, model provenance, and the messy realities of real-world security operations. But their argument is hard to dismiss: you cannot build a fortress on a map that keeps changing its own scale. Ontology provides the scale, the anchors, and the guiding lines so that when a new threat emerges, defenders can reason about it with a shared language, a common sense of purpose, and a transparent chain of reasoning that others can audit and extend.