Two devices walk into a shared space, not with radios blinking or screens flashing, but with motion, position, and posture. The message is encoded not in a waveform but in the way the sender changes its own state, whether by location, speed, or orientation. The observer, a distant sensor or camera, reads those changes and decodes a stream of bits the way we read Morse code from a blinking lamp. This is the provocative idea behind communication via sensing, a twist on the assumption that signals must travel through cables or antennas. It asks a simple but surprising question: can a sensing device itself act as a carrier of information, using the receiver's sensing abilities as the channel?
The research behind this question comes from Imperial College London, in collaboration with Bilkent University. The authors, Mohammad Kazemi and Deniz Gunduz at Imperial together with Tolga M. Duman at Bilkent, cast the question as a clean information-theoretic problem. They treat the sensed attribute as a state that evolves over time, constrained by practical limits such as how fast the state can change or how much energy must be spent to move from one state to another. In other words, their sender does not emit a signal in the usual sense; it steers the world around it in a way that a receiver can observe and interpret. The crucial twist is that the receiver's observations are noisy; it may misread the state or its evolution, just as a radar return can be imperfect. The authors call this framework a finite-state channel with a cost constraint, and their aim is to pin down the fundamental limit of how much information can be conveyed under those rules.
What might seem like a purely theoretical curiosity carries a sharper edge when we consider the practical appetite for sensing in modern networks. Today's devices are increasingly multi-functional: a radar can double as a tracker for location, a camera can infer motion, and a wireless sensor might be used for both monitoring and control. The paper asks whether we can leverage the sensing capabilities that many devices already possess to transmit data in a different way, without adding a dedicated radio transmitter. It's not just a quirky thought experiment. If you can bound how much information can be sent by clever state changes, you start to understand whether this mode of communication could ever compete with traditional wireless links, or whether it serves best as a complementary channel for specialized use cases such as secure, low-power communication in environments where radiative transmission is expensive or risky.
Crucially, the authors do more than pose a question; they build a rigorous framework. They show that you can recast the problem as an optimization over the behavior of a Markov process that governs state transitions, with an average cost constraint that captures energy or other resource limits. From there, they derive what they call a single-letter upper bound on the capacity of this sensing-based channel. In plain terms, they establish a ceiling on the rate at which information can be conveyed when the transmitter communicates by changing a sensed attribute and the receiver reads those changes with noise. Their technique stays faithful to the mathematics of information theory while translating the problem into something that can be computed and compared to more traditional bounds. The work culminates in a concrete example that, while stylized, shows the bound is tight when the sensing channel is a binary symmetric channel, a classic model for noisy observation.
A new kind of message from sensing
Imagine a robot whose position and speed are not just a consequence of its mission but a coded message. If the robot moves in a pattern dictated by its data, a radar far away can observe its trajectory and infer the intended bits. In this setup the transmitter is not choosing a modulation scheme to ride on a radio wave; it chooses a state trajectory for its own physical attributes, and the receiver, by sensing those attributes, decodes the information. The metaphor is apt: information is carried not by a signal in the air but by a choreography of states that the observer can detect with its own sensing toolkit.
What makes the framework robust is the explicit acknowledgement of the costs and constraints that real devices face. The state s_i at time i is drawn from a finite set, and moving from one state to another is not free: each transition carries a cost k(s_i | s_{i-1}), capturing energy expenditure or time. There is also a limit Γ on the average cost, which means the device cannot mutate its state willy-nilly; it has to pace itself. These constraints are not afterthoughts but the heart of the problem, because they determine whether a state-based channel can even sustain any meaningful communication at all.
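To make the cost constraint concrete, here is a minimal sketch in Python. The three-state system, the cost table, and the trajectories are all invented for illustration; only the shape of the constraint, a per-transition cost k(s_i | s_{i-1}) whose time average must stay within a budget Γ, follows the paper's setup.

```python
# A minimal sketch of the average-cost constraint (all numbers invented
# for illustration): a device with states {0, 1, 2} pays a per-transition
# cost k(s_i | s_{i-1}) and must keep the time-averaged cost within a
# budget gamma.

# Hypothetical cost table: holding a state is free, a one-step move
# costs 1, and the long jump between 0 and 2 costs 3.
K = {
    (0, 0): 0.0, (0, 1): 1.0, (0, 2): 3.0,
    (1, 0): 1.0, (1, 1): 0.0, (1, 2): 1.0,
    (2, 0): 3.0, (2, 1): 1.0, (2, 2): 0.0,
}

def average_cost(trajectory):
    """Time-averaged transition cost of a trajectory s_0, s_1, ..., s_N."""
    steps = list(zip(trajectory, trajectory[1:]))
    return sum(K[step] for step in steps) / len(steps)

def admissible(trajectory, gamma):
    """Does the trajectory respect the average-cost budget gamma?"""
    return average_cost(trajectory) <= gamma

gentle = [0, 0, 1, 1, 2, 2, 1]   # paces itself: average cost 0.5
jumpy  = [0, 2, 0, 2, 0, 2, 0]   # long jumps every step: average cost 3.0
print(admissible(gentle, 1.0))   # True
print(admissible(jumpy, 1.0))    # False
```

The point of the toy is only that the budget rules out some trajectories entirely: the encoder must choose its message-bearing walks from the admissible set.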
The authors begin with a careful mathematical setup. They model the system as an irreducible finite-state channel with input X, state S, and output Y. The receiver observes Y and the state evolves as a function of the previous state and the chosen action, with the noise and imperfections captured in the observation model. Importantly, the initial state is known to both sides, but the long-run behavior is governed by the Markovian structure of the state transitions and the cost constraint. This is where information theory meets a practical constraint: you cannot pretend the world is costless or memoryless. The capacity bound is thus built to respect memory and expenditure, not just a single instantaneous snapshot.
From state dynamics to capacity bounds
One of the paper's clever steps is to reframe the problem in a way that makes the optimization tractable. Instead of maximizing over all possible input sequences X^N, the authors show you can equivalently maximize over sequences of states S^N. The mutual information I(X^N; Y^N | S_0) can be recast as I(S^N; Y^N | S_0), thanks to the deterministic coupling between state transitions and inputs. This is a powerful shift because, under the average cost constraint, the system acts like a constrained-input channel where the states themselves are the levers by which you encode information.
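The equivalence is easy to check numerically on a toy model. In the sketch below (all specifics are assumptions, not the paper's model), the input toggles a binary state, s_i = s_{i-1} XOR x_i, and the receiver sees each state through a binary symmetric channel; since the map from input sequences to state trajectories is a bijection once S_0 is known, the two mutual informations coincide exactly.

```python
from itertools import product
from math import log2

EPS = 0.1   # assumed flip probability of the sensing observation
N = 3       # block length of the toy example
S0 = 0      # initial state, known to both sides

def states_from_inputs(x):
    s, out = S0, []
    for xi in x:                 # deterministic coupling: s_i = s_{i-1} ^ x_i
        s ^= xi
        out.append(s)
    return tuple(out)

def p_y_given_s(y, s):
    p = 1.0
    for yi, si in zip(y, s):     # each observation flips independently
        p *= (1 - EPS) if yi == si else EPS
    return p

def mutual_information(joint):
    """I(A; Y) = sum_{a,y} p(a,y) log2( p(a,y) / (p(a) p(y)) )."""
    pa, py = {}, {}
    for (a, y), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (pa[a] * py[y]))
               for (a, y), p in joint.items() if p > 0)

seqs = list(product([0, 1], repeat=N))
px = 1 / len(seqs)               # i.i.d. uniform inputs
joint_xy = {(x, y): px * p_y_given_s(y, states_from_inputs(x))
            for x in seqs for y in seqs}
joint_sy = {(states_from_inputs(x), y): p for (x, y), p in joint_xy.items()}

i_xy = mutual_information(joint_xy)
i_sy = mutual_information(joint_sy)
print(i_xy, i_sy)                # identical: the states are the levers
```

In this toy the common value is 3(1 - h(0.1)) ≈ 1.593 bits, exactly what three independent looks through a BSC(0.1) can carry.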
From there, the authors introduce a dual capacity upper bound. This is a familiar idea in information theory: instead of wrestling with the full high-dimensional optimization, you upper-bound the capacity by fixing an auxiliary output distribution and measuring how far the actual output is from it in a KL-divergence sense. In the unconstrained case, the bound collapses to the cycle with the largest per-step metric: capacity is governed by the single cycle that yields the most information per step. In the constrained case, the story gains subtlety: you must allocate your weight across different cycles in a way that respects the average cost Γ. The authors formalize this as a small set of cycles, each with its own average metric and average cost, and then solve a single-letter optimization that balances maximizing information per cycle against staying within the budget set by Γ.
What emerges is a compact, computable upper bound expressed in terms of the normalized metrics of the cycles and their costs. The intuition is elegant: if you imagine the set of possible state walks as a graph, then every walk is a combination of cycles and a short path; in the long run, the influence of the starting point or the tiniest tail of the walk vanishes, and the average contributions come from how often you traverse each cycle and how much each cycle costs. The bound becomes a weighted sum over cycles, where the weights reflect how frequently the system is expected to visit each cycle under the cost constraint.
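That weighted sum over cycles is a small linear program, and with a single average-cost constraint its optimum time-shares at most two cycles. A hedged sketch (the cycles and their metrics and costs below are invented numbers, not taken from the paper):

```python
# Each cycle of the state graph contributes an average information
# metric (bits/step) and an average cost (units/step); the bound mixes
# cycles to maximize the metric under the budget gamma.  All numbers
# here are hypothetical.
cycles = {
    "rest": (0.0, 0.0),   # sit in one state: free but uninformative
    "slow": (0.6, 1.0),   # cheap loop, moderate information
    "fast": (1.0, 3.0),   # expensive loop, the most information
}

def cycle_bound(gamma):
    """Linear program over cycle weights with one average-cost
    constraint: an optimum time-shares at most two cycles, so checking
    every single cycle and every cheap/expensive pair is exhaustive."""
    vals = list(cycles.values())
    best = 0.0
    for m1, g1 in vals:
        if g1 <= gamma:
            best = max(best, m1)                  # one cycle, within budget
        for m2, g2 in vals:
            if g1 <= gamma <= g2 and g2 > g1:     # time-share to meet the budget
                w = (g2 - gamma) / (g2 - g1)      # weight on the cheaper cycle
                best = max(best, w * m1 + (1 - w) * m2)
    return best

for gamma in (0.0, 1.0, 2.0, 3.0):
    print(gamma, cycle_bound(gamma))   # 0.0, 0.6, 0.8, 1.0
```

As the budget Γ grows, the mixture shifts weight from the cheap cycle to the informative one, tracing out exactly the kind of budget-versus-rate trade-off the single-letter bound captures.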
Why this could reshape sensing and networks
Beyond the math, the idea matters because it reframes how we think about communication in a world saturated with sensing devices. A camera can sense the scene, a LiDAR sensor tracks a moving object, a radar panel monitors a drone's flight. If those sensing assets can also carry information through their state trajectories, networks could gain a new, ultra-low-power channel that piggybacks on the natural dynamics of devices. In scenarios where radiative transmission is costly or undesirable—think of energy-constrained IoT, stealth sensing, or dense urban environments where spectrum is tight—information via sensing could provide a secondary handshake, a backup channel, or even a secure tunnel that doesn't radiate much at all.
The study also clarifies the limits. The authors show a case with a two-state finite-state channel where the sensing is noisy and acts like a binary symmetric channel from state to observation. They derive a closed-form bound for this scenario, and crucially compare it to an existing lower bound obtained by numerical methods. The findings are striking: the upper bound closely tracks the lower bound across a broad range of the cost budget and noise levels. In other words, the gap between what is theoretically possible and what is computationally achievable becomes tight enough to be practically informative—useful not just as a theoretical curiosity but as a guideline for engineers designing sensing-based communication systems.
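The paper's closed-form bound itself is not reproduced here, but the classic binary-symmetric-channel quantities the example builds on are simple to compute. In the sketch below, 1 - h(ε) is the familiar unconstrained per-symbol ceiling that a noisy binary observation imposes; any cost-constrained bound can only sit at or below it.

```python
from math import log2

def binary_entropy(p):
    """h(p) = -p log2 p - (1 - p) log2 (1 - p), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_ceiling(eps):
    """Classic capacity of a binary symmetric channel with flip
    probability eps: 1 - h(eps).  This caps the information per state
    observation; the paper's cost-constrained bound lies below it."""
    return 1.0 - binary_entropy(eps)

for eps in (0.0, 0.1, 0.5):
    print(eps, bsc_ceiling(eps))   # 1.0 with no noise, 0.0 at pure noise
```

The interesting content of the paper's example is how far below this ceiling the cost budget pushes the achievable rate, and how tightly the upper and lower bounds bracket it.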
And the implications extend to the design of future networks. The framework invites engineers to think about how to structure devices and their control policies to optimize a combined objective: achieve reliable sensing for the tasks at hand while reserving a corridor of state-driven communication that can convey auxiliary information. It hints at an era where sensing modulates not only how we interpret the world but how we share information about it—without always turning up the power dial on a traditional transmitter. In security terms, the fact that information is carried by state evolution rather than a radiated signal could offer different exposure profiles, potentially enabling more covert or more controllable channels in certain contexts.
Towards a richer picture of future networks
Of course, a single paper cannot rewrite the physics of wireless channels, but it can light a path. The work by Kazemi, Duman, and Gunduz makes a convincing case that there is a rigorous, useful theory behind sensing-based communication. It provides a precise way to talk about what is possible and what is not when the observer's toolbox is the sensing system itself. In a field where researchers are often chasing an elusive sense of integrated sensing and communication, this paper treats sensing as a first-class carrier of information rather than a mere byproduct of surveillance or mapping tasks. The key takeaway is not a new protocol ready for deployment, but a principled framework that tells us when a sensing-based channel can be a credible data pipe and how to quantify its capacity under real-world constraints.
The collaboration across institutions—Imperial College London and Bilkent University—speaks to a broader trend: the most exciting ideas in information theory often crystallize at the intersection of theory and practical sensing. The paper makes clear that the path from an idea to a usable technology passes through careful modeling of memory, constraints, and the ways in which a system revisits the same states. Those cycles are not just mathematical artifacts; they are the rhythms of real devices, the repeated patterns that a receiver can latch onto to recover a message. The study doesn't pretend to have all the answers, but it equips researchers with a concrete lens to ask the right questions about what to optimize, how to measure performance, and where to push for breakthroughs in energy efficiency and reliability.
In short, communication via sensing reframes the conversation about what it means to talk to machines. It foregrounds the idea that the world itself can be a channel and that, with the right constraints and the right math, even the slow drift of a device's state can carry meaningful information. If the future of networks is a blend of signals and sensing, this work gives us the map to navigate that blend with rigor and imagination.