The Quiet Quest for Private Randomness
The quantum world loves randomness the way a streetlight loves shadows: it’s built into the fabric, not something you manufacture. If a system is prepared in a perfectly balanced way and measured with a perfectly pure instrument, some outcomes appear truly unpredictable, even to a cunning observer who might hold every card in the deck. That kind of randomness is intrinsic, private, and a little miraculous: it can seed private keys in quantum cryptography or fuel genuinely unpredictable random numbers for science and society. But real experiments aren’t perfect. Measurements get noisy, devices drift, and each noisy readout carries a double charge of ignorance: what we don’t know about the quantum state and what we don’t know about the measurement itself. In other words, there’s intrinsic randomness and there’s extrinsic randomness—the latter arising from noise and, crucially, from an observer who might know more than we do. The study we’re unpacking asks a pointed question: given a quantum measurement with noise baked in, how much private randomness can we still extract from it?
The work comes from a team spanning ICFO (the Institut de Ciències Fotòniques in Barcelona) and the University of Warsaw, among others, and it’s led by Fionnuala Curran and Antonio Acín. Their goal is to quantify maximal intrinsic randomness for realistic, noisy measurements, and to map out when that randomness can be trusted in the real world, where a clever eavesdropper might scheme to guess outcomes. It’s a question that sits at the intersection of the foundations of quantum theory and the practical security we demand from quantum technologies. In plain terms: if you flip a quantum coin that’s degraded by noise, how sure can you be that no one—not even a crafty rival with a peek inside the machinery—can predict the result better than random chance?
To answer this, the authors lean on a careful formalism of quantum measurements, balancing the math with the intuition of a security game. They imagine Alice, the experimenter, performing a measurement on a quantum state, while Eve, the hypothetical eavesdropper, tries to guess the outcome using any possible knowledge she could have about the state or the measurement. The better Eve can guess, the less intrinsic randomness Alice has produced. The object of study is the maximal intrinsic randomness that can survive such an adversarial scenario. In quantum information language, this translates into a min-entropy problem: how uncertain is X—Alice’s outcome—when conditioned on Eve’s side information, after the given measurement M acts on the best possible input state?
The answer isn’t a single number; it’s a precise formula for two key families of noisy measurements, plus a general bound that carries across a broad class of measurements. The punchlines are surprisingly clean: for two-outcome measurements on a single qubit (the simplest quantum system), there’s a closed-form expression for the maximal intrinsic randomness that only depends on the spectrum of the smaller measurement element. In higher dimensions, for a canonical noisy projective measurement (a perfect basis readout drenched with white noise), the maximal intrinsic randomness matches the randomness you’d get if you started from a similarly noisy quantum state and optimized over all measurements. The upshot is a robust, testable link between how noisy a device is and how private the randomness it generates can be.
In what follows, I’ll walk you through the core ideas, the surprising strategic moves Eve can pull, and what this means for the kind of randomness we can trust from noisy quantum devices. I’ll keep the math light where possible, but I’ll also describe the key structures the authors use to reach their conclusions. The article ends with a look at the broader implications for quantum randomness generation, cryptography, and the open questions these results unleash.
Two-Letter Questions, Big Answers
One of the central moves of Curran and colleagues is to model real devices as measurements that aren’t perfectly sharp. In quantum language, a measurement is described by a POVM (positive operator-valued measure). A pure, ideal measurement would be a projective measurement with distinct, perfectly sharp outcomes. But when you mix that ideal with noise, you get a two-outcome qubit POVM that looks like a squashed coin: you still observe outcomes, but the statistics and the underlying operators carry noise. The team first tackles the question: for any such two-outcome qubit POVM M = {M1, M2}, what is Pguess*, the probability that Eve correctly guesses Alice’s outcome when Eve attacks optimally and Alice prepares the input state |ϕ⟩ that makes guessing hardest? The maximal intrinsic randomness is then −log2 Pguess*, whether Eve’s side information is classical or quantum.
The remarkable result here—Theorem 1 in their paper—gives a crisp closed form. If you arrange the two POVM elements so that tr M1 ≤ tr M2, the optimal guessing probability is
Pguess* = 1 − tr M1 + (1/2)(tr √M1)².
Interpreting this is easiest if you picture the measurement’s “unsharpness” encoded in the spectrum of M1. Writing λ1 and λ2 for the two eigenvalues of M1, the formula becomes Pguess* = 1 − (λ1 + λ2)/2 + √(λ1 λ2). When M1 is a rank-one projector (eigenvalues 1 and 0), Pguess* hits its minimum of 1/2: a perfectly sharp readout yields one full bit of private randomness. At the opposite extreme, when the two eigenvalues are equal, M1 is proportional to the identity and Pguess* = 1: the outcomes may look random, but Eve can predict them perfectly, because such a measurement decomposes into deterministic pieces whose labels she can hold. In short, even within noisy real-world measurements, there’s a precise, universal footprint of how much private randomness you can squeeze out, governed purely by the spectrum of the measurement elements.
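To make the closed form concrete, here’s a minimal numerical sketch (using NumPy; the function name pguess_qubit is mine, not the paper’s) that evaluates Theorem 1 directly from the spectrum of M1:

```python
import numpy as np

def pguess_qubit(M1):
    # Theorem 1 closed form: Pguess* = 1 - tr M1 + (1/2)(tr sqrt(M1))^2,
    # where M1 is the POVM element with the smaller trace.
    evals = np.linalg.eigvalsh(M1)          # eigenvalues of the 2x2 element
    return 1 - evals.sum() + 0.5 * np.sqrt(evals).sum() ** 2

# Sharp projector |0><0|: maximal privacy, Pguess* = 1/2 (one full bit)
proj = np.array([[1.0, 0.0], [0.0, 0.0]])
print(pguess_qubit(proj))                   # -> 0.5

# Element proportional to the identity (I/2): Eve guesses perfectly
flat = 0.5 * np.eye(2)
print(pguess_qubit(flat))                   # -> 1.0 (up to float rounding)
```

The two extremes bracket every realistic device: any noisy two-outcome qubit readout lands somewhere between half a chance and certainty for Eve, fixed entirely by M1’s eigenvalues.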
What’s elegant about this result is not only the formula, but what it says about the structure of optimal attacks. The authors show that the worst-case Eve’s strategy can always be reduced to a classical mixture of sub-measurements, with each sub-measurement aligned to Alice’s eigenbasis in a particular way. In the language of the paper, a pure input state for Alice suffices, and Eve’s information can be represented by a convex decomposition of M into simpler, extremal measurements. This nibble-sized insight—distilling a quantum game to a classical adversarial picture—gives the result both conceptual clarity and computational tractability.
Beyond qubits, the story stretches to higher dimensions with a similar flavor. If you take a noisy projective measurement in dimension d (a d-outcome POVM Md built from projectors blended with white noise), the maximal intrinsic randomness drops into a neat formula: Pguess* = (1/d)(tr √M1)², where tr √M1 stands for the trace of the square root of a single measurement element. In other words, a symmetric, noisy basis readout carries a predictable ceiling for intrinsic randomness, again determined by the structure of the measurement operator. The authors complement this with a corollary that gives an eigenvalue-based upper bound for any two-outcome POVM, tying Pguess* to the spectral gaps of M1. The upshot is a robust toolbox for assessing randomness once the measurement’s spectral data is known.
Put simply: for the simplest nontrivial quantum system, there’s a clean recipe for how much private randomness a noisy measurement can generate, and that recipe extends in a principled way to more complex measurements. This is a meaningful step toward turning intrinsic, private quantum randomness into something you can certify inside a lab, even when the devices aren’t perfectly isolated from the world.
Noise, Dimension, and a Square-Root Recipe
A second centerpiece of the paper tightens the story in the most practically relevant setting: noisy projective measurements in any dimension. Here the authors consider Md = {Mx}x where each Mx is a depolarized projector, meaning a pure basis measurement smeared with white noise. Theorem 2 shows a strikingly simple result: the maximal guessing probability is
Pguess* = (1/d)(tr √M1)², and, crucially, the optimal state for Alice to prepare is the unbiased state |ψ⟩ ∝ (1, 1, …, 1)ᵀ in the basis that diagonalizes Md. In other words, for this very natural noise model, the two ingredients that maximize randomness—uniformity of the input state and symmetry of the measurement—align perfectly in a way that makes the math tractable and the physics transparent.
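The Theorem 2 formula is easy to evaluate once you parameterize the noise. Here is a sketch under one assumed convention, Mx = (1 − v)|x⟩⟨x| + v·I/d with v the white-noise weight (the paper’s exact parameterization may differ, and the function name is mine):

```python
import numpy as np

def pguess_noisy_basis(d, v):
    # Theorem 2 closed form: Pguess* = (1/d)(tr sqrt(M1))^2 for a
    # depolarized basis measurement M_x = (1-v)|x><x| + v*I/d.
    big = (1 - v) + v / d                   # eigenvalue of M_x along |x>
    small = v / d                           # the remaining d-1 eigenvalues
    tr_sqrt = np.sqrt(big) + (d - 1) * np.sqrt(small)
    return tr_sqrt ** 2 / d

print(pguess_noisy_basis(4, 0.0))  # noiseless readout: 1/d = 0.25
print(pguess_noisy_basis(4, 1.0))  # pure white noise: 1.0, no privacy left
print(pguess_noisy_basis(4, 0.2))  # partial noise: somewhere in between
```

Sliding v from 0 to 1 traces Eve’s guessing power from the ideal 1/d all the way up to certainty, which is exactly the noise-to-privacy dial the theorem pins down.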
To prove this, Curran and colleagues craft a constructive strategy they call a square-root decomposition. In broad strokes, it’s a way to realize the mixing that underwrites the noisy POVM as a genuine public decomposition into simpler measurements, with each part aligned to a clean projective element and weighted so Eve’s best adaptive attack remains as strong as possible. The square-root construction is not just a clever trick; it’s a lens that reveals why certain noise profiles preserve the most randomness and others dull it. In the qubit case, the same construction collapses into an intuitive geometric picture: Eve’s best attack uses rank-one projections aligned with a symmetric square-root of the noise, and the unbiased input state makes those projections maximally uninformative for her.
One striking corollary is that the maximal conditional von Neumann entropy and the maximal conditional max-entropy (two different ways to quantify randomness, especially relevant in the asymptotic and cryptographic regimes) are bounded by expressions that depend on the same Pguess* you get from the min-entropy perspective. In plain terms: the intrinsic randomness you can certify remains tightly linked to the practical, operational notion of Eve’s guessing power, across several standard entropy measures. That coherence across frameworks is comforting: it suggests the results aren’t an artifact of a single mathematical lens but a robust picture of randomness under realistic noise.
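On the min-entropy side the operational link is direct: the certifiable randomness per run is −log2 Pguess*. A small sketch, assuming the same depolarized-basis model Mx = (1 − v)|x⟩⟨x| + v·I/d as an illustration (the paper’s von Neumann and max-entropy bounds are separate results, not computed here):

```python
import numpy as np

def pguess_noisy_basis(d, v):
    # Theorem 2 closed form for a depolarized basis measurement,
    # assuming M_x = (1-v)|x><x| + v*I/d (v = white-noise weight).
    big = (1 - v) + v / d
    small = v / d
    return (np.sqrt(big) + (d - 1) * np.sqrt(small)) ** 2 / d

def min_entropy_bits(pguess):
    # Operational reading: certifiable min-entropy per run, in bits.
    return -np.log2(pguess)

# Bits of private randomness per qubit measurement as white noise grows
for v in (0.0, 0.1, 0.5, 1.0):
    print(v, min_entropy_bits(pguess_noisy_basis(2, v)))
```

The loop shows the budget shrinking from one full bit at v = 0 down to zero at v = 1, the same qualitative picture the entropy bounds in the paper make precise.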
What This Means for Real-World Randomness Generators
These results land with significance for quantum random number generators (QRNGs), especially the device-dependent kind where the internal workings aren’t perfectly characterized. If you’re using a QRNG to seed cryptographic keys or random sampling in simulations, you want a guarantee that the numbers you’re pulling are privately random—that no external observer can predict them beyond what physics dictates. Curran and colleagues’ work provides a concrete, testable boundary for such guarantees in the very common scenario where measurements are imperfect and devices can’t be tuned to perfection.
In particular, the finding that a noisy projective measurement in any dimension can match the randomness you’d obtain from optimizing over all measurements on a noisy state is a conceptual win. It means you can reason about randomness production from the measurement side with the same confidence you’ve developed for state-based randomness in earlier work. And because real devices often implement a mix of states and measurements with ambient noise, the paper’s comparisons between the “single-noise” and “shared-noise” models are especially practical. The authors show that when both the state and the measurement are noisy, Eve can sometimes gain more advantage than when the noise is confined to one side. In a world where QRNGs are deployed for critical tasks, this is a reminder that security claims must account for how noise is distributed across the whole system, not just in isolation.
The work also speaks to a broader, perhaps humbler question: is randomness in the lab truly private? The answer here is nuanced. If Eve can access the same quantum side information that Alice’s devices hold (a realistic adversary in a device-dependent setting), the eavesdropper’s best attack can often be traced back to classical decompositions of the measurement. This is not a defeat for quantum randomness—it’s a map of what you need to guard against in practice: understanding how noise is structured, how the input states relate to the measurement basis, and how much information the environment might leak through shared degrees of freedom.
On the test bench, these results provide concrete targets. If you calibrate a QRNG by auditing the spectrum of the measurement operator and verify that the input state is unbiased to the measurement basis, you can push the system toward the regime where intrinsic randomness aligns with maximal min-entropy. If you want device-independent security, you’d need to go further, but for device-dependent QRNGs, the work writes down a clear, actionable boundary between what’s private and what’s not in the presence of realistic noise.
Coarse-Graining, Dimensionality, and the Limits of Leverage
The authors don’t stop at the two-outcome, two-level world. They also consider an idea that often comes up in quantum experiments: what if you simplify the measurement by coarse-graining several outcomes into just two? In a concrete sense, you might lump half the outcomes into one bin and the rest into another, turning a high-dimensional POVM into a qubit-like measurement. The question: does this coarse-graining destroy potential randomness, or could it somehow preserve or even enhance it?
The answer is nuanced but reassuring in a particular way. When you coarse-grain a d-dimensional noisy projective measurement down to two outcomes, the maximal intrinsic randomness you can certify cannot exceed the randomness of the corresponding two-outcome measurement on the original basis. In other words, coarse-graining does not magically conjure up more private randomness than the underlying qubit readout would allow. The paper shows that Eve’s optimal attack for the coarse-grained measurement can be mapped to an attack on the two-outcome submeasurement that saturates the bound, so there’s no treasure hidden in the higher-dimensional structure. This is a comforting sanity check: the simplifications researchers often rely on for experimental practicality do not cheat the randomness budget.
At the same time, the authors illuminate a subtle lesson about Eve’s strategy in higher dimensions. When Alice’s system is truly d-dimensional, Eve might choose attacks that “inflate” the optimal qubit attack for a two-outcome submeasurement to fit the larger space. The effect is not a wholesale gain in private randomness, but a reminder that in the bigger world, the geometry of the measurement space and the way noise distributes across it can tilt Eve’s leverage in nontrivial ways. It’s a reminder that realistic security requires careful accounting of how a device’s internal degrees of freedom interact, not just an abstract, one-size-fits-all bound.
Noise, State, and the Subtleties of Shared vs. Single Noise
Perhaps the most striking and counterintuitive part of the paper lives in the comparison between “single noise” (noise confined to either the state or the measurement) and “shared noise” (noise spread across both). The authors build a clean scenario where Eve can optimize her attack when both the quantum state and the measurement carry noise. They show that, in the qubit case, if the measurement and state are both noisy, Eve can reach perfect guessing power at a certain critical noise level, effectively erasing intrinsic randomness. In contrast, if the noise is kept separate—say, a noisy state measured with a clean device—the intrinsic randomness can be preserved even when the measurement looks bland, and Eve’s best guess remains imperfect unless the noise is pushed to the extreme.
This finding isn’t just a neat theoretical curiosity; it matters for how we design and compare QRNGs. It tells us that how noise is distributed across a device matters for security. If you only model the measurement as the culprit for randomness loss, you might underestimate the vulnerability that comes from correlated noise across the state and the device. The paper’s analysis provides a framework for thinking about these correlations and for designing experiments that minimize Eve’s information by controlling not just one part of the system but the joint noise landscape.
The Institutions Behind the Insight and What It Leaves Open
The study is a collaboration anchored in ICFO—The Barcelona Institute of Science and Technology—and the University of Warsaw, with involvement from Quside Technologies S.L. and ICREA. The lead authors named in the work are Fionnuala Curran and Antonio Acín, with important contributions from Morteza Moradi, Gabriel Senno, Magdalena Stobinska, and colleagues. The institutional mix isn’t just a biographical footnote: it reflects a convergence of theory and experiment, of academic and industry perspectives, all aimed at a robust, testable understanding of quantum randomness under realistic conditions. The authors also openly acknowledge the broader context of work on the intrinsic randomness of quantum states and measurements, situating their results as a complement to, rather than a replacement for, earlier findings.
Beyond the concrete theorems and proofs, the work opens doors to a few inviting questions. How close are we to a universal, device-independent characterization of intrinsic randomness for arbitrary measurements? Can the square-root decomposition be extended or refined for even more general noise models? Do the upper bounds they derive on von Neumann and max-entropies saturate in broader classes of dilation strategies, or are there universal dilations that push entropy down further? And, for practitioners, how might these results guide the calibration of QRNGs in industrial settings where devices encounter drift, temperature variation, and imperfect isolation? The paper doesn’t just settle questions; it hands us a clean set of tools and a map for the terrain ahead.
In the end, Curran and coauthors give us a vivid reminder: randomness in the quantum world is not a monolith. It is a resource that survives or evaporates depending on the exact way a system is noisy, how a state and a measurement relate, and how much knowledge an observer might hold about the devices. They translate that subtle physics into precise, usable bounds, drawing a line from abstract theory to the practical security of quantum technologies. And in a field where new devices promise new capabilities, knowing how much randomness you can trust is not merely academic—it’s foundational for the quiet confidence with which we can deploy quantum tools in the daily world.
For researchers and curious readers alike, the takeaway is this: in a noisy quantum measuring world, the secret life of randomness is not a mystery to be wished away but a resource to be quantified, guarded, and understood. The study anchors that understanding in concrete mathematics, but it remains, at its heart, a human story about uncertainty, trust, and the intriguing way quantum physics lets us peek at what hides in the noise.