Quantum states are famously shy about being described in plain language. They hide in amplitudes, interfere in strange ways, and defy naive intuition. Yet scientists keep asking a practical question: if you give a classical description of a quantum state, how much classical information do you need to recover just the numbers that matter for a particular measurement? In other words, how much memory must a classical computer carry to predict what an observer will see when you present the system with a measurement?
The paper by Kaushik Sankar from the University of California, Davis, tackles this question head-on, connecting a long lineage of ideas from Raz’s foundational work to the modern classical shadows framework used in quantum learning. Sankar asks not whether we can cheat memory with clever encoding, but about the fundamental limit: what is the true compression ceiling when the goal is to estimate ⟨ψ|M|ψ⟩ to within a prescribed relative error, for Hermitian M with operator norm at most 1?
Why should this matter outside the ivory tower? Because the quantum devices of today—noisy intermediate-scale machines, or NISQ devices—are limited by both time and memory. If we want to learn about a quantum state by performing measurements and then summarizing results in classical memory, these limits determine what’s even theoretically possible. Sankar’s results lay down a yardstick: if you want relative-error reliability across a broad family of measurements, there’s a nontrivial, exponential-scale price to pay in classical memory. It’s a reminder that memory is not a cosmetic concern; it is a first-order constraint on how we extract knowledge from quantum systems.
To those who’ve watched the field scramble for smaller and smarter shadows of a quantum state, this paper lands like a sober counterweight: compression isn’t free, and the kind of error you demand matters. The author, Kaushik Sankar, hailing from the Department of Computer Science at the University of California, Davis, situates the work in a lineage that includes Raz’s 1999 discoveries and Gosset and Smolin’s 2019 lower bounds, then threads the needle toward classical shadows. The result is a story that blends deep theory with a practical instinct for what quantum-learning pipelines can and cannot do with classical memory.
In the end, the message is surprisingly pragmatic: if your mission is to learn a wide array of quantum properties with precise relative accuracy, you’ll need a memory budget that grows with the size of the quantum system. The numbers aren’t just cute folklore; they map a hard ceiling on a class of classical descriptions. The paper makes that ceiling explicit, and it does so in a way that stays readable even when you’re not steeped in the minutiae of Gap-Hamming Distance reductions. It’s a reminder that, in quantum learning, some doors don’t just open wider with clever encoding—they require more bricks in the wall.
Lower bounds for one-way classical compression with relative error
At the heart of Sankar’s result is a clean, uncompromising bound: to estimate the expectation value ⟨ψ|M|ψ⟩ to within relative error ε, the one-way randomized classical communication from the holder of the quantum description to the party holding the observable must be at least on the order of 2^{n/2} ε^{-2} bits, where n is the number of qubits. In other words, the classical message must contain a substantial chunk of information—roughly the square root of the quantum state’s dimension, scaled by the inverse square of the tolerance. This is not a small tweak; it’s a fundamental scaling that cannot be sidestepped by any amount of cleverer encoding.
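To get a feel for the scale, here is a back-of-the-envelope evaluation of that 2^{n/2} ε^{-2} scaling; it is only a sketch, since the asymptotic statement does not fix the constant factor.

```python
# Rough size of the 2^(n/2) * eps^(-2) message lower bound for a few values
# of n and eps; constant factors are deliberately ignored.
for n in (20, 40, 60):
    for eps in (0.1, 0.01):
        bits = 2 ** (n / 2) * eps ** -2
        print(f"n={n:2d} qubits, relative error {eps}: at least ~{bits:.1e} bits")
```

Even at 40 qubits with a ten-percent tolerance the message is already around a hundred megabits, and tightening ε by a factor of ten multiplies that by a hundred.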
The beauty (and bite) of the result is that it holds not only for the entire universe of observables but also when you constrain M to the Pauli family, the workhorse observables in quantum experiments. That means the barrier is robust across a very natural, practically relevant subset of measurements. The implication is stark: even a carefully chosen, widely used set of measurements does not buy you a shortcut to compression when you demand relative accuracy instead of a fixed absolute error.
To prove this, Sankar wields a classic weapon from the theory lab: a reduction from Gap-Hamming Distance (GHD). The intuition is that if you could compress ψ into fewer bits and still recover ⟨ψ|M|ψ⟩ with the required relative accuracy, you could solve GHD instances with less one-way communication than GHD’s unconditional lower bounds allow. The reduction translates a hard instance of a distance problem into a question about quantum-state compression, turning a combinatorial puzzle into a statement about how much classical memory is truly needed to capture quantum expectations. In effect, the paper shows that the relative-error task inherits the hardness of a well-studied problem in communication complexity.
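For readers meeting it for the first time, here is the Gap-Hamming Distance promise problem in its standard textbook form; the paper’s exact parameterization may differ, so treat this as an orienting sketch.

```python
import math

def ghd_answer(x: str, y: str):
    """Gap-Hamming Distance: Alice holds x, Bob holds y, both N-bit strings,
    with the promise that their Hamming distance is either >= N/2 + sqrt(N)
    ('far') or <= N/2 - sqrt(N) ('close'); the task is to decide which.
    Solving it is known to require communication on the order of N bits."""
    N = len(x)
    dist = sum(a != b for a, b in zip(x, y))
    if dist >= N / 2 + math.sqrt(N):
        return "far"
    if dist <= N / 2 - math.sqrt(N):
        return "close"
    return None  # promise violated; a protocol may answer arbitrarily
```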
There’s a fair amount of algebra behind the scenes, but the moral sticks: when your goal is to preserve a value’s proportional accuracy, there is a fundamental memory price. The lower bound is not merely an asymptotic flourish; it marks a concrete barrier that grows exponentially with the number of qubits and relaxes only quadratically as the error tolerance is loosened. And because the same bound persists when the observables are restricted to Pauli operators, the barrier remains palpable even in the most widely used experimental toolkit. This isn’t a niche result about a rare mathematical construction; it’s a statement about the physics of information: some quantum properties resist succinct classical summaries unless you loosen the fidelity criterion.
The upshot is a tight narrative about the relationship between information, measurement, and compression. The one-way, classical-communication model—Alice sends a classical description of the quantum state, Bob holds the observable—serves as a crisp, idealized proxy for what would happen if you tried to compress a quantum system into classical memory and later recover meaningful properties. The relative-error version mirrors real-world expectations more closely: you don’t just want the answer to within some fixed additive slack; you want the error to scale with the magnitude of the quantity you’re estimating. That is, you want a faithful proportional error bound, not just a small absolute margin. And Sankar shows that this natural, practically reasonable goal comes with a sturdy overhead in the classical memory budget.
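To make the distinction concrete, here is the difference between the two error criteria in a minimal sketch; the function names are illustrative, not the paper’s notation.

```python
def within_absolute_error(estimate: float, true_value: float, eps: float) -> bool:
    # Additive criterion: the allowed slack is a fixed margin eps.
    return abs(estimate - true_value) <= eps

def within_relative_error(estimate: float, true_value: float, eps: float) -> bool:
    # Relative criterion: the allowed slack shrinks with the quantity itself,
    # so small expectation values must be pinned down very precisely.
    return abs(estimate - true_value) <= eps * abs(true_value)
```

The second criterion is the demanding one: when the true value is small, the permitted error window shrinks right along with it.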
Pauli observables and a robust obstruction
A standout feature of the work is the demonstration that the relative-error lower bound survives even when you shrink the set of observables to the Pauli operators. Pauli observables are the workhorses of quantum information, and showing a hard ceiling even in their company makes the result feel less like an odd corner case and more like a structural truth. Sankar accomplishes this by a careful construction that invokes a matrix built from diagonals of Pauli-like operators and a clever linear-algebra fact: a certain Diag(Pn) matrix, formed from rows corresponding to tensor products of I and Z, is invertible and behaves like an orthogonal change of basis on the Pauli space.
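The precise construction lives in the paper, but the flavor of the linear-algebra fact can be checked numerically. The sketch below assumes one natural reading of Diag(Pn), namely the matrix whose rows are the diagonals of the 2^n tensor products of I and Z; that interpretation is mine, not a quotation of the paper.

```python
# Numerical check (under the stated assumption about Diag(Pn)): the rows are
# the +/-1 Walsh-Hadamard characters, so D @ D.T = 2^n * Identity, meaning D
# is invertible and orthogonal up to normalization.
import itertools
import numpy as np

def diag_IZ_matrix(n: int) -> np.ndarray:
    diag_I, diag_Z = np.array([1, 1]), np.array([1, -1])
    rows = []
    for choice in itertools.product([diag_I, diag_Z], repeat=n):
        row = np.array([1])
        for d in choice:
            row = np.kron(row, d)  # diagonal of a tensor product = kron of diagonals
        rows.append(row)
    return np.array(rows)

n = 3
D = diag_IZ_matrix(n)
assert np.array_equal(D @ D.T, (2 ** n) * np.eye(2 ** n, dtype=int))
```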
In practical terms, imagine mapping the problem of distinguishing two nearby patterns of Pauli contributions to a single number—the squared norm of a combination of two binary vectors. The author then shows how this quantity can be fed into a measurement, and how the resulting value encodes which index in a secret test was chosen. The art here is lining up the algebra so that a single estimated quantity carries enough information to solve an indexing problem that is known to be hard. The upshot is a clean, explicit lower bound that does not evaporate when you restrict the measurement toolbox to Pauli strings.
Why does this matter? The Pauli family is a kind of lingua franca for quantum experiments and simulations. Observables built from Pauli strings underpin quantum state tomography, error mitigation strategies, and a large share of learning protocols in the field. The fact that the same Ω(2^{n/2} ε^{-2}) bound applies here means the memory cost of achieving relative-error accuracy is not something you can hope to dodge by choosing a smaller alphabet of measurements. It’s a sign that the information geometry of quantum-state space, and how classical information can mirror it, imposes a universal constraint—even when the measurements are the simplest, most common tools around.
From lower bounds to shadows: what this changes in learning quantum states
The bridge to classical shadows—an approach that uses random measurements to build a compact “shadow” of a quantum state from which many properties can be estimated—turns out to be a two-way street. Sankar shows that if you could solve the ε-relative-error Classical Shadows Task for a given family of observables with small classical memory, you could also solve the ε-relative-error ⟨ψ|M|ψ⟩ estimation problem with the same memory footprint. In short: lower bounds for the pure mathematical task translate directly into lower bounds on the space required to store shadows that aim for relative accuracy.
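To ground the term, here is a toy, self-contained version of the random-Pauli-basis shadows recipe from the classical-shadows literature; it is a generic illustration, not Sankar’s construction, and it makes no attempt to model the memory constraints the paper analyzes.

```python
# Toy classical shadows with random single-qubit Pauli bases (statevector
# simulation for tiny n); each snapshot is a few classical bits per qubit.
from functools import reduce
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)
ROT = {"X": H, "Y": H @ Sdg, "Z": np.eye(2, dtype=complex)}  # rotate chosen basis onto Z

def snapshot(psi, n):
    """One shadow snapshot: a random Pauli basis per qubit plus measured signs."""
    bases = rng.choice(list("XYZ"), size=n).tolist()
    rotated = reduce(np.kron, [ROT[b] for b in bases]) @ psi
    probs = np.abs(rotated) ** 2
    outcome = rng.choice(2 ** n, p=probs / probs.sum())
    signs = [1 - 2 * ((outcome >> (n - 1 - i)) & 1) for i in range(n)]
    return bases, signs

def estimate_pauli(shots, pauli):
    """Unbiased shadow estimate of <psi|P|psi> for a Pauli string like 'XI'."""
    total = 0.0
    for bases, signs in shots:
        v = 1.0
        for p, b, s in zip(pauli, bases, signs):
            if p == "I":
                continue
            v = v * 3 * s if p == b else 0.0  # factor 3: inverse measurement channel
        total += v
    return total / len(shots)

# Example: |psi> = |+>|0>, so <X (x) I> = 1 and <I (x) Z> = 1.
psi = reduce(np.kron, [np.array([1, 1]) / np.sqrt(2), np.array([1, 0])]).astype(complex)
shots = [snapshot(psi, 2) for _ in range(20000)]
print(estimate_pauli(shots, "XI"), estimate_pauli(shots, "IZ"))
```

Each snapshot costs only a handful of classical bits per qubit; the question the article describes is how much of that classical record you must retain once the accuracy demand is relative rather than additive.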
That leads to a striking conclusion: relative-error shadow tomography cannot be achieved with poly(n) classical memory when you want uniform, relative accuracy across a broad class of observables. The paper makes this precise by showing that you still need on the order of 2^{n/2} ε^{-2} bits of memory, even when you allow sophisticated nonadaptive measurement schemes and classical memory of the measurement results. In the world of shadows, where the gauge of success has long been “how many samples” and “how much time,” Sankar’s result flips the script: memory has to scale with the problem’s dimensionality if the metric is relative error.
To put it more plainly: the memory you need to capture the essential features of a quantum state grows dramatically when you demand a relative notion of accuracy, even for the Pauli-sparse regimes that are favored in practice. And the gap isn’t just in theory. It mirrors a broader story in quantum learning and information: quantum memory can outperform classical memory in certain learning tasks, but when the job is to compress and reconstruct high-precision expectations from a fixed classical description, the barrier becomes memory itself, not mere clever encoding.
There is a reassuring, if sobering, contrast to additive-error results: for Pauli-observable shadows, poly(n) memory schemes exist in the additive regime, thanks to prior advances. Sankar’s work shows that once you insist on relative error, the polynomial savings disappear. In other words, the benefits of clever shadow tricks have a price tag when the error criterion tightens. This invites further exploration: can adaptivity, quantum memory, or hybrid quantum-classical schemes push the boundary further, or is the relative-error barrier a fundamental wall for nonadaptive, memory-limited approaches?
Beyond the math, there’s a practical takeaway for researchers building quantum-learning pipelines today. If the goal is broad, reliable inference about observables with sharp relative accuracy, then whether you measure a Pauli string or a more general operator, the classical side of the equation demands substantial storage. That doesn’t doom the enterprise; it reframes it: progress will come from smarter hybrids—where some memory is stored quantumly, or where adaptivity and structure in the observables reduce the classical footprint—rather than chasing a purely classical, memory-light shadow tomography.
In framing these results, Sankar also connects to a tapestry of prior work. The paper sits in dialogue with Raz’s vector-in-subspace problems, Gosset and Smolin’s additive bounds, and the burgeoning literature on shadow tomography and efficient learning of quantum systems. It’s in that conversation that the numbers acquire meaning: a true ceiling on compression, a reason the classical-quantum divide remains stubborn, and a roadmap for where future work might navigate the tradeoffs between memory, time, and the desire for reliable quantum insights.
Note: The study is led by Kaushik Sankar, affiliated with the University of California, Davis, and builds on decades of foundational work in quantum information and communication complexity. Its insights illuminate not just a corner of theory but a broad crossroad where physics, computer science, and practical quantum experimentation meet.