A double-jump reshapes an Erdős unit-sum puzzle in the plane

In 1945, the mathematician Paul Erdős nudged a seemingly simple geometric question into the spotlight: if you take n unit vectors in the plane and flip each one’s direction with a fair coin, how often does the resulting sum land inside the unit disk? It’s a question about randomness, balance, and how far a collection of tiny pushes can drift on a two-dimensional stage. The reverse version of that question (how small can the chance be if you’re free to choose those directions?) is what Hollom, Portier, and Souza explore in their new work from the University of Cambridge. What they find is delightfully counterintuitive: the probability landscape can exhibit a dramatic double-jump right at radius 1, and the parity of n (whether n is odd or even) matters in surprising ways.

To set the scene, imagine you’re playing with a swarm of arrows spread around a circle, each of length 1. You randomly flip signs and sum them up. For some arrangements the result tends to cluster near the origin; for others it skitters away, almost never landing near zero. This is the reverse Littlewood–Offord problem: instead of asking how often you land inside a ball when the vectors are fixed, ask how small you can make that probability by choosing the directions cleverly. The authors push this question into new territory, unearthing a rich tapestry of phase transitions that depend on parity, dimension, and the exact radius you care about.

Crucially, the work is a product of Cambridge’s mathematical community: the authors, L. Hollom, J. Portier, and V. Souza, build on a lineage that traces back to mid-20th-century Littlewood–Offord theory, weaving in modern probabilistic tools and intricate combinatorial constructions. Their results illuminate not just a single question, but a family of behaviors that reveal how delicate balance can be in high-dimensional random sums, and how a single step on the dial (the radius) can flip the outcome from virtually impossible to almost inevitable, or vice versa.

The reverse problem and the two big moves you need to know

The core object is simple enough to name. Let v1, …, vn be unit vectors in the plane, and let εi be independent Rademacher variables that take the values ±1 with equal probability. Define the random sum σV = ε1 v1 + … + εn vn. For a fixed radius r, one asks: what is the smallest possible probability, over all choices of v1, …, vn, that ∥σV∥ ≤ r? Denote that infimum by F_{2,r}(n). It’s a reverse problem: instead of bounding the probability from above, as the traditional Littlewood–Offord questions do, one hunts for worst-case configurations, choosing the vectors so that the signed sum lands inside the ball of radius r as rarely as possible.
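To make this concrete, here is a minimal Monte Carlo sketch, ours rather than the paper’s, that estimates P(∥σV∥ ≤ r) for any fixed list of unit vectors. The function name signed_sum_probability and the sample configuration below (ten vectors along each coordinate axis) are illustrative assumptions, not anything taken from the paper.

```python
import numpy as np

def signed_sum_probability(vectors, r, trials=200_000, seed=0):
    """Monte Carlo estimate of P(||eps_1*v_1 + ... + eps_n*v_n|| <= r)
    for fixed unit vectors and independent uniform +/-1 signs."""
    rng = np.random.default_rng(seed)
    V = np.asarray(vectors, dtype=float)           # shape (n, d)
    signs = rng.choice([-1.0, 1.0], size=(trials, len(V)))
    sums = signs @ V                               # each row is one signed sum
    return float(np.mean(np.linalg.norm(sums, axis=1) <= r))

# Illustrative configuration (not the paper's extremal one): ten unit vectors
# along each axis.  The sum lies in the unit disk exactly when both coordinate
# sums vanish, so the true value here is (C(10,5) / 2**10)**2, about 0.061.
V = [(1.0, 0.0)] * 10 + [(0.0, 1.0)] * 10
print(signed_sum_probability(V, r=1.0))
```

Nothing in the estimator is specific to the plane: feed it unit vectors in any dimension and it works the same way, which is convenient when the discussion turns to higher dimensions below.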

Two technical engines power the analysis. The first is a pairing technique borrowed and refined from prior work: by pairing up vectors so that the angle gaps within pairs stay small, and the paired contributions therefore stay short, one can guarantee a baseline probability of staying inside a chosen radius. The second is a careful use of vector-balancing results, most notably Swanepoel’s theorem, which says that for an odd number of unit vectors in the plane there exists a sign pattern that keeps the sum inside the unit ball. These tools let the authors navigate between two extreme statements: a guaranteed lower bound on the probability for radii just above 1, and explicit constructions showing how small the probability can get exactly at radius 1.
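To give a taste of what pairing can buy, here is a small self-contained sketch of one classical pairing argument; it is our illustration of the general idea under simple assumptions, not the paper’s refined technique, and the helper name pairing_signing is invented for the example. Reflect every vector into the upper half-plane, sort by angle, pair consecutive vectors, and subtract within each pair: the within-pair angle gaps total at most π, so for any even number of planar unit vectors the chosen signed sum has length below π.

```python
import numpy as np

def pairing_signing(vectors):
    """For an even number of planar unit vectors, return signs eps_i with
    ||sum_i eps_i*v_i|| < pi.  Reflect each vector into the upper half-plane
    (flipping v_i only relabels eps_i), sort by angle, pair consecutive
    vectors, and subtract within each pair: a pair with angle gap g contributes
    2*sin(g/2) <= g, and the within-pair gaps sum to at most pi."""
    V = np.asarray(vectors, dtype=float)
    signs = np.ones(len(V))
    angles = np.arctan2(V[:, 1], V[:, 0])
    below = angles < 0
    signs[below] = -1.0                  # reflect into the upper half-plane
    angles[below] += np.pi
    order = np.argsort(angles)
    signs[order[1::2]] *= -1.0           # subtract the second vector of each pair
    return signs

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, size=20)
V = np.column_stack((np.cos(theta), np.sin(theta)))
eps = pairing_signing(V)
print(np.linalg.norm(eps @ V))           # stays below pi
```

Turning one guaranteed signing like this into a probability bound, and handling odd n, where Swanepoel’s result supplies a signing inside the unit ball, is where the heavier machinery comes in.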

One guiding heartbeat of the paper is the discovery of a double-jump phenomenon. At radius 1, the probability can plunge to a polynomial decay in n, specifically to the order of n^{−3/2} for certain constructions with odd n. But as soon as you nudge the radius just above 1, the probability rebounds to at least a constant times 1/n: for any fixed positive δ, the radius 1 + δ yields a lower bound on the probability of order c_δ/n. In other words, the system behaves as if two distinct thresholds are glued together at the knife edge r = 1: at exactly radius 1, the sum’s staying power is feeble, yet just a hair of extra room lifts the probability back up significantly. It’s a surprisingly abrupt shift, reminiscent of how a small change in the edge density of a random graph can flip the presence of a giant component, a parallel the authors themselves invoke to guide intuition.

The odd-even split and what it means for the unit disk

The odd-even split, the way the parity of n changes the game, appears across the main results. For even n, earlier work (Beck’s theorem) ensures a concentration phenomenon at radius √2: the probability that the random signed sum stays within radius √2 is bounded below, with a bound that scales like n^{−d/2} in d dimensions, so like 1/n in the plane. In the planar case, √2 is the natural, parity-blind radius at which some mass is guaranteed to concentrate. The authors’ first punch is to show that when n is odd, the original Erdős conjecture does not hold even at radius 1: there exist configurations where the probability that ∥σV∥ ≤ 1 is on the order of n^{−3/2}, far smaller than the conjectured c/n. That disproves Erdős’s original conjecture in the odd case and rules out any single, parity-free answer to the problem.

But parity isn’t just a nuisance to be sidestepped. It is a structural hinge that lets the authors prove a strong approximate bound: for any δ > 0 there is a c_δ > 0 such that, for odd n and any unit vectors, P(∥ε1 v1 + … + εn vn∥ ≤ 1 + δ) ≥ c_δ/n. This is the “approximate Erdős conjecture” that survives the test of odd n and gives a robust baseline for what happens just past the threshold radius. The proof leans on pairing arguments and a deeper vector-balancing theorem, but the message lands plainly: near radius 1, the random sums cannot be stamped out completely; there is a universal floor that scales like 1/n once you permit a tiny slack above 1.

The paper also sharpens the picture of how many good signings there are: sign patterns that keep the sum small. In a striking refinement, the authors show that if n is odd and the vectors lie in the plane, there are exponentially many signings that keep the sum inside the unit disk. Concretely, the probability of the event ∥ε1 v1 + … + εn vn∥ ≤ 1 is at least about (0.525)^n, up to a constant factor. That’s a dramatic improvement over the naïve bound of 2^{−n} you would get from exhibiting a single good signing, and it reveals a hidden abundance of cancellation when you look hard enough. This exponential abundance of successful signings is a vivid reminder that randomness can be tamed in surprisingly structured, high-dimensional ways.
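For small n you can check this kind of abundance directly by brute force: enumerate all 2^n sign patterns and count how many land in the unit disk. The sketch below does this for a toy configuration of 13 unit vectors spread over a half-circle; the configuration and the helper count_good_signings are our own illustrative choices, not the extremal examples analysed in the paper.

```python
import itertools
import numpy as np

def count_good_signings(vectors, r=1.0):
    """Exhaustively count sign patterns whose signed sum has norm at most r."""
    V = np.asarray(vectors, dtype=float)
    good = 0
    for signs in itertools.product((-1.0, 1.0), repeat=len(V)):
        if np.linalg.norm(np.asarray(signs) @ V) <= r:
            good += 1
    return good

# Toy odd-n configuration: 13 directions spread evenly over a half-circle.
n = 13
angles = np.pi * np.arange(n) / n
V = np.column_stack((np.cos(angles), np.sin(angles)))
print(count_good_signings(V), "of", 2 ** n, "signings land in the unit disk")
```

Brute force like this runs out of steam after a few dozen vectors, which is exactly why counting results of the (0.525)^n kind rest on structural arguments rather than enumeration.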

Higher dimensions, more landscapes

The plane isn’t the whole story. When you lift the problem to higher dimensions, the geometry becomes tougher and more varied. The authors prove a general version: for any dimension d there is a constant C_d such that, whenever the parity condition n ≢ d (mod 2) holds, one can choose n unit vectors in S^{d−1} for which the probability that the signed sum stays within radius √(d−1) drops to about C_d n^{−(d+1)/2}. The parity-driven threshold familiar from the plane persists, but the details shift with dimension.

Beyond these general bounds, the authors dig into constructions that minimize P(∥σV∥ ≤ √d). In two dimensions, a simplicial construction (using the vertices of a regular simplex, which in the plane is an equilateral triangle) can beat the orthogonal construction that had often been conjectured to be optimal. In higher dimensions, the story flips again: orthogonal-type configurations can outperform simplicial ones for sufficiently large d, but not universally. The authors even build mixed constructions that blend a simplex with an orthogonal frame and outperform any purely orthogonal approach in d ≥ 3. The upshot is striking: there isn’t a one-size-fits-all best way to scatter unit vectors to suppress the probability of landing in the target ball; the best recipe depends on dimension, parity, and the precise target radius.
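As a rough numerical companion to this comparison, the sketch below estimates the probability of landing inside radius √2 in the plane for two simple stand-ins: an orthogonal-type configuration (half the vectors along each axis) and a simplex-type configuration (the three directions of an equilateral triangle, repeated). The specific configurations and the helper prob_in_ball are our own illustrative assumptions; the constructions actually analysed in the paper are more refined.

```python
import numpy as np

def prob_in_ball(vectors, r, trials=200_000, seed=3):
    """Monte Carlo estimate of P(||sum_i eps_i*v_i|| <= r)."""
    rng = np.random.default_rng(seed)
    V = np.asarray(vectors, dtype=float)
    signs = rng.choice([-1.0, 1.0], size=(trials, len(V)))
    return float(np.mean(np.linalg.norm(signs @ V, axis=1) <= r))

n = 24                                   # divisible by both 2 and 3
ortho = [(1.0, 0.0)] * (n // 2) + [(0.0, 1.0)] * (n // 2)
triangle = [(np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3)) for k in range(3)]
simplex = triangle * (n // 3)

r = np.sqrt(2.0)                         # the target radius in the plane
print("orthogonal-type:", prob_in_ball(ortho, r))
print("simplex-type:   ", prob_in_ball(simplex, r))
```

In quick runs of this toy comparison the simplex-type configuration comes out slightly lower at n = 24, in line with the two-dimensional picture described above, though of course a Monte Carlo experiment at one value of n proves nothing on its own.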

Taken together, these results sketch a landscape where the “best” vector configurations are not fixed templates but adaptable strategies that hinge on delicate arithmetic and geometric tradeoffs. The analysis threads through a web of combinatorial identities, asymptotic estimates for binomial sums, and intricate geometric constructions, culminating in a richer map of how random signs sculpt the plane (and beyond).

Why these questions matter beyond the chalkboard

The Littlewood–Offord family of problems sits at the crossroads of probability, combinatorics, and geometric discrepancy. The work here fits squarely into the reverse and anti-concentration camps, where researchers ask not just how big or small a probability is, but how small it can be under worst-case configurations. This is more than a mathematical curio: understanding how and when sums of random vectors cluster—or refuse to cluster—helps illuminate the behavior of random walks on high-dimensional spaces, the conditioning of random matrices, and the design of deterministic constructions that stubbornly resist random fluctuations.

A particularly human takeaway is the sense that randomness isn’t a monolith. Under the right constraints, the same random process can behave wildly differently depending on seemingly minor choices — parity, the ambient dimension, or whether you measure a disk of radius 1 or 1 plus a whisper of slack. The double-jump phenomenon is a dramatic example: a boundary so sharp that crossing it feels almost cinematic. And yet the work also shows that structure abounds inside randomness. The pairing ideas, the Swanepoel vector balancing, and the negative results for naive conjectures all reveal that the plane can, with clever steering, be coaxed into behaving in surprisingly predictable ways—even when a random sign is involved at every turn.

For researchers and theorists, the paper is also a map of open routes. The exact constants and thresholds in higher dimensions remain delicate, and several natural questions linger about the optimal constructions for all parity scenarios and all radii. The authors themselves sketch a path forward, pointing to refined vector balancing questions that connect to a broader family of discrepancy and geometry-of-numbers problems. If you’re the kind of reader who enjoys the thrill of a frontier, this is a paper that invites you to chase a line of mathematical inquiry from the plane to the far reaches of higher-dimensional geometry.

The work behind these ideas is anchored in real-world institutions and the collaborative spirit of modern mathematics. The study was produced by researchers at the University of Cambridge—L. Hollom, J. Portier, and V. Souza—who bring together deep probabilistic insight and geometric intuition. Their results stand as a vivid reminder that even in the seemingly dry realm of signed sums and unit vectors, the architecture of randomness can be sculpted with elegance, creativity, and a touch of audacity.

In the end, the double-jump is more than a curiosity about a boundary. It is a window into how balance emerges, how thresholds organize chaos, and how a community of ideas can reveal that even a problem born in the 1940s still has surprises left to tell.