A Square Packing Riddle That Teaches Infinity

The unit square is a stage for a deceptively simple game that mathematicians have been playing for decades: how many squares can you cram into a square, and what does the best you can do say about the geometry of packing itself? The question isn’t about making a jigsaw puzzle with a satisfying snap. It’s about a clean, crisp quantity: the maximum total length of the sides of n non-overlapping little squares that fit inside a unit square, denoted f(n). With a little inequality trickery, there’s a quick argument that tells you something strong about any n that’s a perfect square: f(n^2) = n. But once you move beyond perfect squares, the puzzle gets thornier, and the famous Erdős conjecture steps into view: is f(n^2 + 1) always equal to n as well? The short version is that this conjecture remains open, a tidy hope that has resisted every attempt to pin it down with a general proof. The long version is a doorway to a surprising equivalence that folds this stubborn geometric question into the behavior of an infinite series.

The paper you’re reading about—authored by Anshul Raj Singh and posted to arXiv—seeks not to settle the packing problem with a single clever construction, but to reveal a deeper structure that binds the geometry of how we place squares to the world of infinite sums. The author openly notes that the real hinge is an equivalence: the conjecture f(n^2 + 1) = n for all n is true if and only if a certain series converges. In other words, you don’t just search for a sneaky packing; you study how the tiny gaps between what you can achieve for k^2 + 1 squares accumulate across all k. If those gaps add up to a finite amount, the conjecture holds everywhere; if they don’t, the conjecture falters somewhere along the line. It’s a bridge between a clean, finite puzzle and the unsettling murmur of infinity. What matters for the reader is the idea: a geometric conjecture and a convergent series are the same object viewed from different angles.

A quiet equivalence that reshapes the problem

At first glance, packing problems feel tactile—tiny squares playing Tetris on a bounded board. Yet the author formalizes a precise statistic: f(n) is the largest possible sum of the side lengths of n non-overlapping squares that fit inside a unit square. The leap from intuition to proof rests on a simple, almost austere observation: the Cauchy–Schwarz inequality shows that f(n^2) ≤ n, and the obvious n-by-n grid of squares of side 1/n attains that bound, so f(n^2) = n exactly. That’s a clean, if somewhat magical, equality: when the count is a perfect square, the plain grid is unbeatable in total side length. The leap Erdős proposed, though, is to the near-miss case: what happens when you add one more square? f(n^2 + 1) is at least n, but can it ever exceed n? Erdős conjectured no: f(n^2 + 1) still equals n for every n. That is the heart of the conjecture, a boundary question about the geometry of space and the arithmetic of counts.
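Written out, the Cauchy–Schwarz step takes only a line. If n^2 non-overlapping squares with side lengths s_1, …, s_{n^2} sit inside the unit square, their total area is at most 1, so

```latex
\sum_{i=1}^{n^2} s_i \;\le\; \sqrt{n^2}\,\sqrt{\sum_{i=1}^{n^2} s_i^2} \;\le\; n\sqrt{1} \;=\; n,
```

while the n-by-n grid of squares of side 1/n has total side length n^2 · (1/n) = n, so the bound is attained and f(n^2) = n.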

Singh’s central move is to encode the “gap” between what you can achieve and the elegant value k into a single function, epsilon(k) = f(k^2 + 1) − k. This is the amount by which the best packing of k^2 + 1 squares can exceed the clean value k. The punchline is elegant and surprisingly clean: the conjecture f(k^2 + 1) = k for all k holds if and only if the infinite sum of those gaps, sum_{k≥1} epsilon(k), converges. And there’s a second, perhaps more surprising, line of consequence: if epsilon(n) is zero for infinitely many n, then it must be zero for all n. In other words, a scattered set of perfect little packings implies a completely rigid global rule. The idea that a global truth about all n can be sealed by the convergence or non-convergence of an infinite series is not just cute algebraic bookkeeping—it’s a structural insight about how local optimality (for each k^2 + 1) can constrain global behavior across all scales.
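Stated compactly, with epsilon(k) = f(k^2 + 1) − k as above, the equivalence reads

```latex
f(k^2 + 1) = k \ \text{for all } k
\quad\Longleftrightarrow\quad
\sum_{k \ge 1} \varepsilon(k) \ \text{converges},
```

and since each epsilon(k) is nonnegative, convergence of the sum is exactly the statement that the gaps are, in total, finite.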

The two theorems that turn the problem into a tonal instrument

The first major result is crisp and almost counterintuitive in its simplicity. If epsilon(k) = 0 for infinitely many k, then epsilon(k) must be 0 for all k. In the language of the paper, zeros of the gap recur with a rigidity that forces the entire sequence to vanish. It’s the kind of statement that makes you pause and re-check: how can a property that seems to be merely about select integers cascade into a universal rule? The answer lies in a careful combinatorial estimate, a kind of balance sheet for packing that compares a grid-based construction with the maximum total side length achievable under two sub-scenarios, then uses a clever inequality to propagate a local equality into a global one. The proof uses a lemma that stitches two supposedly independent packings into a larger one on a bigger grid and shows that certain sums must obey a bound. If at infinitely many k the bound is tight, the same tightness must propagate to all k, because otherwise you would be able to push the total further than the grid-based argument allows. It’s a classic move in discrete geometry: you constrain the local to control the global by enforcing a conservation law of sorts across scales.

The second theorem sharpens the intuition in the other direction. If there exists some N for which f(N^2 + 1) exceeds N, i.e., epsilon(N) > 0, then epsilon(k) cannot decay too quickly as k grows. In fact, epsilon(k) must be at least on the order of 1/k for all k. This is a telling fingerprint: a single counterexample at one scale would echo through every larger scale, forcing a non-negligible accumulation of gaps. The intuition here is that once you can do better at one scale, you can rescale and replicate that advantage to build a chain of packings whose benefits persist, and when you translate those benefits into the language of epsilon, you’re forced into a lower bound that decays only as 1/k. The corollary ties the two theorems into a neat equivalence: the global conjecture holds if and only if the sum of all gaps converges. If the gaps don’t converge in total, the conjecture fails somewhere along the line.
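No geometry is needed to see why this dichotomy is decisive; only the harmonic series. The sketch below is an illustration, not the paper’s argument: it checks that the Cauchy–Schwarz ceiling sqrt(k^2 + 1) − k keeps each individual gap below 1/(2k), and that a hypothetical lower bound c/k of the shape the second theorem provides forces the partial sums to grow without bound, while gaps decaying like c/k^2 would instead sum to a finite total.

```python
import math

def gap_ceiling(k: int) -> float:
    """Cauchy-Schwarz upper bound on epsilon(k):
    f(k^2 + 1) <= sqrt(k^2 + 1), hence epsilon(k) <= sqrt(k^2 + 1) - k."""
    return math.sqrt(k * k + 1) - k

def partial_sum(gap, n: int) -> float:
    """Partial sum of a hypothetical gap profile over k = 1..n."""
    return sum(gap(k) for k in range(1, n + 1))

# Each individual gap is tiny: epsilon(k) <= sqrt(k^2 + 1) - k < 1/(2k).
for k in (1, 10, 100, 1000):
    assert gap_ceiling(k) < 1 / (2 * k)

# A lower bound of c/k forces divergence: partial sums track c * ln(n),
# so they exceed any fixed target once n is large enough.
c = 1.0
assert partial_sum(lambda k: c / k, 30_000) > 10

# By contrast, gaps decaying like c/k^2 stay summable (limit c * pi^2 / 6).
assert partial_sum(lambda k: c / k**2, 100_000) < c * math.pi**2 / 6
```

So each gap sits in a window of width roughly 1/(2k), yet those windows sum to a divergent series, which is exactly why the convergence question cannot be settled by bounding individual gaps alone.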

Why this link to an infinite series matters

The move to tie a finite, geometric question to the convergence of a series might feel like a clever trick, but it is a meaningful reframing. In many areas of mathematics, the truth of a complex global statement lives not in a single object you construct, but in the behavior of a whole family of objects as you scale up. Here, the family is the sequence of packings for k^2 + 1 squares, and the object you study is the cumulative deficit epsilon(k). The author shows that tracking how large those deficits can be, on average, across all k, is enough to decide the outcome for every single k. If the deficits pile up to a finite total, the conjecture survives. If they don’t, the conjecture is doomed, broken by the cumulative geometry of countless little packings.

To connect geometry to a familiar analytic entity, the paper uses a classic result due to Halász from 1984, which gives a lower bound for f(k^2 + 2c + 1) in terms of k and c. This connects the “near miss” cases to a baseline growth rate, ensuring that the series arithmetic isn’t playing in a vacuum. The Halász bound acts like a floor, guaranteeing that as you tilt the packing problem in slightly different directions (adding a constant 2c + 1 to k^2), you retain a quantifiable amount of structure. That structure is what lets Singh push the argument from local observations at a few special values of n to a universal claim about all n. In other words, a robust backbone exists for how f behaves near perfect squares; the series convergence then decides whether that backbone can extend unbroken all the way to every n.

What this means for the Erdős conjecture—and for how math grows up

The Erdős packing conjecture has a long pedigree. Paul Erdős was famous for posing questions that look deceptively simple but open up into huge landscapes of math, often connecting geometry, combinatorics, and number theory in surprising ways. The conjecture sits at the confluence of a very concrete packing problem and a more abstract question about how often a near-optimal arrangement can fail. Singh’s result reframes that near miss as a question about the fate of a particular infinite series. It’s the mathematical equivalent of turning a stubbornly hard puzzle into a single decisive test: if you can show the corresponding series converges, you’ve won the game globally; if you can demonstrate divergence, you’ve found a counterexample somewhere along the thread.

What’s exciting about this is not merely the clever equivalence, but the new pathway it opens for attack. Previously, attempts to prove Erdős’s conjecture could feel like hunting for a single needle in a haystack—construct a packing and pray it extends the bound, or craft a clever obstruction and hope it generalizes. Now, researchers have a diagnostic tool: examine the series of gaps epsilon(k). If you can prove, for example, that this series must converge under all plausible packing constraints, you could establish the conjecture for all n. Conversely, a counterexample would produce a tangible, testable lower bound on epsilon(k) that contradicts convergence. In effect, the problem migrates from “Can we craft a universal packing argument?” to “Does the sum of small defects vanish or accumulate?” The shift is more than cosmetic; it reframes a stubborn boundary in a language that invites new techniques—analytic estimates, asymptotics, and perhaps even computational experiments that probe how f behaves for larger and larger k.

The practical upshot: a blueprint for future problems

Beyond its intrinsic beauty, Singh’s equivalence offers a blueprint for how to tackle other stubborn questions in geometry and combinatorics. When a discrete maximization problem resists a direct construction, it may still yield to an indirect lens: look at the sequence of best you can do as you vary the problem’s parameters, package that as a series of small deficits, and ask whether those deficits accumulate. If they do so in a controlled way, perhaps the global truth follows. If they don’t, a breakdown occurs in a predictable manner. The strategy echoes a broader trend in modern mathematics: turn a hard, local optimization problem into a global, asymptotic one, and let powerful tools from analysis, probability, or number theory do the heavy lifting.

Singh’s narrative is careful not to pretend the result settles Erdős’s conjecture. It does something more subtle and, in many ways, more powerful: it reveals an exact equivalence that turns the conjecture into a question about convergence, a question that invites a different set of tools and a different culture of collaboration. For curious minds, this is a reminder that progress in mathematics often comes not from a single “aha” moment but from folding a problem into a different framework where new ideas can breathe. If a future mathematician can either show that the gap series must accumulate without bound or demonstrate that it must always stay finite, they won’t just move one theorem forward; they will tilt the entire landscape of geometric packing and its connection to analysis.

As readers, we’re left with a sense of how a puzzle about tiny squares relates to the heartbeat of infinity. The unit square becomes a model for how local constraints propagate globally, and a purely geometric question becomes a question about the behavior of an infinite sequence. It’s not a metaphor so much as a translation: a problem about space and arrangement is recast as a study of a series’s fate under the gaze of convergence. If you love the elegance of mathematics, that’s the kind of transformation you live for—a moment when a stubborn wall reveals a hidden door, and the door opens onto a corridor that might lead to a solution, or at least to a richer understanding of what a solution could look like.

Given the depth of this equivalence, the road ahead feels both clearer and more open-ended. The natural next steps will likely involve pushing numerical experiments further to inspect how often f(k^2 + 1) equals k in practice and to test how the gaps epsilon(k) accumulate as k grows. The challenge will be to translate those empirical observations into rigorous bounds that either force convergence or demonstrate divergence. And if a future breakthrough finds a k for which f(k^2 + 1) > k, the guarantee of epsilon(k) = Omega(1/k) would become a lighthouse for exploring the explosion of deficits across all larger k. Either way, the journey from a packing puzzle to a convergent series has given the Erdős conjecture a vitality it perhaps didn’t have before: a road map, a set of signposts, and a reminder that in mathematics, as in life, the tiniest corner can illuminate the whole structure.
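Those experiments have an easy concrete starting point in the lower half of the problem. The snippet below is a toy sketch, not anything from the paper, and it assumes axis-aligned squares (which suffices for a lower bound): k^2 grid squares of side (1 − delta)/k plus one tiny square in the leftover strip give k^2 + 1 disjoint squares whose total side length approaches k as delta shrinks, witnessing f(k^2 + 1) ≥ k and hence epsilon(k) ≥ 0.

```python
def packing(k: int, delta: float):
    """k^2 grid squares of side (1 - delta)/k, plus one tiny square in the
    leftover strip: a family of k^2 + 1 disjoint squares whose total side
    length approaches k as delta -> 0."""
    s = (1 - delta) / k
    squares = [(i * s, j * s, s) for i in range(k) for j in range(k)]  # (x, y, side)
    squares.append((1 - delta, 0.0, delta / 2))  # tiny square in the free strip
    return squares

def total_side(squares) -> float:
    return sum(side for _, _, side in squares)

def disjoint_and_inside(squares) -> bool:
    """Check containment in the unit square and pairwise non-overlap of
    interiors for axis-aligned squares (touching edges are allowed)."""
    for x, y, s in squares:
        if not (0 <= x and 0 <= y and x + s <= 1 and y + s <= 1):
            return False
    for i, (x1, y1, s1) in enumerate(squares):
        for x2, y2, s2 in squares[i + 1:]:
            if x1 < x2 + s2 and x2 < x1 + s1 and y1 < y2 + s2 and y2 < y1 + s1:
                return False
    return True

k, delta = 4, 1e-3
sq = packing(k, delta)
assert len(sq) == k * k + 1
assert disjoint_and_inside(sq)
assert total_side(sq) > k - k * delta  # total side length -> k as delta -> 0
```

Actually beating k for some k^2 + 1, rather than merely approaching it, is exactly what a counterexample would require; a verified packing with total side length strictly above k would settle the conjecture in the negative.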

In the end, the study’s real achievement may be less about pinning down f(n^2 + 1) today and more about showing us how to think about difficult problems tomorrow. It invites us to see geometry as a bridge to analysis, to treat a geometric deficit as a signal rather than a failure, and to recognize that infinity often speaks through the quiet arithmetic of a series. If that’s not a productive way to engage with math, what is?

Institutional note: The author of the paper is Anshul Raj Singh. The available abstract and excerpt do not specify a particular university affiliation. The essential takeaway, however, is not the author’s institutional home but the bridge he builds between Erdős’s square packing conjecture and the convergence of an infinite series, a bridge that could carry future work in unexpected directions.