In the physics of many particles, energy and entropy often behave like faithful dancers who know their steps: they scale with the number of dancers, N, in clean, predictable ways. In short-range systems, doubling the crowd doubles the energy, roughly speaking. But if the dancers are linked by long-range choreography, every dancer can feel every other dancer from across the floor. That is the conundrum at the heart of long-range interacting systems.
In a paper led by Michael Kastner from Stellenbosch University and the Hanse-Wissenschaftskolleg, the question is not whether long-range forces exist but how to describe them in a way that respects both the physics and the mathematics of large ensembles. Enter a controversial yet widely used trick: tampering with the Hamiltonian by introducing an N-dependent prefactor, a moral equivalent of adjusting the microphone gain so the chorus can be heard. This adjustment is known as the Kac prescription, and it changes how energy scales with N, making energy extensive even when the underlying forces are not.
The point of the article is not to worship this trick, but to give it a clear physical interpretation. The authors argue that, for finite systems, rescaling can be translated back into the original description; for infinite systems, the choice of scaling matters—greatly—because it can create or erase the physics we care about, like phase transitions, or the timing of relaxation in quantum spins. The goal is to map the math onto reality, and to reveal when the rescaling is a harmless bookkeeping and when it becomes a map that leads us away from what we actually observe in the lab.
Why energy becomes non-extensive
To see the problem, imagine a gas where every particle pulls on every other with gravity-like strength. If you confine such a cloud in a box and pile up more and more particles, the total energy grows faster than the number of particles. In a three-dimensional world, the gravitational energy of a uniform cloud scales roughly like N^(5/3). That is the textbook signature of non-extensivity: energy does not grow linearly with N.
More generally, long-range interactions—pair potentials that fall off with distance r more slowly than 1/r^D in D dimensions—continue to couple distant parts of the system. When every pair of particles can influence each other substantially, summing up all those pairwise energies yields a total that does not scale like N, but like N^q with q > 1. This is the mathematical reason why standard thermodynamics, built around the idea that energy is extensive, runs into trouble.
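The crossover is easy to see numerically. The toy sketch below (our illustration, not taken from the paper) sums pairwise couplings 1/|i−j|^α along a one-dimensional chain: for α < D = 1 the energy per particle keeps growing with N, while for α > D it saturates.

```python
def total_energy(N, alpha):
    """Sum of pairwise couplings 1/|i - j|^alpha over a chain of N sites."""
    return sum(1.0 / (j - i) ** alpha
               for i in range(N) for j in range(i + 1, N))

for alpha in (0.5, 2.0):  # long-range (alpha < D = 1) vs short-range (alpha > D)
    per_particle_small = total_energy(100, alpha) / 100
    per_particle_large = total_energy(800, alpha) / 800
    print(alpha, per_particle_large / per_particle_small)
```

For α = 0.5 the ratio is well above 1 (the total grows like N^(2−α), so the energy per particle never settles down); for α = 2 the ratio stays close to 1, the hallmark of extensivity.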
To salvage the toolbox of thermodynamics, researchers have proposed the Kac prescription: multiply the interaction term by an N-dependent factor so the energy becomes extensive again. One version is the 1/N scaling for all-to-all interactions. The trick is not a physical nudging of nature but a mathematical device that allows us to apply the standard machinery to long-range systems, at least in controlled limits. The question Kastner and coauthors ask is what this device means physically and when it actually helps us understand real systems.
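For the all-to-all case the effect of the prescription can be checked by hand. A minimal sketch, assuming a ferromagnetic coupling where every aligned pair contributes −J (a conventional choice, not the paper's notation): without rescaling, the ground-state energy per spin grows linearly with N; with the 1/N prefactor, it approaches a constant.

```python
def ground_energy_per_spin(N, J):
    """Ground-state energy per spin of N all-to-all ferromagnetic Ising
    spins: all N*(N-1)/2 pairs are aligned, each contributing -J."""
    total = -J * N * (N - 1) / 2
    return total / N

print(ground_energy_per_spin(100, 1.0))        # -49.5: grows like -(N-1)/2
print(ground_energy_per_spin(100, 1.0 / 100))  # -0.495: Kac-scaled, tends to -1/2
```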
Enforcing extensivity with a scaling factor
A concrete example is the all-to-all Ising model, where each spin interacts equally with every other spin. Without rescaling, the energy scales like N^2; with an N-dependent prefactor of 1/N, the energy scales linearly with N, restoring extensivity. The mathematical relation is elegant: you can compute the partition function with respect to the rescaled Hamiltonian and then translate the results back to the original model by rescaling temperature and fields. In other words, calculations done with the rescaled system are not cheating—they can be mapped back to the original physics, provided you keep track of the rescalings.
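That mapping can be verified directly. A minimal sketch, assuming the common convention H = −(J/2)(M² − N) for the all-to-all Ising model, with M the total magnetization (this form is our choice of illustration, not the paper's notation): the partition function with the Kac-rescaled coupling J/N equals the unscaled one evaluated at the rescaled inverse temperature β/N.

```python
from math import comb, exp

def Z(N, beta, J):
    """Exact partition function of the all-to-all Ising model,
    H = -(J/2)(M^2 - N), where M = N - 2k is the total magnetization
    and comb(N, k) counts the states in each magnetization sector."""
    return sum(comb(N, k) * exp(0.5 * beta * J * ((N - 2 * k) ** 2 - N))
               for k in range(N + 1))

N, beta = 12, 0.3
z_scaled = Z(N, beta, 1.0 / N)    # Kac-rescaled coupling J/N at beta
z_original = Z(N, beta / N, 1.0)  # unscaled coupling at rescaled beta/N
# Identical by construction: the rescaling is bookkeeping, not new physics.
```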
This equivalence holds for finite systems, which means that, for a fixed N, you can work with the scaled Hamiltonian and still say something meaningful about the unscaled one. But the trick becomes delicate as N grows without bound. The very quantities that define a thermodynamic limit—the free energy per particle, the specific heat, the order parameters of phase transitions—can behave differently depending on whether you applied the rescaling before taking the limit.
The paper argues that you have to be careful about what you keep fixed as you run the limit. If you want to preserve the “competition” between energy and entropy that gives rise to phase transitions, you should let both the energy and the entropy scale extensively. If you instead redefine the free energy with a different prefactor, you risk washing out the interesting physics. This is not an abstract worry: the way you treat the long-range tail of the interaction determines whether a phase transition is visible in a finite system and whether the thermodynamic limit captures a meaningful story about large but finite samples.
Finite systems and the meaning of the thermodynamic limit
The authors illustrate the point with a striking figure: the specific heat of the all-to-all Ising model, computed with the original unscaled Hamiltonian and with the scaled one. When you don’t scale, the peaks in the specific heat drift to higher and higher temperatures as N grows, eventually leaving the finite-temperature world behind in the thermodynamic limit. The scaled version, by contrast, shows peaks that lock into a stable location as N grows, giving a nontrivial, well-defined limit. It is here that the Kac prescription starts to justify itself as a tool for understanding the physics of large systems.
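The qualitative behavior of that figure can be reproduced by exact enumeration. A sketch under an assumed Hamiltonian convention H = −(J/2)(M² − N) for the all-to-all model (system sizes and the temperature grid are arbitrary choices, not taken from the paper): the unscaled peak temperature drifts upward roughly like N, while the Kac-scaled peak stays pinned near a fixed temperature.

```python
import numpy as np
from math import lgamma

def peak_beta(N, J, betas):
    """Inverse temperature where the specific heat per spin peaks, for the
    all-to-all Ising model H = -(J/2)(M^2 - N), by exact enumeration over
    the magnetization sectors M = N - 2k (multiplicity N choose k)."""
    k = np.arange(N + 1)
    M = (N - 2 * k).astype(float)
    E = -0.5 * J * (M ** 2 - N)
    logw = np.array([lgamma(N + 1) - lgamma(i + 1) - lgamma(N - i + 1)
                     for i in range(N + 1)])
    C = np.empty(len(betas))
    for n, beta in enumerate(betas):
        x = logw - beta * E
        w = np.exp(x - x.max())        # overflow-safe Boltzmann weights
        Z = w.sum()
        e1 = (w * E).sum() / Z         # <E>
        e2 = (w * E ** 2).sum() / Z    # <E^2>
        C[n] = beta ** 2 * (e2 - e1 ** 2) / N
    return betas[np.argmax(C)]

betas = np.geomspace(0.005, 5.0, 2000)
for N in (20, 40, 80):
    t_unscaled = 1 / peak_beta(N, 1.0, betas)    # peak drifts to higher T with N
    t_scaled = 1 / peak_beta(N, 1.0 / N, betas)  # peak locks in near a fixed T
    print(N, round(t_unscaled, 2), round(t_scaled, 2))
```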
But there is a twist. Kastner does not claim the scaling recipe is universally correct. In some dynamical settings, enforcing extensivity can suppress the very long-range phenomena researchers want to study, such as slow relaxation or prethermalization. In a quantum spin chain with long-range interactions, using an alternative N-dependent scaling that does not enforce extensivity can yield a meaningful, nontrivial limit for the time evolution of observables. The upshot is subtle: the right scaling is context-dependent, depending on whether you care about equilibrium properties or dynamical evolution, and on what aspect of the system you seek to understand.
What this means for real-world physics
The upshot is practical: long-range systems are not pathological curiosities but windows into how collective behavior emerges when many components feel each other’s presence across space. If you want to use the standard thermodynamic toolkit, you often need that N-dependent rescaling to keep the math honest and the physics visible. But the rescaling is not a universal cure; it interacts with the microscopic details of the model and with what you measure in the lab. Real experiments involve finite, sometimes small, systems where finite-size effects can dominate. The rescaling helps us predict plausible trends as systems scale up, but it can also mislead if applied blindly to dynamical questions or to phases that hinge on the competition between energy and entropy.
The paper does not merely argue for or against a tool. It offers a principled way to interpret what the Kac prescription is doing: it is, in essence, a lens that lets us compare apples with apples, provided we remember that the apples are sometimes being measured at different temperatures or with different fields after rescaling. By keeping track of how β (the inverse temperature) and h (the magnetic field) transform under the rescaling, we can translate between the scaled world and the original one. The result is a clearer map from mathematical constructs to physical intuition, a map that helps experimentalists and theorists talk about large, long-range systems with less risk of misinterpretation.
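That bookkeeping is easy to check once a field is included. A sketch with an assumed all-to-all Hamiltonian H = −(J/2)(M² − N) − hM (an illustrative convention, not the paper's): rescaling the coupling to J/N at fixed β and h gives exactly the same partition function as the unscaled coupling at inverse temperature β/N and field Nh.

```python
from math import comb, exp

def Z(N, beta, J, h):
    """Partition function of the all-to-all Ising model in a field,
    H = -(J/2)(M^2 - N) - h*M, with M = N - 2k the total magnetization."""
    return sum(comb(N, k) *
               exp(beta * (0.5 * J * ((N - 2 * k) ** 2 - N) + h * (N - 2 * k)))
               for k in range(N + 1))

N, beta, J, h = 10, 0.4, 1.0, 0.2
z_scaled = Z(N, beta, J / N, h)      # Kac-rescaled coupling, original beta and h
z_mapped = Z(N, beta / N, J, N * h)  # original coupling, rescaled beta and field
# The two agree exactly: only beta and h need to transform under the rescaling.
```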
Where this might touch the real world
Gravitational systems, plasmas, and certain quantum simulators realize long-range interactions in the lab and in the cosmos. The careful accounting of energy scaling matters for predicting how such systems behave as they grow or compress. In physics education and in the design of experiments with trapped ions or ultracold atoms, thinking in terms of extensivity and rescaling helps set expectations for when a system will show a phase transition or a dramatic change in its dynamical behavior. It also reminds us that some questions, like how quickly a system forgets its initial state, may depend as much on how we measure time as on how the particles interact.
The author credits the work to Stellenbosch University and the Hanse-Wissenschaftskolleg, with the guiding hand of Michael Kastner. It’s a reminder that the frontier of long-range physics is not a single discovery but a conversation—between ideas, between finite experiments and infinite limits, between math and matter. And that conversation, like any long-range dance, requires both restraint and imagination to keep the steps honest.