Five Candidates Could Make Majority Rule Fairer Without a Single Winner

The Condorcet paradox shows how the cherished idea of a clear winner can crumble the moment people start ranking their preferences: a cycle in majority preferences can leave no single candidate who beats every rival in head-to-head matchups. In practice, that’s not just a quirk of theory; it’s a stubborn obstacle in real-world decisions, from political elections to grant committees and paper selections. A team of researchers from Purdue University and Melbourne Business School has reframed this challenge by asking a deceptively simple question: what if we stop obsessing over one winner and instead curate a small, carefully chosen slate that the majority can actually support? Their answer, formalized in the notion of (t, α)-undominated sets, isn’t a shortcut around the paradox so much as a tunable way to balance depth of preference with breadth of agreement.

Led by Haoyu Song, Thành Nguyen, and Young-San Lin, the work comes out of Purdue’s Department of Computer Science and the Daniels School of Business, with collaboration from Melbourne Business School. The paper delivers three big ideas. First, for any t and α, you can always assemble a small set of candidates, of size about t/α, that satisfies a robust, aggregate sense of fairness. Second, as you push t higher, the minimum necessary size slides toward t/α, and that bound is asymptotically tight. Third, in the special case t = 1, you can do notably better than previously known: a Condorcet-winning set of five candidates always exists, improving on earlier six-candidate bounds. In short, this isn’t just a marginal tweak; it’s a framework that could reshape how we think about collective choice when no single winner commands a majority.

A fresh frame for choosing more than one winner

Traditional voting theory often treats the outcome as a single sovereign choice. But the Condorcet paradox, in which A beats B, B beats C, and C beats A in a cycle, suddenly makes a lone winner feel brittle. Elkind and colleagues proposed Condorcet-winning sets: pockets of candidates such that no outsider is preferred by a majority to all members of the set. The Purdue–Melbourne team goes a step further by loosening and generalizing the comparison between an outsider and a candidate set. They define a (t, α)-undominated set: a subset C of candidates with at least t members, such that for every outsider a, no more than an α-fraction of voters rank a above their t-th most-preferred member of C. In other words, an outsider may impress some voters against the top tier of C, but it cannot break into the set’s top t for more than an α share of the electorate.
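To make the definition concrete, here is a small, unofficial Python check of the (t, α)-undominated property as this article describes it; the ballot format and the function name are mine, and the reading of “t-th most-preferred member” follows the gloss above rather than the paper’s formal statement.

```python
def is_t_alpha_undominated(ballots, committee, t, alpha):
    """Check the (t, alpha)-undominated property as described above.

    ballots   -- list of complete rankings; ballots[i] lists every candidate
                 from voter i's most preferred to least preferred
    committee -- the candidate set C, with at least t members
    An outsider "dominates" C for a voter if the voter ranks it above her
    t-th most-preferred member of C; the set passes if no outsider dominates
    it for more than an alpha fraction of voters.
    """
    committee = set(committee)
    candidates = set(ballots[0])
    n = len(ballots)
    for a in candidates - committee:
        dominated = 0
        for ranking in ballots:
            pos = {c: r for r, c in enumerate(ranking)}
            # Position of this voter's t-th most-preferred committee member.
            tth_best = sorted(pos[c] for c in committee)[t - 1]
            if pos[a] < tth_best:
                dominated += 1
        if dominated > alpha * n:
            return False
    return True


# Toy Condorcet cycle over {A, B, C}: with t = 1 and alpha = 1/2 this is the
# Condorcet-winning-set condition, which the singleton {A} fails.
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
print(is_t_alpha_undominated(ballots, ["A"], t=1, alpha=0.5))            # False
print(is_t_alpha_undominated(ballots, ["A", "B", "C"], t=1, alpha=0.5))  # True
```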

The innovation is twofold. First, t measures how deep into the set an outsider must reach, in a given voter’s eyes, to tilt the balance, acknowledging that large committees should be judged as an ensemble rather than by any single most-preferred member. Second, α sets the breadth threshold: how large a share of voters must prefer the outsider before it counts as outrunning the group’s top tier. That combination yields a spectrum of fairness notions that interpolate between single-winner outcomes and more expansive, consensus-friendly results. The punchline from the paper is crisp: for any fixed t and α, there exists a (t, α)-undominated set whose size is on the order of t/α. Moreover, as t grows, the optimal size tends toward t/α, which the authors prove is asymptotically tight. This reframes the problem from “find a winner” to “find a small, robust shortlist.”
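For readers who like symbols, the headline claim can be written schematically as follows; the notation (n voters, ≻ for a voter’s ranking, and c for committee members) is mine, not the paper’s.

```latex
% Schematic statement of the existence result described above (notation mine).
\textbf{Claim.} For every integer $t \ge 1$ and every $\alpha \in (0,1]$, any profile of
rankings by $n$ voters admits a candidate set $C$ with $|C| = O(t/\alpha)$ such that, for
every outsider $a \notin C$,
\[
  \frac{1}{n}\,\Bigl|\bigl\{\, i : a \succ_i c^{(t)}_i \,\bigr\}\Bigr| \;\le\; \alpha,
\]
where $c^{(t)}_i$ is voter $i$'s $t$-th most-preferred member of $C$. As $t$ grows, the
required size approaches $t/\alpha$, and this scaling is asymptotically tight.
```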

LEO, the ordinal engine behind the math

At the heart of their construction is a mechanism borrowed from economics, adapted to the ordinal world of ranked preferences. They deploy something called Lindahl Equilibrium with Ordinal Preferences, or LEO. In a classical Lindahl setup, voters exchange money (or tokens) to fund public goods, and the “prices” each person pays are tailored to their preferences so that, collectively, the provision of goods clears the market. The leap here is to replace deterministic prices with randomized incomes, yielding a lottery over candidates rather than a fixed, single allocation. The prices each voter faces shape which candidates they are most likely to end up with, but the randomness allows the whole system to respect ordinal rankings without needing precise cardinal utilities. This is a clever bridge between economic equilibrium ideas and the messy reality of ranked ballots.

In the paper, the authors introduce a threshold distribution I_{β,ε} that smooths out a simple two-point distribution (a spike at 1 with probability β, and 0 otherwise). With this threshold, they construct a Lindahl equilibrium in which the total budget B controls the size of the committee and the common allocation y over candidates acts like a lottery. A key step is showing that a candidate who falls within a voter’s top (1 − ε) share of the lottery faces a price near 1, making it expensive for outsiders to beat the committee’s top tier. This setup lets them bound how many voters could possibly prefer an outsider to the chosen set, which is the heart of establishing undominated sets of bounded size.
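The exact form of I_{β,ε} isn’t spelled out in this article, but for intuition, a smoothed two-point income distribution might look like the sketch below; the uniform smoothing over [1 − ε, 1] is my assumption, not the paper’s definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_smoothed_income(beta, eps, size):
    """One plausible reading of a smoothed two-point income distribution:
    with probability beta the income is spread uniformly over [1 - eps, 1]
    instead of sitting exactly at 1; otherwise it is 0.  The paper's actual
    I_{beta,eps} may be defined differently; this is intuition only.
    """
    spike = rng.random(size) < beta
    return np.where(spike, rng.uniform(1.0 - eps, 1.0, size), 0.0)

print(np.round(sample_smoothed_income(beta=0.4, eps=0.05, size=10), 3))
```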

The method is not a one-off calculation but a framework. To generalize to the (t, α)-world, the authors introduce a scaled version of the same idea, called s-SLEO, where consumers (voters) effectively distribute their attention across multiple top candidates. They then turn the fractional, lottery-based solution into a randomized committee via a standard dependent rounding technique that preserves marginal probabilities while maintaining negative correlation. The upshot is a principled path from an ordinal, probabilistic equilibrium to an actual, finite set of candidates with provable properties.
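The article doesn’t name a specific rounding routine, but the classic pairwise dependent-rounding scheme sketched below is one standard instance of the kind it describes: it preserves each marginal probability exactly and is known to keep the rounded indicators negatively correlated.

```python
import random

def dependent_round(x, rng=random.Random(0)):
    """Round a fractional vector x (entries in [0, 1]) to 0/1 values so that
    entry i ends up 1 with probability exactly x[i] and the number of ones is
    floor(sum(x)) or ceil(sum(x)).  This is the classic pairwise dependent
    rounding of Gandhi et al., offered as one standard scheme of the kind the
    article mentions, not necessarily the paper's own procedure.
    """
    x = list(map(float, x))
    eps = 1e-9

    def is_frac(v):
        return eps < v < 1.0 - eps

    live = [i for i, v in enumerate(x) if is_frac(v)]
    while len(live) >= 2:
        i, j = live[0], live[1]
        a = min(1.0 - x[i], x[j])   # how much mass can move from j to i
        b = min(x[i], 1.0 - x[j])   # how much mass can move from i to j
        if rng.random() < b / (a + b):
            x[i], x[j] = x[i] + a, x[j] - a
        else:
            x[i], x[j] = x[i] - b, x[j] + b
        live = [k for k in live if is_frac(x[k])]
    if live:                        # at most one fractional entry can remain
        k = live[0]
        x[k] = 1.0 if rng.random() < x[k] else 0.0
    return [1 if v > 0.5 else 0 for v in x]


# Fractional committee weights with expected size 2.5 round to 2 or 3 seats.
print(dependent_round([0.5, 0.7, 0.3, 0.4, 0.2, 0.4]))
```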

Five candidates, a Condorcet win, and how they prove it

The paper’s standout corollary for the t = 1 case is both elegant and surprisingly concrete: every election admits a Condorcet-winning set of five candidates. To see how they reach this, imagine drawing a single candidate according to the LEO lottery and then repeating the draw k times to form a set of k candidates. The authors carefully choose the threshold β and the smoothing ε so that, with high probability, the randomly drawn set covers the top preferences of a broad swath of the electorate. They then define a boundary candidate for each voter (the most-preferred candidate that voter could still afford at a given price threshold) and show that the total “price” assigned to any outsider across all voters cannot exceed the collective budget if the committee is well constructed. As a consequence, no outsider can outrun the committee among more than a β-fraction of voters, and β is tuned to match the undominated condition for k candidates.
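As a toy illustration of the draw-from-a-lottery-then-check idea, here is a small simulation; the uniform lottery, the random ballots, and the helper names are placeholders of my own, not the paper’s actual construction.

```python
import random

rng = random.Random(0)

def is_condorcet_winning_set(ballots, committee):
    """True when no outsider is preferred to every committee member by more
    than half of the voters (the t = 1, alpha = 1/2 condition from above)."""
    committee = set(committee)
    n = len(ballots)
    for a in set(ballots[0]) - committee:
        beats_all = 0
        for ranking in ballots:
            pos = {c: r for r, c in enumerate(ranking)}
            if all(pos[a] < pos[c] for c in committee):
                beats_all += 1
        if beats_all > n / 2:
            return False
    return True

# Toy illustration only: a uniform placeholder lottery stands in for the LEO
# lottery y, and the ballots are random permutations rather than real votes.
candidates = list(range(12))
y = [1 / len(candidates)] * len(candidates)
ballots = [rng.sample(candidates, k=len(candidates)) for _ in range(25)]
committee = set(rng.choices(candidates, weights=y, k=5))  # 5 draws, repeats possible
print(committee, is_condorcet_winning_set(ballots, committee))
```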

The result is both intuitive and powerful. It formalizes a kind of pragmatic shortlisting: a handful of well-chosen candidates can secure broad legitimacy even when a single winner would struggle to claim a majority. The five-candidate bound tightens the previous best-known six-candidate bound and, crucially, nails down a concrete, finite target that could be relevant for real-world settings—think conference committees, grant panels, or any process where a shortlist carries the decision power forward.

Generalizing to t and α: an asymptotically tight bound

The second pillar of the paper tackles the much broader question: how large must a (t, α)-undominated set be, for any t and α? The authors prove that there exists a committee of size roughly δ(t) · t/α, with δ(t) bounded between 1 and 4.75 for all integers t ≥ 2, and with δ(t) shrinking toward 1 as t grows. In other words, even as you ask the committee to be judged at greater depth (larger t), you can still guarantee an undominated set of modest size, and the bound becomes tight in the limit.
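To get a feel for the magnitudes, the snippet below plugs a few (t, α) pairs into the stated worst-case bound, using δ(t) ≤ 4.75 since only that range is given here, alongside the asymptotic t/α scale it approaches.

```python
from math import ceil

# Sizes guaranteed by the stated bound delta(t) * t / alpha, using the
# article's worst-case constant delta(t) <= 4.75 (the exact delta(t) is not
# given here), next to the asymptotic t / alpha scale.
for t, alpha in [(2, 0.5), (2, 0.1), (5, 0.1), (20, 0.1)]:
    worst_case = ceil(4.75 * t / alpha)
    asymptotic = ceil(t / alpha)
    print(f"t={t:2d}, alpha={alpha}: at most {worst_case:4d} candidates; t/alpha = {asymptotic}")
```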

To achieve this, they adapt the LEO framework into a scaled version called s-SLEO, in which the amount of candidate “consumption” each voter is allowed scales with t. They then apply a careful two-step rounding: first turning the fractional allocation into a lottery over sets of size roughly B (the budget), and then running an iterative rounding algorithm that whittles down the set of uncovered voters step by step while shrinking the budget. The result is a constructive method: you don’t just know such a set exists; you can approach it algorithmically, with guarantees on both size and fairness across the electorate.
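The details of that loop live in the paper, but its rough control flow (cover some voters, shrink the budget, repeat on whoever is left) can be sketched as follows; the candidate-selection step here is a naive greedy stand-in for the s-SLEO-plus-rounding step, so this conveys structure only, not the paper’s guarantees.

```python
# A deliberately simplified skeleton of the budget-shrinking loop described
# above.  The selection step, which in the paper comes from the s-SLEO
# equilibrium plus dependent rounding, is replaced by a naive greedy choice.
def iterative_shortlist(ballots, t, budget):
    committee = set()
    uncovered = set(range(len(ballots)))
    while uncovered and budget > 0:
        # Greedy stand-in: the candidate appearing most often in the top-t
        # positions of the still-uncovered voters' ballots.
        scores = {}
        for i in uncovered:
            for c in ballots[i][:t]:
                scores[c] = scores.get(c, 0) + 1
        committee.add(max(scores, key=scores.get))
        budget -= 1
        # Peel away voters whose top t already meets the committee.
        uncovered = {i for i in uncovered
                     if not committee & set(ballots[i][:t])}
    return committee


ballots = [["A", "B", "C", "D"], ["B", "D", "A", "C"],
           ["C", "A", "D", "B"], ["D", "C", "B", "A"]]
print(iterative_shortlist(ballots, t=2, budget=3))
```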

There’s a subtle but important caveat the authors address: simply concatenating t copies of a (1, α)-undominated solution does not automatically yield a (t, α)-undominated set. The voters’ preferences can align differently across the separate subcommittees, and the small pockets of voters who still prefer a given outsider to each subcommittee can be different pockets with little or no overlap, so their shares can pile up well beyond α. The clever move here is to use a scaled equilibrium and adaptive rounding to ensure that, across iterations, enough voters are covered in a compatible, cumulative way. The upshot is a robust, generalizable path to fair shortlists that scale gracefully with the depth of comparison (t) and the share of dissenting voters tolerated (α).

When α is small, the authors’ iterative method with s-SLEO yields the tightest guarantees; when α is larger, a more direct bound suffices. They also supply a lower-bound construction showing that the t/α scaling cannot be improved in general. Taken together, these results sketch a clean, if nuanced, picture: you can always assemble a compact, fair set of candidates whose size is responsive to how deeply you want to compare outsiders, and that size is essentially the best you can hope for in the worst case.

Why this matters beyond voting booths

This work isn’t about replacing elections with algorithms; it’s about asking a more nuanced question: when a single winner is elusive, what is the next best democratic gesture? The framework offers a principled justification for shortlists, juried panels, and other collective-choice settings where consensus matters but unanimity is impossible. By allowing the decision process to be modeled around a small, fair set of candidates rather than a lone champion, the method acknowledges diversity of preferences while still delivering decisive outcomes.

One of the most promising implications lies in the realm of participatory budgeting and resource allocation, where the “right” mix of projects often cannot be boiled down to a simple yes-or-no vote on a single item. The ordinal, equilibrium-based approach can accommodate how people actually think about trade-offs: ranking, weighing, and comparing sets of options. It also translates to modern decision-making systems in AI and machine learning, where ordinal data (rankings and preferences) are common inputs and where fairness and representativeness are critical design constraints. The authors explicitly connect their framework to these broader questions, suggesting a pathway from theoretical guarantees to practical, trustworthy decision-making tools.

And there’s a human dimension that can’t be overstated. Shortlists and multi-candidate outcomes can mitigate feelings of alienation when no single candidate feels like a true winner. If we measure success not by an ultimate victor but by a small, robust set that earns broad support, the decision process becomes more inclusive without becoming paralyzed by disagreement. It’s a way to acknowledge the reality of plural preferences while preserving the momentum needed to move decisions forward.

Roots, people, and the future of fair decision-making

Behind the mathematics is a practical, human-facing conviction: fairness isn’t a binary, all-or-nothing property. It lives in the balance between how deeply people compare options and how many people must agree for an outsider to be excluded. The work of Song, Nguyen, and Lin—anchored in Purdue University and Melbourne Business School—offers a bridge from abstract theory to usable, tunable criteria for fairness. The results don’t just push the boundaries of what is provably possible; they sketch a viable pathway for institutions that want to be both decisive and inclusive.

As with many advances in computational social choice, there’s work to be done before these ideas wander into everyday practice. Real-world elections and selection processes come with noisy data, strategic behavior, and a host of logistical constraints. Yet the core insight—that a small, carefully constructed set of candidates can deliver robust, broad-based legitimacy even when a single-winner outcome is out of reach—feels both timely and transformative. If you’ve ever wondered how to reconcile the desire for a fair process with the reality of diverse opinions, this research gives a language and a family of tools to talk about it more confidently.

Note on the institutions: the study is a collaboration anchored in the Purdue University Department of Computer Science and the Daniels School of Business, with involvement from Melbourne Business School. The authors are Haoyu Song, Thành Nguyen, and Young-San Lin, whose work here advances a unifying view of how we might select, together, in a world where consensus is precious but not always singular.