The Curious Game of Permutation Wordle and Its Optimal Shuffle

On a desk somewhere between a notebook and a cup of coffee, a puzzle about order starts to feel almost alive. It’s not a word you’re trying to spell, but a sequence you’re trying to arrange perfectly. You guess a lineup of 1 through n, and a game master tells you which positions match the secret order. The aim is simple in theory, but the path to the finish line is a lush playground for ideas about memory, strategy, and how math reveals the best way to chase a solution.

This is permutation Wordle, a playful successor to Wordle that comes with its own twist: the secret is a permutation, a reshuffled lineup of the numbers 1 to n. The paper by Rutgers University mathematician Aurora Hiveley digs into a big question behind the game: is there a simple, clever rule that always gets you to the correct order faster than anything else, at least for a few rounds of guessing? The work builds on the seed planted by Samuel Kutin and Lawren Smithline in their original Permutation Wordle idea, and it asks not just for a clever shortcut, but for a proving-ground test of optimality through the lens of generating functions and counting. In short, it’s a friendly puzzle that nudges us toward deep questions about how we learn, search, and improvise under partial information.

What makes this study especially human is its curiosity about a tiny, almost trivial rule that could unlock a surprising amount of efficiency. If you imagine playing along with a whiteboard and a chalky clock ticking in the background, you’ll feel the tension between a heuristic that feels right and a mathematical guarantee that it actually is the best choice—at least in the specific moments the paper studies. And that’s part of what makes math feel alive: a deceptively simple game can propel you into a landscape of patterns, proofs, and why certain ideas feel inevitable once you see them from the right angle.

Across these pages, Hiveley’s article isn’t just a record of a clever trick. It’s a map of the conversation between experiment and proof, between what we can try with a computer and what we can reason about with equations. The work is anchored in Rutgers University, where Hiveley analyzes the problem in dialogue with the original setup by Kutin and Smithline. The central thread is deceptively simple: if the first guess is the identity permutation [1, 2, …, n], what happens next depends on which positions turn out to be correct. If a position is correct, you lock it in; if it’s not, you shuffle the incorrect pieces according to a fixed rule. The big question becomes, which rule minimizes the number of guesses in the long run—and can one rule beat all others in key, short-term horizons?

What is permutation Wordle?

In permutation Wordle, the secret is a permutation of the set {1, 2, …, n}. A guess is another permutation of that same set. After each guess, the game tells you exactly which positions match the secret: those positions i for which your guess γ[i] equals the secret π[i]. The game ends when every position matches, that is, when your guess equals the secret permutation in full.
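
To make the rules concrete, here is a minimal sketch of the feedback step in Python. The function name and the 0-indexed convention are choices made for this illustration, not notation from the paper.

```python
def feedback(guess, secret):
    """Return the set of (0-indexed) positions where the guess matches the secret."""
    return {i for i in range(len(secret)) if guess[i] == secret[i]}

# Example with n = 5: the secret is a permutation of {1, ..., 5},
# and the first guess is the identity lineup.
secret = [3, 2, 5, 4, 1]
guess = [1, 2, 3, 4, 5]
print(feedback(guess, secret))  # {1, 3}: positions 2 and 4 (1-indexed) are correct
```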

To illustrate, imagine n = 5 and the secret is some permutation π of 1 through 5. Your first guess is the straightforward γ1 = [1, 2, 3, 4, 5]. If the game says that position 2 is correct, you now know that the second slot is fixed to be 2, and you’ll arrange your next guess accordingly. The core mental math then becomes: which rearrangement of the remaining entries will reveal more correct positions in the fewest further moves?

The paper walks through a concrete rhythm of play: after a guess, the strategy reshuffles the entries that were wrong. The inevitable question is about the nature of that shuffle. Is there a rule that, whenever some entries are wrong, pushes those wrong pieces one step to the right in a cyclic loop? That is the heart of the cyclic shift strategy, a simple and elegant mechanism that the paper abbreviates as CS.

The simple rule behind the chase

The cyclic shift strategy is exactly what it sounds like: take all the incorrect entries from a guess and move them one step to the right, leaving any correct entries alone. If you had a messed-up lineup of numbers, you’d rotate the bad pieces like a tiny, orderly carousel, but you never disturb the ones you already got right. The process is deterministic: the rule tells you what to do with every wrong entry, so the next guess follows automatically from the last one.
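
Continuing the Python sketch above, the rule can be written as a small function that rotates the entries sitting in the still-wrong positions one step to the right, wrapping the last one around to the first wrong slot. This is an illustrative rendering of the idea, not code from the paper.

```python
def cyclic_shift(guess, correct_positions):
    """Next guess under CS: rotate the entries in the wrong positions one step
    to the right (cyclically), leaving the correct positions untouched."""
    wrong = [i for i in range(len(guess)) if i not in correct_positions]
    values = [guess[i] for i in wrong]
    shifted = values[-1:] + values[:-1]          # last wrong entry wraps to the front
    next_guess = list(guess)
    for pos, val in zip(wrong, shifted):
        next_guess[pos] = val
    return next_guess

print(cyclic_shift([1, 2, 3, 4], set()))         # [4, 1, 2, 3]: everything shifts right
print(cyclic_shift([1, 2, 3, 4], {1, 3}))        # [3, 2, 1, 4]: the two wrong entries trade places
```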

Hiveley takes care to frame not just the rule but what it means for optimal play. A strategy is described as a sequence of components S[1], S[2], …, S[n], where each S[i] is a permutation of i items. The gist is intuitive: you use the “right-sized” version of the strategy for the subset of elements you still don’t know. If you correctly place some positions, you lock them in and you apply the strategy to the rest. The cascading question then becomes, does this simple right-shift rule keep delivering more correct matches earlier than any other process would, at least for a game that ends in two or three guesses?
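
To see how a general component plugs into the same loop, here is a hedged sketch in the same vein (the framing is mine, not the paper's): a component of size k is a permutation of {1, …, k}, and it dictates how the k wrong entries get rearranged. The cyclic_shift function above is the special case where the component is the one-step rotation [k, 1, 2, …, k-1].

```python
def apply_component(guess, correct_positions, component):
    """Build the next guess by rearranging the entries in the k wrong positions
    according to a size-k component (a permutation of {1, ..., k})."""
    wrong = [i for i in range(len(guess)) if i not in correct_positions]
    values = [guess[i] for i in wrong]
    next_guess = list(guess)
    for j, pos in enumerate(wrong):
        next_guess[pos] = values[component[j] - 1]   # component says which wrong entry lands here
    return next_guess

# The CS component of size 4 is the rotation [4, 1, 2, 3]:
print(apply_component([1, 2, 3, 4], set(), [4, 1, 2, 3]))   # [4, 1, 2, 3]
```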

One key detail the paper emphasizes is the necessity that the strategy components be derangements—permutations with no fixed points—so you don’t waste guesses by shuffling something that’s already in the right place. That constraint keeps the analysis honest: every move must learn something new about the secret permutation.
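
A quick sanity check of that constraint, written for this article rather than taken from the paper: the CS component of every size k ≥ 2 has no fixed points.

```python
def is_derangement(perm):
    """True if the permutation of {1, ..., k} (given as a list) has no fixed points."""
    return all(v != i + 1 for i, v in enumerate(perm))

for k in range(2, 8):
    cs_component = [k] + list(range(1, k))    # the one-step rotation on k items
    assert is_derangement(cs_component)
```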

To see CS in action, consider a short example with n = 4. Start with γ1 = [1, 2, 3, 4]. If none of these positions is correct, you guess γ2 = [4, 1, 2, 3], shifting all four entries one step to the right, with the last wrapping around to the front. If one of those new positions matches, you've learned a little more about π and you keep moving, locking in the correct ones and shuffling the rest. The question Hiveley pursues is whether this one-step-per-round shuffle is the most efficient path to the finish line, across all the possible hidden secrets the game might hold.
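
Putting the pieces together, a full game under CS can be simulated by looping the guess-feedback-shuffle cycle until every position matches. This sketch reuses the feedback and cyclic_shift functions above; the secret below is just an arbitrary example.

```python
def play(secret, shuffle_rule):
    """Play permutation Wordle with a fixed shuffle rule; return the number of guesses used."""
    n = len(secret)
    guess = list(range(1, n + 1))             # first guess: the identity permutation
    num_guesses = 1
    correct = feedback(guess, secret)
    while len(correct) < n:
        guess = shuffle_rule(guess, correct)  # reshuffle only the wrong entries
        num_guesses += 1
        correct = feedback(guess, secret)
    return num_guesses

print(play([3, 1, 4, 2], cyclic_shift))       # 3: identity, then [4, 1, 2, 3], then [3, 1, 4, 2]
```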

Seeing the strategy through generating functions

One of the paper's core tools is generating functions, a compact way to count how often a strategy wins in a given number of guesses. For a fixed strategy S of length n, the generating function f_S(x) encodes, in its coefficients, how many secret permutations can be guessed in exactly r guesses. The coefficient [x^r] f_S(x) tells you the count of secrets solvable in exactly r moves.
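
For small n these coefficients can be tabulated by brute force, simply playing every possible secret against the strategy. This enumeration is an illustration written for this article (building on the play function above), not the paper's method of proof.

```python
from collections import Counter
from itertools import permutations

def guess_distribution(n, shuffle_rule):
    """Coefficients of f_S(x) for small n: entry r-1 is the number of secrets
    that take exactly r guesses under the given shuffle rule."""
    counts = Counter(play(list(secret), shuffle_rule)
                     for secret in permutations(range(1, n + 1)))
    return [counts[r] for r in range(1, max(counts) + 1)]

print(guess_distribution(4, cyclic_shift))    # expected: [1, 11, 11, 1] for CS at n = 4
```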

Two facts sit at the foundation of the analysis. First, no matter what strategy you pick, exactly one secret, the identity permutation, matches the opening guess outright, so the coefficient of x^1 is always exactly 1. Second, for the second guess, the number of secret permutations you can finish off is tied to a classic family of numbers in combinatorics called Eulerian numbers. Those numbers count permutations by their “excedances,” the positions i where the entry π(i) is strictly larger than i. In the simplest case, the cyclic shift strategy famously aligns with these Eulerian patterns, giving the second-guess performance a clean, recognizable shape.
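
The statistic itself is easy to compute, and tallying it over all permutations of a small set recovers the Eulerian numbers. This is a standard combinatorial fact, shown here as a sketch reusing the imports above, not a computation drawn from the paper.

```python
def excedances(perm):
    """Number of positions i (1-indexed) whose entry is strictly larger than i."""
    return sum(1 for i, v in enumerate(perm, start=1) if v > i)

# Tally the 24 permutations of {1, 2, 3, 4} by excedance count:
tally = Counter(excedances(p) for p in permutations(range(1, 5)))
print([tally[k] for k in range(4)])   # [1, 11, 11, 1], the Eulerian numbers for n = 4
```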

What’s more interesting to Hiveley is what happens to the cubic coefficient, the coefficient of x^3, when you look at three-guess games. The paper doesn’t just compute a single number; it compares the CS strategy to other inductively constructed strategies, those that follow the same small-step logic but might differ in how they treat the last, largest block of elements. The main numerical pattern is this: for the three-guess horizon, CS tends to maximize the number of secrets you can solve, compared to other inductive strategies. In other words, the simple right-shift rule isn’t just cute; it appears to be optimally efficient for finishing the game in three moves, at least within the family of strategies the author studies.

The analysis is not purely experimental. Hiveley backs it with careful counting arguments, using the generating function’s coefficients to track how many secrets would terminate at each stage of the game. The paper even gives precise counts for certain special cases and shows where a rival strategy would lose ground. It’s a vivid illustration of how a seemingly tiny design choice, a shift to the right, can tilt the odds in a surprisingly substantial way when you look at the space of all possible secrets the game could hide.

What the main takeaways say about optimal play

There are a few layers to the conclusions, and they’re worth unpacking. First, within the world of cyclic strategies—where the rule keeps cycling a fixed way of permuting the wrong entries—CS is shown to be optimal for games that end in exactly three guesses, when the strategy length is at least four. In plain terms: among the natural, repeatable rightward-shift rules, CS does better at forcing a three-move finish than its cousins do.

Second, the work contrasts CS with a family of inductive strategies, where the small-step logic is applied recursively to subparts of the problem. The striking message is that the “best you can do” for three-guess endings is achieved by CS, and any inductive strategy that isn’t CS loses some ground. The paper even quantifies, in a precise, combinatorial way, how many secrets would still stubbornly resist a three-move finish under other inductive rules.

There’s a delightful twist in the mathematics: when the analysis digs into the worst-case cubic coefficient among some of these strategies, a familiar pattern from number theory emerges—the Lucas numbers surface in counting certain legal configurations. It’s a reminder that even a clean, game-like problem can whisper about broad mathematical structures that echo across topics as diverse as tilings, sequences, and the geometry of numbers.
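
For readers who have not met them, the Lucas numbers obey the same recurrence as the Fibonacci numbers but start from 2 and 1. The short sketch below shows the standard definition only; it is not a computation tied to the paper's specific counts.

```python
def lucas(n):
    """Return the first n Lucas numbers: 2, 1, 3, 4, 7, 11, 18, ..."""
    seq = [2, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(lucas(8))   # [2, 1, 3, 4, 7, 11, 18, 29]
```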

But the paper also stays clear-eyed about its scope. The results are proven for one-, two-, and three-move horizons, and for a fairly narrow family of strategies (largely based on cyclic or inductively defined cyclic components). The author notes that extending the same kind of proof to four or more moves would require new ideas, and that broadening the strategy space beyond cyclic permutations would likely shake things up. In other words, the story is compelling and rich, but not yet a closed universal theorem about all possible strategies in all possible lengths.

Why this matters beyond a neat puzzle

What makes this work compelling beyond the itch of a clever game is the way it blends hands-on experimentation with rigorous proof, and it does so in a way that feels human and approachable. The project doesn’t pretend that one rule rules them all; rather, it demonstrates a precise sense in which a particular heuristic emerges as especially potent for the three-move window. That kind of result is instructive for how we think about strategy design in real-world problems that look like a game: search problems, optimization under partial information, even certain kinds of automated testing where you must learn quickly from imperfect feedback.

The researchers emphasize that their conclusions are anchored in a specific, carefully defined model: the strategy components are fixed, the feedback is exact, and the game ends after a small, known number of moves. In the larger landscape of AI, optimization, and algorithm design, such models are the lab where ideas can be tested cleanly before attempting to generalize to messier, real-world settings. The work shows that sometimes a deceptively simple rule—rotate the wrong pieces and leave the right ones alone—can actually be the best possible move within a well-scoped horizon. It’s a reminder that elegance and effectiveness aren’t mutually exclusive; sometimes they reinforce each other in the exacting language of combinatorics.

For readers who crave a human touch to mathematics, the study also offers a template for how to interrogate a problem: start with a clear, tangible rule, push it through a formal accounting of outcomes, and then compare it against plausible alternatives in a structured, transparent way. It’s the math version of a well-researched opinion piece: not just a claim, but a chain of reasoning that others can inspect, challenge, and build on.

Where the conversation goes from here

Hiveley’s work is a compelling step, but not the final word. The paper is careful to flag the boundaries of its results and to point to opportunities for future exploration. Extending the analysis to four or more guesses, or broadening the class of allowed strategy components beyond cyclic shifts, will require new ideas and perhaps more computational horsepower. The Maple-based experiments cited in the study give a sense of how far the current techniques can push before the combinatorics become unwieldy, and that is telling of how hard—and how human—the problem remains as the horizon grows.

Still, the study shines a light on a small corner of a very large landscape: the art and science of strategy under partial information. It’s easy to overlook how much of everyday problem-solving looks a lot like playing permutation Wordle with the clock ticking. The insights here—about how a simple rule can harness structure to produce fast solutions, and about how to prove such claims with generating functions and careful counting—are exactly the kind of tools that can be repurposed for other puzzles and algorithmic challenges. The curiosity that drives this work is not about one game; it’s about the way math helps us see, with surprising clarity, which paths are the most promising when the trail ahead is foggy but the goal is crystal clear: make the fewest moves to get it right.

With a bow to the origins in Kutin and Smithline’s Permutation Wordle, Hiveley’s Rutgers-backed inquiry invites readers to see a familiar toy as a training ground for deeper patterns. The question it leaves in the air is as human as any: in a world of partial feedback, what’s the smartest way to move forward—and how often does the simplest rule happen to be the best one at hand?

Note: The study sits in a lineage of mathematical exploration around permutation puzzles and generating functions, anchored in the work of Samuel A. Kutin and Lawren M. Smithline. The author, Aurora Hiveley, is affiliated with Rutgers University and builds on their ideas to probe when a cyclic shift really is the optimal move for a three-move finish.