The Nokia-era game Snake has always teased a bigger question behind its simple rules: what’s the longest path you can grow before you loop back on yourself? In a new twist on that pocket-sized classic, a trio of Dutch researchers from the University of Twente—Denise Graafsma, Bodo Manthey, and Alexander Skopalik—took the idea and fired it at the abstract world of graphs. Their paper, published in Mathematics of Operations Research, asks what happens when you transplant Snake from a screen to a connected graph. The result is a surprising blend of puzzle, computer science, and pure graph theory that shows how a child’s game can illuminate some of the deepest questions about planning, strategy, and complexity.
The study, conducted at the University of Twente in Enschede, Netherlands, reframes Snake as a two-player contest. One side—the snake—tries to grow until it occupies every vertex of the graph. The other side—the apple placer—gets to decide where the next apple (the thing the snake must eat to grow) appears, with the grim job of ensuring the snake cannot simply march through the graph unimpeded. Only one apple is present at a time; when the snake eats it, a new apple appears somewhere else that’s currently unoccupied. The central question is compact but thorny: does there exist a strategy for the snake to guarantee eventual victory on a given graph, no matter where the apple is placed? In other words, is the graph snake-winnable?
Beyond the warm glow of a clever puzzle, the authors’ answer cuts to the heart of computational complexity. They prove that deciding snake-winnability is NP-hard, even when you confine the game to grid graphs—the very shapes most people associate with classic puzzles and computer graphics. They also lay down sharp structural boundaries: Hamiltonian graphs—the graphs that contain a cycle visiting every vertex exactly once—always give the snake a win, but non-Hamiltonian graphs with certain structural quirks can make the snake stumble. The work is a vivid reminder that “easy to describe” rules can still hide the kind of hard decisions that keep computer scientists up at night.
The Game on Graphs
To understand the paper, imagine a connected graph G, a network of dots (vertices) connected by lines (edges). The snake occupies a simple path of vertices, the head at one end and the tail at the other. Each move slides the head forward along an edge to a neighbor; the tail retreats one step unless the snake has just eaten the apple, in which case the snake grows longer and the tail stays put. The head can move to a vertex that is either unoccupied or currently the tail; moving into any other occupied vertex would break the snake’s path and is forbidden. When the head lands on the apple, the snake grows and a new apple appears somewhere else on the graph. The snake wins if it can eventually occupy all vertices; the snake loses if it ever has no legal move or if it repeats a previously seen configuration at the same length—an elegant way of disallowing endless cycles that merely stall progress.
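The move rules above can be sketched in a few lines of Python. This is a minimal illustration, assuming the graph is given as an adjacency dict; the `Snake` class and its method names are my own, not from the paper.

```python
from collections import deque

# A sketch of the move rules: the body is a simple path, tail at the left,
# head at the right. Eating an apple lets the snake grow by one.
class Snake:
    def __init__(self, graph, start):
        self.graph = graph          # vertex -> set of neighbouring vertices
        self.body = deque([start])  # tail at index 0, head at index -1

    def legal_moves(self):
        head, tail = self.body[-1], self.body[0]
        occupied = set(self.body)
        # The head may enter an unoccupied vertex, or the current tail cell.
        return [v for v in self.graph[head]
                if v not in occupied or (v == tail and len(self.body) > 1)]

    def step(self, target, ate_apple=False):
        assert target in self.graph[self.body[-1]], "no edge from the head"
        if not ate_apple:
            self.body.popleft()     # tail retreats unless the snake just ate
        assert target not in self.body, "move would break the snake's path"
        self.body.append(target)    # head advances

# A 4-cycle: the snake can circle forever, growing whenever it eats.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
s = Snake(C4, 0)
s.step(1, ate_apple=True)   # grew to length 2
s.step(2)                   # plain slide: the tail leaves vertex 0
print(list(s.body))         # [1, 2]
```

Note that a deque makes both ends of the path cheap to update, which matches the game’s mechanics exactly: the head appends, the tail pops.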
Two players, two different goals. The snake fights to occupy every vertex; the apple placer acts as an adversary, selecting where the next apple will appear to thwart the snake’s growth. If the snake can force a win no matter where the apples pop up, the graph is snake-winnable. If the apple placer can foil the snake, the graph is not. It’s a two-player game, but not in the usual “pursuit” sense. There’s a tug-of-war between growth and obstruction, a dance of paths and possibilities that resonates with real-world planning problems where you must grow a project while managing a self-imposed or environmental constraint.
As the authors lay it out, several easy-to-check observations help orient the landscape. If the graph lacks a Hamiltonian path, the snake surely cannot win. If the graph is Hamiltonian (it has a Hamiltonian cycle), the snake can ride that cycle and eat its way to full occupancy. In bipartite graphs, the sizes of the two parts matter: if the two parts differ in size by more than one, snake-winnability collapses. And if some vertex has degree 1, the graph is not snake-winnable. These are not just cute footnotes; they sketch the skeleton of when a graph might be a friendly habitat for a hungry snake and when it’s a trap.
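Two of these necessary conditions are trivial to test by machine, which makes them handy sanity checks. Here is an illustrative sketch using the same adjacency-dict convention; note that the Hamiltonian-path condition is deliberately omitted, since testing it is itself NP-hard.

```python
from collections import deque

def has_degree_one_vertex(graph):
    # A degree-1 vertex rules out snake-winnability.
    return any(len(nbrs) == 1 for nbrs in graph.values())

def bipartition_imbalance(graph):
    """Return the absolute size difference of the two parts of a connected
    bipartite graph, or None if the graph is not bipartite (odd cycle)."""
    start = next(iter(graph))
    side = {start: 0}
    queue = deque([start])
    while queue:                      # BFS 2-colouring
        u = queue.popleft()
        for v in graph[u]:
            if v not in side:
                side[v] = 1 - side[u]
                queue.append(v)
            elif side[v] == side[u]:
                return None           # odd cycle: not bipartite
    part0 = sum(1 for s in side.values() if s == 0)
    return abs(part0 - (len(side) - part0))

# A path on 4 vertices: degree-1 endpoints, so not snake-winnable.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(has_degree_one_vertex(P4))   # True
print(bipartition_imbalance(P4))   # 0
```

An imbalance greater than 1 (say, a star with three leaves, where it is 2) would by itself certify that the apple placer wins.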
Complexity and the Theta Gadget
One of the paper’s bold moves is to pin snake-winnability to a level of computational hardness that researchers have long studied with similar abstract games. The authors prove that deciding whether a graph is snake-winnable is NP-hard, even when restricted to grid graphs—think of those tidy rectangular patterns that underlie much of computer graphics and puzzle design. The punchline is not just that the problem is hard; it’s that it remains hard even in a world that looks deceptively simple, where every vertex is a little square on a grid and every move feels almost mechanical.
How do they prove this? They lean on a precise structural fingerprint: a particular spanning subgraph called Θ(|V|−3, 2, 2). A spanning subgraph contains all the vertices of the original graph but only a subset of edges. The symbol Θ(p, q, r) describes two vertices connected by three internally disjoint paths of lengths p, q, and r. The authors show that if a bipartite graph with an odd number of vertices contains a Θ(|V|−3, 2, 2) spanning subgraph, then the snake has a strategy to win; if not, the apple placer can force a loss. That is, snake-winnability on these graphs is equivalent to the presence of a specific substructure—an elegant, almost architectural criterion for a problem that otherwise looks like a game of luck and cunning.
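To make the Θ(p, q, r) shape concrete, here is a small sketch that builds one as an adjacency dict. The function and vertex labels are my own illustration of the definition: two end vertices joined by three internally disjoint paths of the given lengths (counted in edges).

```python
def theta_graph(p, q, r):
    # Two end vertices "a" and "b" joined by three internally disjoint
    # paths; a path of length L contributes L-1 internal vertices.
    graph = {"a": set(), "b": set()}
    for name, length in (("p", p), ("q", q), ("r", r)):
        prev = "a"
        for i in range(length - 1):
            v = f"{name}{i}"
            graph.setdefault(v, set())
            graph[prev].add(v); graph[v].add(prev)
            prev = v
        graph[prev].add("b"); graph["b"].add(prev)
    return graph

# Theta(4, 2, 2) on 7 vertices: the shape Theta(|V|-3, 2, 2) takes when |V| = 7.
T = theta_graph(4, 2, 2)
print(len(T))          # 7
print(len(T["a"]))     # 3: the two end vertices have degree 3
```

Every internal vertex has degree 2 and the two ends have degree 3, which is the structural fingerprint the spanning-subgraph criterion looks for.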
This characterization becomes a fulcrum for their NP-hardness reduction. They start from the classic Hamiltonian cycle problem on grid graphs, a known NP-hard problem, and attach a gadget that transforms any grid graph into another graph G′. The magic is that G′ has a Θ(|V′|−3, 2, 2) spanning subgraph if and only if the original grid graph had a Hamiltonian cycle. Since deciding Hamiltonicity on grid graphs is NP-hard, snake-winnability inherits the same hardness. The paper outlines the reduction in careful, constructive steps, showing that the complexity is baked into the graph’s very topology rather than into some mysterious human-level cleverness.
One open frontier the authors flag is whether the snake problem sits neatly inside NP—that is, whether a winning strategy can be verified efficiently given a certificate. The current work stops short of confirming that; the existence of a compact description of a winning strategy remains an intriguing puzzle, much like many of the practical planning problems whose verifiability is nontrivial even when the answer is known.
Girth, Hamiltonians, and Boundaries
Beyond complexity, the paper digs into a geometric flavor of graphs: girth, the length of the shortest cycle. For non-Hamiltonian graphs with girth greater than six, the authors prove a striking no-win result. If the graph does not admit a Hamiltonian cycle and all its cycles are comparatively long, the apple placer has the upper hand once the snake reaches length |V|−3. The argument is a careful braid of cycle lengths, the snake’s expanding footprint, and the fact that long cycles constrain how the snake can thread itself through the graph without creating a shortcut that would let it finish the job.
That may sound abstract, but the intuition is relatable: as the snake grows, the “budget” of room it has to maneuver around itself shrinks. If the graph’s shortest cycles are too long, the snake cannot thread a flexible route to each new apple and must eventually stumble into a position the apple placer can force. The bound g(G) > 6 is not an arbitrary wall; the authors show it is tight by exhibiting a non-Hamiltonian graph with girth exactly 6 that remains snake-winnable, underscoring how delicate these thresholds can be. In short, the geometry of cycles—how short or long they are—tells a story about whether a snake can outmaneuver an adversary that keeps throwing apples at it.
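Unlike snake-winnability itself, the girth is cheap to compute. A standard approach, sketched below on the same adjacency-dict graphs, runs a BFS from every vertex and records the shortest cycle closed by a non-tree edge; the minimum over all starting vertices is exactly g(G).

```python
from collections import deque

def girth(graph):
    # Shortest cycle length, or infinity for a forest. O(V*E) overall.
    best = float("inf")
    for start in graph:
        depth = {start: 0}
        parent = {start: None}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in depth:
                    depth[v] = depth[u] + 1
                    parent[v] = u
                    queue.append(v)
                elif parent[u] != v:
                    # A non-tree edge closes a cycle through earlier layers.
                    best = min(best, depth[u] + depth[v] + 1)
    return best

# A 6-cycle has girth 6, right at the paper's g(G) > 6 boundary.
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(girth(C6))   # 6
```

Each detected value is the length of some closed walk, hence at least the girth, and the BFS started on a shortest cycle detects that cycle exactly, so the minimum is correct.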
The chapter on girth adds texture to the earlier complexity results. It shows that the question is not merely “can the snake win?” but “under what structural constraints can the snake adapt its strategy as it grows to nearly fill the graph?” The interplay between cycle lengths and the snake’s own growing length becomes a kind of game of proportion, not just of logic. And while Hamiltonian graphs guarantee victory, the world beyond Hamiltonicity is where the math becomes visibly delicate, almost architectural in its requirements for the path the snake must trace.
Vertex Connectivity and a Single Critical Point
Graphs that have a cut vertex—a single point whose removal disconnects the graph—pose a different kind of challenge. The authors show that in a world with vertex-connectivity 1, there is essentially only one “shape” of snake-winnable graph: two components G1 and G2 that become complete when you attach the cut vertex v back in (G1 + v and G2 + v are both complete), and the two components have the same size, at least two. In that exact configuration, the snake can shuttle between the two halves around the cut vertex, eventually occupying everything, as long as the sizes balance perfectly and both halves are complete when hooked to the cut vertex.
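That characterization is mechanical enough to check directly. The sketch below, with illustrative names of my own, takes a graph and a vertex v that is assumed (not verified) to be a cut vertex, splits off the components of G − v, and tests the paper’s winnable shape.

```python
from itertools import combinations

def is_complete(graph, vertices):
    # Every pair of the given vertices must be joined by an edge.
    return all(b in graph[a] for a, b in combinations(vertices, 2))

def winnable_cut_vertex_shape(graph, v):
    # Find the connected components of G - v by repeated DFS.
    remaining = set(graph) - {v}
    components = []
    while remaining:
        stack, comp = [next(iter(remaining))], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in graph[u] if w in remaining)
        components.append(comp)
        remaining -= comp
    if len(components) != 2:
        return False
    g1, g2 = components
    # Equal sizes (>= 2), and both halves complete once v is attached.
    return (len(g1) == len(g2) >= 2
            and is_complete(graph, g1 | {v})
            and is_complete(graph, g2 | {v}))

# Two triangles sharing vertex 0: both halves plus 0 form a complete K3.
bowtie = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {0, 3}}
print(winnable_cut_vertex_shape(bowtie, 0))   # True
```

The bowtie graph above is the smallest instance of the winnable shape; shrinking either triangle to a single edge breaks the size balance and hands the game to the apple placer.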
The heart of the argument is a delicate accounting of how many vertices remain unoccupied in each subgraph and how the apple’s choices can influence the snake’s reach. If the two components differ in size, or if the attached subgraphs aren’t complete, the apple placer can steer the game to a loss for the snake. In other words, when a graph collapses onto a single hinge point as its bottleneck, the snake’s fate becomes entirely a function of this hinge and the uniform fullness of the surrounding blocks. The authors formalize this with a sequence of lemmas that show that symmetry in size and completeness is the only path to a winning strategy in such a sparse connectivity regime.
Put differently, the 1-vertex-connectivity world is a laboratory where the graph’s weak link dictates the entire outcome. It’s a reminder that sometimes global questions hinge on local structure: a single vertex can determine whether a game on a graph is winnable or not, turning a broad landscape into a narrow passageway of possibilities.
Why This Changes How We Think About Puzzles and Planning
Beyond the delightful aesthetics of a clever puzzle, this work speaks to a broader truth: the boundary between “easy to describe” rules and “hard to solve” outcomes is often a matter of structure, not scale. Snake on a graph is a deceptively simple game, yet the paper reveals a web of deep dependencies—on Hamiltonicity, on the size parity of bipartitions, on girth, on connectivity—that together decide whether growth can triumph over obstruction. In computer science terms, this is a vivid demonstration of why certain decision problems resist efficient universal solutions even when their input looks like a grid world from a video game.
There’s a cultural resonance here, too. The study gently borrows from classic graph problems—Hamiltonian paths, grid graphs, and the famous Θ subgraphs—and translates them into a living game that can be played out with apples and a snake. The result is a bridge between theory and play: a reminder that the same mathematics that underpins routing in a network or planning a robot’s path can also describe the suspenseful arc of a puzzle where every move changes the future possibilities. The authors’ careful mapping from game rules to graph properties is not just clever; it’s a blueprint for how to extract meaningful insights from simplified abstractions.
From a practical standpoint, the paper reframes how we think about algorithmic planning and game theory. If a problem as familiar as a single-player Snake can encode NP-hardness on grid graphs, what does that imply for more realistic, real-world planning problems—such as autonomous navigation in uncertain environments, or even strategic game AI where the “apple” can be adversarially placed? It’s a reminder that optimal play in complex environments often mirrors the structure of the underlying graph: the network’s geometry can be as decisive as the rules themselves.
What We Learn About the Nature of Hard Problems
In the end, the paper is as much about what we don’t know as what we do. The authors show that, despite a simple setup, deciding snake-winnability is as hard as deciding whether a Hamiltonian cycle exists in grid graphs. That means the problem is not just difficult in practice; it is hard in principle. They also pose honest, open questions: is the problem in NP? Can we extend the complete characterizations beyond odd-sized bipartite graphs or beyond vertex-connectivity 1? What about even-sized grids, or higher connectivity? These are not mere curiosities; they map directly onto enduring questions in graph theory and computational complexity about how global properties emerge from local rules.
For now, the University of Twente team has given us a vivid framework to think about game-theoretic puzzles as battlegrounds where complexity and geometry collide. Their work invites us to consider how a trivial-seeming pastime—Snake’s chase for apples—becomes a sophisticated lens through which to examine the limits of computation, strategy, and the surprising ways structure shapes fate.
Ultimately, this is not just about a game. It’s about a way of looking at the world: when do simple rules yield predictable outcomes, and when do the shapes of the spaces we inhabit—whether a grid, a labyrinth, or a vast network—conspire to make the impossible feel possible or the possible feel impossible? The study from the University of Twente doesn’t just tell us the answer to whether a snake can win on a given graph. It tells us where the line lies between strategy and structure, and how that line shapes the kinds of problems we can hope to solve—and which ones might forever resist a neat, efficient solution.