The Moon near its poles is a theater of light and shadow, a place where perception is as much a chemistry of photons as a test of courage. The Sun lingers near the horizon, casting shadows that stretch like dark fingerprints across crater floors, while bright patches blaze with unrelenting glare. For robotic explorers, that combination—very little ambient light and long, shifting shadows—is a real-life laboratory in high dynamic range (HDR) imaging, where the usual tricks of photography and computer vision can fail at the worst possible moment. This is the kind of environment that makes engineers rethink how a robot should see, navigate, and decide where to steer its wheels next.
Enter SPICE-HL3, a new, deeply ambitious dataset built to simulate high-latitude lunar landscapes with unprecedented realism and scale. The project, developed at the LunaLab facility of the SnT – University of Luxembourg, with leadership from the Space Robotics Lab at the University of Malaga, brings together a battery of cameras, inertial sensors, wheel odometry, and, for the first time in this domain, a single-photon camera. Lead author David Rodríguez-Martínez and colleagues have stitched together 88 sequences across seven rover trajectories, under four lighting conditions—from dawn to night—totaling nearly 1.3 million images. This is not a toy dataset. It’s a bridge between Earth-bound experiments and future lunar missions, a way to stress-test perception and navigation in a setting that makes Mars look comparatively forgiving.
What SPICE-HL3 is and how it was built
The LunaLab facility in the University of Luxembourg’s Space Robotics Group mirrors a lunar terrain that is maddeningly photogenic in its difficulty: an 11-by-8-meter field filled with basaltic gravel, rocks of various sizes, and ground reflections that shift as light slides across irregular surfaces. The researchers can nudge illumination from a single spotlight to mimic the low solar elevations you would see at the poles, and they can do so under four labeled conditions—Reference, Noon, Dawn/Dusk, and Night—so the field can look almost like a day-night cycle on the Moon, even as it remains safely on Earth. To ground-truth perception and motion, the scene is surrounded by motion-capture cameras that track the rover’s position to sub-millimeter precision, producing a crystal-clear reference against which algorithms can be tested. This pairing of synthetic realism with precise ground truth is critical when you’re trying to model an environment that, in real life, would be utterly alien to a camera’s typical expectations.
The sensor suite is a tour of perception tech. Two of Fictionlab’s Leo rovers carry a monochrome Teledyne FLIR camera, a stereo-inertial ZED2 rig, and, for the first time in this setting, a SPAD512² single-photon camera from Pi Imaging Technologies. The SPAD is the star of the show: a detector that responds to individual photons with binary readouts at astonishing speeds, capable of more than 100,000 binary frames per second and exposure times down to the microsecond scale. In practice, a SPAD frame is a 1-bit image—each pixel either detected a photon or not—whose data can be batched and integrated later to form higher-bit-depth images. The contrast, the timing, and the near-total absence of readout noise give SPADs a superpower in perceptually degraded environments where conventional cameras either saturate or fall apart in low light. The team also demonstrates how to coax practical 8-bit visual data out of 1-bit frames through software that digitizes multiple binary frames into higher-bit representations, a workflow that makes the tech accessible to standard computer-vision pipelines.
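To make that workflow concrete, here is a minimal sketch of how a stack of 1-bit frames can be summed and rescaled into an 8-bit image. It is illustrative only, not the dataset's own conversion scripts, and the array shapes and normalization are assumptions.

```python
# A minimal sketch, assuming an (N, H, W) stack of 0/1 frames; illustrative only,
# not the dataset's own conversion scripts.
import numpy as np

def binary_to_8bit(binary_stack: np.ndarray) -> np.ndarray:
    """Sum a stack of 1-bit SPAD frames and rescale to a uint8 image."""
    counts = binary_stack.sum(axis=0).astype(np.float32)     # per-pixel photon counts
    scaled = 255.0 * counts / max(float(counts.max()), 1.0)  # normalize to [0, 255]
    return scaled.astype(np.uint8)
```

Integrating on the order of 255 binary frames recovers roughly 8 bits of intensity per pixel, while each individual exposure stays in the microsecond range.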
The data collection spans seven trajectories labeled A through G, with both static and dynamic phases. Trajectory A is a long, compound path designed to test viewing along a corridor of sight lines, capturing static frames at 70 separate waypoints. Trajectories B through E are short but dynamic passages where the rovers move at two speeds—slow and fast—while Trajectories F and G provide continuous driving in more open terrain, creating looped and back-and-forth motion. Across all trajectories, the dataset includes imagery from the SPAD, FLIR, and ZED2 cameras, along with wheel odometry and inertial measurements, all time-stamped and synchronized with sub-millisecond precision. Altogether, the dataset yields 1,289,958 images with ground-truth pose data, letting researchers test state estimation, mapping, and navigation in conditions that stress perception to the edge of feasibility.
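As one example of how that ground truth can be used, the sketch below rigidly aligns an estimated trajectory to the motion-capture poses and reports the absolute trajectory error. It is an assumed evaluation recipe rather than part of the dataset's tooling, and it assumes the two pose streams are already time-associated as (N, 3) position arrays.

```python
# A minimal evaluation sketch (not the dataset's tooling): rigidly align an
# estimated trajectory to motion-capture ground truth and report ATE RMSE.
# Assumes both inputs are time-associated (N, 3) position arrays in meters.
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    mu_gt, mu_est = gt_xyz.mean(axis=0), est_xyz.mean(axis=0)
    P, Q = est_xyz - mu_est, gt_xyz - mu_gt          # centered point sets
    U, _, Vt = np.linalg.svd(P.T @ Q)                # SVD of 3x3 cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T          # rotation mapping est -> gt
    aligned = P @ R.T + mu_gt
    err = aligned - gt_xyz
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```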
There is a practical side to this heavy lifting. The SPAD camera, for example, can operate in a 1-bit mode that produces raw binary frames, or in a 4-bit mode that captures more nuanced information. The researchers built custom scripts to orchestrate the SPAD’s data capture, because the device doesn’t publish a straightforward, clocked frame stream out of the box. They also generated a full software stack to convert the low-bit data into visually interpretable formats and to export the data as ROS2 bags for downstream experimentation. In addition to the raw data, the paper provides intrinsic and extrinsic calibration parameters for each camera, along with a ground-truth reference frame and a CAD model of the rovers, making the dataset immediately usable for researchers who want to compare a suite of perception and localization algorithms under a common, well-characterized set of conditions.
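For readers who want to poke at those bags programmatically, a minimal reading loop with the standard rosbag2_py API might look like the sketch below. The bag path, storage backend, and topic layout are assumptions, since the paper's exact topic names are not reproduced here.

```python
# Hypothetical reading loop for one of the ROS 2 bags; the bag path, storage
# backend, and topic layout are assumptions, not taken from the paper.
import rosbag2_py
from rclpy.serialization import deserialize_message
from sensor_msgs.msg import Image

reader = rosbag2_py.SequentialReader()
reader.open(
    rosbag2_py.StorageOptions(uri="trajectory_A_bag", storage_id="sqlite3"),
    rosbag2_py.ConverterOptions(
        input_serialization_format="cdr", output_serialization_format="cdr"
    ),
)

type_by_topic = {t.name: t.type for t in reader.get_all_topics_and_types()}
while reader.has_next():
    topic, raw, stamp_ns = reader.read_next()
    if type_by_topic.get(topic) == "sensor_msgs/msg/Image":
        img = deserialize_message(raw, Image)  # width, height, encoding, data
        # hand img off to a perception pipeline here
```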
It’s not just the scale that matters; it’s the fidelity. The team has pushed the analog lunar sim to the limits of what you can reproduce indoors—from the way the light falls off with sun angle to the way rough terrain creates long, moving shadows that hide or reveal landmarks as a rover approaches. The result is a resource that can anchor both perception research and mission planning discussions about how autonomous rovers could navigate near the Moon’s poles when the Sun sits low in the sky for days at a time. The work is a tangible example of how laboratories on Earth are building testbeds that look, feel, and behave more like space than a quiet campus lab ever has before.
Single-photon imaging and the future of perception in space
SPADs are not just fancy toys; they are a rethinking of what it means to “see” in a world where photons are precious and time is long. In the SPICE-HL3 study, the SPAD512² camera records binary frames with ultra-fast exposures, a capability that lets it bravely chase faint details through dawn, dusk, and night—conditions that routinely bedevil conventional cameras. The dataset’s qualitative and quantitative comparisons show a fairly consistent pattern: SPAD frames tend to preserve structure and edge detail better than traditional cameras under low light and high dynamic range, while still offering robust performance when the light is unexpectedly harsh near the horizon. In practical terms, SPADs can deliver usable imagery in scenarios where a standard camera risks full-frame saturation or crushing shadows into near-black nothingness.
In their evaluation, the authors compare SPAD data to the FLIR monochrome camera and the ZED2 stereo rig. In dawn and night drives, SPAD frames reveal a higher dynamic range, lower perceived noise, and more uniform intensities, even when the exposure is shorter than what a conventional camera would require to extract details. This matters because in space, every photon counts: a single photon can be the difference between identifying a rock as a navigable landmark and watching it vanish into a black hole of shadow. The SPAD’s binary nature also changes the computational calculus downstream. The authors demonstrate a striking, if preliminary, result: even with low-bit-depth data, classic feature-descriptor pipelines (SIFT, SURF, ORB) can still detect enough structure to bootstrap localization, albeit with trade-offs in speed and reliability. The bit-depth trick—turning 1-bit frames into 8-bit-like representations—opens a pathway to fast, memory-efficient perception that could be crucial for small landers and rovers with limited onboard compute.
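A minimal version of that kind of experiment, assuming an 8-bit image already reconstructed from SPAD frames (for instance with the earlier integration sketch), can be expressed with OpenCV's ORB detector. This is an illustrative stand-in, not the authors' exact pipeline.

```python
# Illustrative stand-in for the feature experiments, not the authors' pipeline:
# detect and match ORB features on 8-bit images reconstructed from SPAD frames.
import cv2
import numpy as np

def orb_features(img_8bit: np.ndarray, n_features: int = 1000):
    orb = cv2.ORB_create(nfeatures=n_features)
    return orb.detectAndCompute(img_8bit, None)  # (keypoints, descriptors)

def match_descriptors(desc_a: np.ndarray, desc_b: np.ndarray):
    # Hamming distance is the appropriate metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
```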
Beyond the sensor hardware, the paper digs into the implications for navigation software. The researchers ran two popular visual-odometry pipelines, ORB-SLAM3 and RTAB-Map, on the ZED2 data and found that while stereo-depth helps, the extreme lighting often trips up tracking. Wheel odometry remains a surprisingly sturdy baseline in simple turns, but it drifts in more complex scenes. Pure inertial navigation suffers from cumulative drift. In short, perception alone isn’t enough; robust fusion of vision, proprioception, and careful calibration is essential. The authors call this a domain-specific challenge: the lunar poles demand not just better sensors but tailor-made localization pipelines that can cope with long, moving shadows, specular glare, and abrupt transitions from light to dark as the rover watches the horizon sweep by in its field of view.
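To make the fusion argument concrete, here is a deliberately simple dead-reckoning sketch that blends gyro and wheel-derived yaw rates with a complementary filter. It is a toy illustration of the principle, not the localization pipelines the authors evaluated, and the signal names and blend weight are assumptions.

```python
# A toy illustration of proprioceptive fusion, not the pipelines evaluated in the
# paper: dead reckoning that blends gyro and wheel-derived yaw rates with a
# complementary filter. Signal names and the blend weight are assumptions.
import numpy as np

def dead_reckon(speed, gyro_yaw_rate, wheel_yaw_rate, dt, alpha=0.98):
    """speed [m/s] and yaw rates [rad/s] are equal-length 1-D arrays sampled at dt."""
    x = y = yaw = 0.0
    poses = []
    for v, gz, wz in zip(speed, gyro_yaw_rate, wheel_yaw_rate):
        blended = alpha * gz + (1.0 - alpha) * wz  # trust the gyro, correct with wheels
        yaw += blended * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        poses.append((x, y, yaw))
    return np.array(poses)
```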
In the end, SPICE-HL3 isn’t just about a single camera technology. It’s a case study in how a sensing modality shifts the design space for autonomous navigation. SPADs could enable higher duty cycles (more continuous operation) at lower exposure, or be combined with smarter fusion strategies that treat low-bit frames as a new kind of probabilistic input rather than a degraded version of a regular image. If the Moon’s poles truly become a target for science outposts and resource prospecting, SPAD-based perception could be a critical ingredient in keeping assets safe, operational, and scientifically productive when the light is stubborn and the terrain unforgiving.
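One standard way to treat binary frames probabilistically (textbook photon statistics, not a result from the paper) is to model each 1-bit pixel as a Bernoulli trial with success probability 1 - exp(-lambda), which yields a closed-form maximum-likelihood flux estimate from a stack of frames:

```python
# Textbook photon statistics, not a result from the paper: under a Poisson flux
# model, each 1-bit pixel is a Bernoulli trial with p = 1 - exp(-lambda), so the
# maximum-likelihood flux estimate from N frames with k detections is -ln(1 - k/N).
import numpy as np

def ml_flux(binary_stack: np.ndarray) -> np.ndarray:
    """binary_stack: (N, H, W) array of 0/1 frames -> per-pixel flux estimate."""
    n = binary_stack.shape[0]
    p_hat = np.clip(binary_stack.sum(axis=0) / n, 0.0, 1.0 - 1e-6)  # cap saturated pixels
    return -np.log1p(-p_hat)  # equals -ln(1 - p_hat)
```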
From lab to Moon missions and the limits of simulators
This dataset is not a claim that the Moon is now thoroughly conquered in software or hardware—it’s a careful, honest attempt to bridge the gap between Earth-bound lab experiments and the unpredictable realities of lunar exploration. The authors acknowledge a long list of caveats that come with indoor simulators: the lunar regolith used in LunaLab has grain sizes that differ markedly from actual lunar soil, which influences how light scatters and how surfaces look under low sun angles. The lighting, though adjustable, cannot perfectly reproduce the way a collimated solar beam would illuminate a vast, desolate landscape miles away. And while the ground truth is precise on the lab floor, real missions will have to contend with delays, hardware jitter, and the harsh vacuum environment that pushes sensors beyond their terrestrial comfort zones.
There are also real data engineering headaches that the study openly discusses. Synchronization across multiple recording systems, occasional frame delays in ROS2, and moments of stray infrared from motion-capture equipment add noise to a dataset that is otherwise meticulously curated. The team even catalogs how small illumination artefacts—like the glow from LED rings on motion capture cameras—can subtly color the data. The upshot is a candid reminder: building autonomous systems for the Moon requires not just heroic hardware but a mature software stack that can identify, diagnose, and compensate for the kinds of edge-case imperfections that only show up in field-like conditions.
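A simple diagnostic for one of those issues, scanning image timestamps for gaps larger than the nominal frame period, might look like the following sketch (an assumed workflow, not the authors' tooling):

```python
# An assumed diagnostic, not the authors' tooling: flag inter-frame gaps larger
# than a tolerance times the nominal frame period to spot dropped or delayed frames.
import numpy as np

def find_frame_gaps(stamps_ns: np.ndarray, nominal_rate_hz: float, tol: float = 1.5):
    dt = np.diff(stamps_ns.astype(np.float64)) * 1e-9  # seconds between frames
    expected = 1.0 / nominal_rate_hz
    gap_idx = np.where(dt > tol * expected)[0]
    return gap_idx, dt[gap_idx]
```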
So what does this mean for real missions? The SPICE-HL3 dataset is a valuable testbed for perception and localization algorithms in GNSS-denied, perceptually degraded, and unstructured environments. It provides a rare, consistent benchmark against which researchers can measure how close today’s robotics are to being ready for the Moon’s poles. It also points to a broader design principle: the next generation of lunar rovers may need sensor suites and data-processing pipelines tuned to the physics of lunar photometry as much as to the geometry of the terrain. In other words, the Moon’s most vexing problem—seeing when the eye is fighting a perpetual chiaroscuro—might be solved not by a single new sensor, but by a smarter orchestra of sensors that can adapt on the fly, with SPADs playing a lead role in the improvisation.
Beyond the immediate technical implications, the work is also a reminder of the human-scale ambition behind space robotics. It’s a story about a lab bench that looks like a miniature Moon, populated by rovers with photon-counting cameras and high-velocity drives, a reminder that discovery often starts in careful replication and honest engineering tradeoffs. The dataset’s authors — including David Rodríguez-Martínez, Junlin Song, Abishek Bera, C. Pérez-del-Pulgar, and Miguel Ángel Olivares-Mendez — show how a collaboration between the University of Luxembourg and the University of Malaga can translate a planetary science question into something that a scientist, a student, or an engineer anywhere can study and critique. The end goal is bigger than one dataset or one paper: it’s about building the reliability and intuition engineers need to program machines that can roam the Moon with autonomy, curiosity, and a sense of safety for both the machines and the science they carry home.
For readers who want to peek under the hood, the SPICE-HL3 team has provided extensive supplementary material. The dataset is hosted on Zenodo, with visual overviews available on YouTube, and a GitHub repository full of calibration scripts, data-processing utilities, and example pipelines. It’s the kind of resource that invites the community to pull on a thread and see where it leads—whether that thread unravels a new perception algorithm, a more robust SLAM system, or an entirely new way to think about how to teach a rover to see in a world where the light itself is a variable to manage, not a constant to rely on.
In short, SPICE-HL3 is a milestone in making robotic lunar exploration feel a little less like a leap of faith and a little more like a disciplined, shared experiment. It is a reminder that the Moon’s poles demand more than sturdy hardware; they demand vision that can bend with the light, shadow that can be trusted, and a map that can stay coherent as the Sun slides along the horizon. If you buy into the dream of autonomous space science, this dataset is a vivid, data-rich invitation to help build the eyes that will one day carry humans to the edge of night and back again.
Lead researchers and institutions: The work was developed at the LunaLab facility of the SnT – University of Luxembourg, with leadership from the Space Robotics Lab at the University of Malaga. Lead author: David Rodríguez-Martínez, alongside collaborators Junlin Song, Abishek Bera, C. Pérez-del-Pulgar, and Miguel Ángel Olivares-Mendez. The collaboration blends expertise from the University of Luxembourg and the University of Malaga to push forward perception, navigation, and autonomy for planetary robotics.