The cosmos does not come with a user guide. It speaks in faint, telltale distortions seen in the shapes of distant galaxies. Those distortions are tiny whispers of a gravitational field that has sculpted the universe over billions of years. Collectively, they become a powerful map of matter distribution and a key to fundamental questions about dark matter, dark energy, and how cosmic structures grow. A new study digs into a practical question about how to read that map more clearly: what if we combine two very different kinds of sky watching—one based on a sharp space image and the other a deep but blurrier ground image? Can this joint approach sharpen our view of the cosmos, or does blending light from nearby galaxies just muddy the signal further?
The work comes from researchers at Ruhr University Bochum and Leiden University, led by Shiyang Zhang, with Shun-Sheng Li and Henk Hoekstra among the authors. It uses realistic image simulations to probe a specific practical challenge known as blending: when light from neighboring galaxies overlaps in the same pixels, it can bias which galaxies we detect, how bright they appear, and how we measure their shapes. The question is not academic. The next generation of weak lensing surveys promises dramatic gains in precision, but that promise hinges on controlling systematic biases like blending as we fuse data from space-based and ground-based instruments.
To investigate blending in a careful way, the authors set up two parallel worldviews. In one, galaxies sit on a regular grid with fixed separations, so they do not blend at all. This blending-free case serves as the idealized baseline. In the other, galaxies are sprinkled randomly across the image, creating the messy, crowded skies that astronomers actually face. By comparing the two, they quantify how much blending shifts the measurements and how that shift depends on the properties of the two survey designs. The stakes are real: a mischaracterized blending effect can masquerade as a cosmological signal or wipe out genuine clues about how the universe hides its mass. The study is a careful reminder that the best path to reliable cosmology often runs through messy realism as much as through clean theory.
What makes this paper particularly timely is its focus on a practical kind of collaboration between two major kinds of sky surveys. Euclid is a space telescope that can image with superb sharpness, almost free of the blurring caused by Earth’s atmosphere. LSST, by contrast, is a ground-based program that can reach faint galaxies by stacking many exposures, but its images are blurrier because of atmospheric turbulence. Each survey therefore has its own blind spots and strengths. The researchers ask a simple, consequential question: when we combine data at the catalogue level (that is, after each survey processes its own images and we merge the resulting measurements), how much of the potential gain can we actually realize, and how much is lost to blending and other systematics? The answer points to a practical plan for maximizing the science return from the twin eyes of the sky while staying honest about the data challenges they present.
Blending bleeds into the cosmic signal
Blending is not a niche nuisance; it is a fundamental consequence of peering deeper into crowded regions of the sky. In their simulations, the authors compare two setups: grid, where galaxies are laid out so that they never blend, and random, where galaxies are placed as in a real universe and light from neighboring sources overlaps. The results are telling. In LSST-like data, with its larger point spread function (PSF), blending reduces the number of galaxies that can be reliably detected. In Euclid-like data, with sharper resolution, blending still appears, but its fingerprint is smaller. The key implication is that once blending enters the simulations, the performance of a joint analysis changes in meaningful, survey-dependent ways.
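As a rough illustration of the two setups (this is a toy sketch, not the paper's pipeline), one can place sources on a grid and at random and count how many have a neighbour closer than some assumed blending radius. The field size, source count, and 3-arcsecond radius below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy comparison of the "grid" and "random" setups: how often does a source
# have a neighbour inside an assumed blending radius?  Field size, source
# count, and the 3-arcsec radius are illustrative choices, not the paper's.
rng = np.random.default_rng(42)
side_arcsec = 600.0   # toy field size
n_side = 60           # 60 x 60 = 3600 galaxies
blend_radius = 3.0    # arcsec: call two sources "blended" if closer than this

# Grid placement: fixed separations, so no pair is ever closer than the step.
step = side_arcsec / n_side
axis = (np.arange(n_side) + 0.5) * step
grid_xy = np.array([(x, y) for x in axis for y in axis])

# Random placement: uniform positions, as in a crowded real sky.
rand_xy = rng.uniform(0.0, side_arcsec, size=(n_side * n_side, 2))

def blended_fraction(xy, radius):
    """Fraction of sources with at least one neighbour within `radius`."""
    pairs = cKDTree(xy).query_pairs(radius)
    blended = {i for pair in pairs for i in pair}
    return len(blended) / len(xy)

print(f"grid:   {blended_fraction(grid_xy, blend_radius):.3f} of sources blended")
print(f"random: {blended_fraction(rand_xy, blend_radius):.3f} of sources blended")
```

With these toy numbers the grid never blends, while a substantial fraction of randomly placed sources do, which is the qualitative contrast the authors exploit.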
The study also shows that blending's effects are not limited to whether we can see a galaxy. It also shifts how bright we think a galaxy is (MAG_AUTO in the SExtractor pipeline), because light from a neighbor leaks into the measurement. Such biases propagate into the estimated shapes of galaxies, and thus into the estimated shear fields that cosmologists rely on to infer the distribution of matter. In short, blending can masquerade as a cosmological signal if not modeled and corrected for. The authors quantify how the multiplicative bias, which scales the measured shear, grows for fainter galaxies and how it differs between the grid and random configurations. This is not just a technical appendix; it is a map of where the risk lies in real data analysis and how the two survey designs interact with that risk in different ways.
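For readers curious how a multiplicative bias is quantified, the standard approach in image simulations is to regress the measured shear against the true input shear, so that the slope gives 1 + m and the intercept gives the additive bias c. The sketch below is a minimal illustration of that fit; the input bias values and noise level are invented for the example and are not the paper's results.

```python
import numpy as np

# Minimal sketch of how a multiplicative shear bias is typically estimated
# from simulations: regress measured shear against the true input shear,
# g_obs ~ (1 + m) * g_true + c.  The input bias, noise level, and shear
# values below are invented for illustration; they are not the paper's.
rng = np.random.default_rng(0)
g_true = np.linspace(-0.05, 0.05, 11)            # input shears applied in the sims
m_in, c_in = -0.02, 1e-4                         # assumed "true" biases for this toy
g_obs = (1.0 + m_in) * g_true + c_in + rng.normal(0.0, 5e-4, g_true.size)

slope, intercept = np.polyfit(g_true, g_obs, 1)  # slope = 1 + m, intercept = c
print(f"recovered m = {slope - 1.0:+.4f}, c = {intercept:+.1e}")
```

Repeating this fit in bins of magnitude is how one sees the bias grow toward the faint end, which is where blending bites hardest.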
The upshot is sobering but useful: realistic blending can erode the gains you might expect from combining Euclid-like and LSST-like images, especially for the LSST-like data. Yet the sharper Euclid-like PSF also helps minimize some of the blending damage. This duality of depth versus resolution defines the core puzzle of survey synergy. The authors show that to truly understand the benefits of joint analyses, you have to simulate blending with the same care you would use to model the galaxies themselves. In other words, the middle ground where light from multiple galaxies shares a pixel is where the action happens, and where the potential for bias is greatest if you skip the messy realism.
Two eyes of the sky, one measurable truth
The paper then moves from the problem to the potential fix. If we accept blending as a reality rather than a nuisance, how should we combine the data? The authors examine two catalogue-level strategies. The joint catalogue keeps only sources detected by both surveys. The combined catalogue, by contrast, takes every galaxy detected by either survey and uses all of them. Intuitively, the latter should provide more information, as it casts a wider net. The question is how much this broader net buys you in precision for cosmic shear measurements when blending is present.
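As a hedged sketch of what the two strategies look like in practice (the toy catalogues, detection rates, and 1-arcsecond matching radius below are assumptions for illustration, not the paper's choices), a positional cross-match makes the distinction concrete: the joint catalogue is the intersection of the two detection lists, and the combined catalogue is their union.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Illustrative positional cross-match between toy Euclid-like and LSST-like
# detection lists.  The detection rates, positional scatter, and 1-arcsec
# matching radius are assumptions for this sketch, not the paper's choices.
rng = np.random.default_rng(1)

# One underlying toy population; each survey detects an overlapping subset.
n_all = 500
ra_all = rng.uniform(150.0, 150.1, n_all)
dec_all = rng.uniform(2.0, 2.1, n_all)
in_euclid = rng.random(n_all) < 0.6     # toy detection probability per survey
in_lsst = rng.random(n_all) < 0.75
jitter_deg = 0.1 / 3600.0               # ~0.1 arcsec scatter between the runs

euclid = SkyCoord(ra=ra_all[in_euclid] * u.deg, dec=dec_all[in_euclid] * u.deg)
lsst = SkyCoord(ra=(ra_all[in_lsst] + rng.normal(0, jitter_deg, in_lsst.sum())) * u.deg,
                dec=dec_all[in_lsst] * u.deg)

# For every Euclid-like source, find its nearest LSST-like neighbour.
idx, sep2d, _ = euclid.match_to_catalog_sky(lsst)
matched = sep2d < 1.0 * u.arcsec

# Joint catalogue: sources detected in both surveys (the intersection).
n_joint = int(matched.sum())

# Combined catalogue: everything detected in either survey (the union).
lsst_has_match = np.zeros(len(lsst), dtype=bool)
lsst_has_match[idx[matched]] = True
n_combined = len(euclid) + int((~lsst_has_match).sum())

print(f"joint (both surveys): {n_joint},  combined (either survey): {n_combined}")
```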
The numbers tell a clear story. The joint catalogue yields an improvement over Euclid alone, but the gain is modest because it is limited to galaxies visible in both surveys. The combined catalogue, however, really shines. When you count every galaxy detected in either survey, the effective number density neff climbs to 44.08 arcmin^-2 over the magnitude range 20.0 to 27.5. For context, LSST-like data alone gives about 39.17 arcmin^-2, and Euclid-like data alone about 30.31 arcmin^-2. In short, the practical gain comes from letting the two surveys complement each other, rather than forcing a strict intersection of sources before you begin your analysis.
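To give those densities a rough intuition: the shape-noise part of a cosmic shear measurement improves roughly as the square root of the effective number density, so the quoted numbers correspond to a shape-noise reduction of very roughly 10 to 20 percent relative to Euclid-like data alone. The snippet below is only back-of-the-envelope arithmetic using that standard scaling, not the paper's forecast.

```python
import numpy as np

# Back-of-the-envelope reading of the quoted effective number densities:
# the shape-noise contribution to a cosmic shear measurement scales roughly
# as 1/sqrt(n_eff).  The n_eff values are the ones quoted above; turning
# them into real cosmological constraints requires the full analysis, so
# this is only a rough intuition, not a forecast.
n_eff = {"Euclid-like": 30.31, "LSST-like": 39.17, "Combined": 44.08}  # arcmin^-2

for name, n in n_eff.items():
    ratio = np.sqrt(n / n_eff["Euclid-like"])
    print(f"{name:11s}: n_eff = {n:5.2f} arcmin^-2, "
          f"shape noise vs Euclid-like ~ {100.0 / ratio:.0f}%")
```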
That said, the authors are careful about overclaiming. The joint-object approach, which only uses galaxies seen in both surveys, does not reach the full potential of a simple union of detections. The silver bullet is not simply cross-matching. It is the recognition that the combined catalogue across surveys pushes the effective number density higher, particularly at the faint end where LSST-like depth reveals many galaxies Euclid would miss. But the real dream, the paper argues, lies in pixel-level synergy: a joint fit to the shapes of individual galaxies using both data sets at once. Pixel-level methods promise to fully exploit the strengths of both data sets, but they require moving beyond the catalogue level to more sophisticated joint modeling that keeps blending in view rather than papering over it. That path is more challenging, but it is also where the biggest gains are likely to live in the long run.
To ground the discussion in practical terms, the authors report a concrete milestone: the combined catalogue yields an effective number density that is significantly higher than either survey alone, translating in principle into tighter constraints on cosmology. The numbers also reveal a caveat: the gains depend on how many galaxies are detected in either survey and how well we can calibrate the biases that affect those measurements. This is not a call to abandon catalogue-level synergy; it is a call to pair it with pixel-level ambitions and to invest in the realistic simulations that make such ambitious analyses credible.
The paper also tests a more operational dilemma that observers often face: would selecting only the best-seeing exposures from LSST-like data help alleviate blending? The answer is nuanced. Better seeing does reduce the blending imprint, but using fewer exposures shrinks the stack, which raises the background noise, reduces the depth, and lowers the effective number density. In the end, the gains from better-seeing exposures do not compensate for the loss of depth. The sharper PSF of a space-based instrument remains a robust remedy for blending, even though it comes with its own practical constraints, such as shallower depth. The big practical upshot is that blending has to be handled with an honest accounting of the instrument differences rather than a simplistic optimization by seeing alone.
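A back-of-the-envelope way to see the trade-off (with hypothetical exposure counts, not the paper's numbers): if the background noise of a stack falls as the square root of the number of exposures, then discarding exposures to keep only the best seeing costs about 2.5 * log10(sqrt(N_all / N_kept)) magnitudes of depth.

```python
import numpy as np

# Toy arithmetic for the seeing-versus-depth trade-off: if the background
# noise of a stack falls as sqrt(number of exposures), then keeping only
# the best-seeing exposures costs about 2.5 * log10(sqrt(N_all / N_kept))
# magnitudes of depth.  The exposure counts are hypothetical, chosen only
# to illustrate the scaling; they are not the paper's numbers.
n_all = 100  # hypothetical number of exposures available for the full stack

for frac_kept in (1.0, 0.5, 0.25):
    n_kept = int(n_all * frac_kept)
    depth_loss = 2.5 * np.log10(np.sqrt(n_all / n_kept))
    print(f"keep {frac_kept:4.0%} of exposures -> stack ~{depth_loss:.2f} mag shallower")
```

Keeping half the exposures costs roughly 0.4 magnitudes of depth in this scaling, which is the kind of loss that erases the blending benefit of sharper seeing.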
From numbers to future techniques
The authors do not stop at catalogue-level conclusions. They outline the broader implications for future analyses. The most immediate takeaway is that realistic simulations of blending must be incorporated when evaluating joint survey performance. That means that the next wave of weak lensing studies, which will reach unprecedented statistical power, needs to adopt comprehensive image simulations and bias calibrations that reflect the messy reality of crowded skies. The second major point is that even at the catalogue level, combining all galaxies detected by either survey provides a meaningful boost in statistical power. This is a practical guideline for planning analyses today, even as pixel-level methods are being developed and tested for future work.
Beyond the numbers lies a methodological takeaway about the scientific process itself. When two very different data sources are merged, their combined strength is not guaranteed. It requires careful handling of biases, high-fidelity simulations, and a clear sense of what counts as a gain in precision. The Euclid-like and LSST-like data exemplify this truth: sharpness and depth are both valuable, but blending is the common adversary that makes the game harder. The study thus offers a pragmatic blueprint for how to navigate the coming era of joint surveys, while also pointing toward more ambitious, pixel-level joint modeling that could unlock even more of the synergy in the years ahead.
Conclusion
What this means for cosmology in the next decade is both practical and hopeful. Blending is a real, measurable obstacle, especially for deep, wide-field, ground-based surveys. If we want to extract reliable cosmological information from weak lensing, we must model blending explicitly in simulations and analysis pipelines. The mix of Euclid-like sharp imaging and LSST-like depth provides a powerful pathway to greater statistical power, but only if we account for how blending biases and detection effects interact with both data streams.
The study concludes with an actionable message: the most effective catalogue-level synergy is achieved by combining all galaxies detected in either survey, which yields a notably higher effective number density than either survey alone. The joint catalogue, relying only on overlapping detections, offers a more modest improvement, underscoring that the path to maximal gain lies in embracing the complete joint data set. Yet the authors remain optimistic about pixel-level analyses, where a joint fit to individual galaxy shapes across both surveys could unlock a deeper, more efficient use of the complementary strengths. In short, the future of weak lensing lies in the careful, explicit treatment of blending, and in the bold bet that combining two very different eyes on the sky will give us a truer view of the cosmos. This work, rooted in the collaboration between Ruhr University Bochum and Leiden University, led by Shiyang Zhang, maps a thoughtful and practical course for that future.