When Colored Lights Trick Your Eyes Into Seeing Shadows Differently

Why Lighting Tricks Our Cameras and Computers

We live in a world painted by light. The colors we see, the shadows that dance across surfaces, and the subtle textures that define objects all depend on how light interacts with the environment. But this seemingly simple relationship between light and what we see is anything but straightforward. When multiple colored lights illuminate a scene—think of a nightclub bathed in red, blue, and green spotlights—the resulting image is a complex puzzle of hues, shadows, and reflections. Human brains untangle this mess effortlessly, but for cameras and computer algorithms, it is a confounding challenge.

Researchers at the University of Würzburg’s Computer Vision Lab, led by Florin-Alexandru Vasluianu and colleagues, have taken a bold step toward solving this puzzle. Their work dives deep into how to restore images taken under multiple colored light sources back to a neutral, evenly lit state—what they call Ambient Lighting Normalization. This isn’t just about making photos look prettier; it’s about enabling machines to understand and interpret images accurately, regardless of the lighting chaos.

The Problem With Colored Lights and Shadows

Imagine you’re photographing a scene illuminated by three different colored lights—red from the left, blue from the right, and green from above. The colors mix and mingle on surfaces, creating shadows that aren’t just darker but tinted with strange hues. Traditional image processing methods often assume a single, white light source, which simplifies calculations but falls short in real-world scenarios. This simplification leads to artifacts like inconsistent lighting patches, color distortions, and even texture leakage where the surface details get muddled.

Why does this matter? Because many applications—from facial recognition and autonomous vehicles to augmented reality and AI-generated imagery—rely on consistent, reliable image data. If the lighting tricks the algorithm, the whole system can falter.

Introducing CL3AN: A Dataset That Sees the Light Differently

One of the biggest hurdles in tackling this problem has been the lack of comprehensive data. Existing datasets mostly feature scenes lit by white or natural light, missing the rich complexity of colored, multi-source lighting. To fill this gap, the team at Würzburg created CL3AN, a groundbreaking dataset capturing 105 cluttered scenes under multiple colored directional lights, paired with perfectly ambient-lit reference images.

Think of CL3AN as a photographic laboratory where every scene is shot three ways: under white light, under colored lights, and under uniform ambient lighting. This triplet setup allows researchers to systematically study how colored lighting warps images and how to reverse those effects. The scenes include a variety of materials—shiny metals, translucent plastics, rough fabrics—each interacting with light in its own unique way.
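To make the triplet idea concrete, here is a tiny loading sketch. The actual CL3AN folder layout and file names are not described here, so the paths below are purely hypothetical placeholders for illustration.

```python
# Hypothetical loader for a CL3AN-style scene triplet.
# The real dataset's directory structure and file names may differ;
# "colored.png", "white.png", and "ambient.png" are assumptions.
from pathlib import Path
from PIL import Image

def load_triplet(scene_dir: str):
    """Return the (colored, white, ambient) images for one scene folder."""
    root = Path(scene_dir)
    colored = Image.open(root / "colored.png")   # multiple colored directional lights
    white = Image.open(root / "white.png")       # white directional lights
    ambient = Image.open(root / "ambient.png")   # uniform ambient reference
    return colored, white, ambient
```

With triplets like this, a model can be trained to map the colored-light image to the ambient reference, while the white-light shot helps separate color cast from geometry.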

RLN2: Learning to See Through the Colorful Fog

With CL3AN in hand, the researchers developed RLN2, a novel algorithm inspired by the Retinex theory—a classic model from the 1960s that explains how human vision separates illumination from reflectance. RLN2 doesn’t just treat the image as a flat canvas; it breaks down the problem into two intertwined streams:

Reflectance: the inherent color and texture of the objects.
Illumination: the color and intensity of the light hitting those objects.

By working in the HSV color space (Hue, Saturation, Value), RLN2 cleverly uses the Value channel as a guide for illumination and the Hue-Saturation channels to refine reflectance. This dual-stream approach allows the model to disentangle the messy interplay of colored lights and shadows more precisely than previous methods.
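As a rough illustration of the idea only (RLN2 itself is a learned neural model, not this recipe), a classical Retinex-style split in HSV space might look like the sketch below, which treats a heavily smoothed Value channel as the illumination estimate and keeps Hue and Saturation for reflectance.

```python
# Minimal, illustrative Retinex-style decomposition in HSV space.
# Not the authors' implementation: it only mirrors the intuition that
# Value ~ illumination and Hue/Saturation ~ reflectance.
import cv2
import numpy as np

def hsv_retinex_split(bgr_image: np.ndarray):
    """Return a rough (reflectance, illumination) pair from a BGR image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)

    # Assume slow spatial variations in brightness come from the lights,
    # not the objects, so a blurred Value channel approximates illumination.
    illumination = cv2.GaussianBlur(v, (51, 51), 0) + 1e-6

    # Divide out the illumination to approximate reflectance brightness,
    # then recombine with the original hue and saturation.
    reflectance_v = np.clip(v / illumination, 0, 4)
    reflectance_v = cv2.normalize(reflectance_v, None, 0, 255, cv2.NORM_MINMAX)
    reflectance_hsv = cv2.merge([h, s, reflectance_v]).astype(np.uint8)
    reflectance = cv2.cvtColor(reflectance_hsv, cv2.COLOR_HSV2BGR)
    return reflectance, illumination

# Usage (assuming a file "scene.png" exists):
# reflectance, illumination = hsv_retinex_split(cv2.imread("scene.png"))
```

The learned model goes much further than this hand-crafted split, but the division of labor between the two streams is the same.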

Moreover, RLN2 employs advanced attention mechanisms that focus on the most relevant features, filtering out noise and irrelevant details. This results in images that are not only corrected for color but also preserve fine textures and details, avoiding the common pitfalls of over-smoothing or color bleeding.
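RLN2's own attention modules are specific to the paper; purely as a generic illustration of how attention re-weights features, a standard squeeze-and-excitation-style channel attention block in PyTorch looks like this.

```python
# Generic channel-attention block (squeeze-and-excitation style), shown only
# to illustrate how attention can emphasize informative feature channels.
# RLN2's actual attention modules differ and are described in the paper.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale each channel by its learned relevance weight.
        return x * self.gate(x)

# Example: re-weight a batch of 64-channel feature maps.
# features = torch.randn(2, 64, 32, 32)
# weighted = ChannelAttention(64)(features)
```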

Why This Breakthrough Matters

The implications of this work ripple across many fields. For photographers and filmmakers, it means better tools to correct complex lighting without expensive setups or tedious manual editing. For AI systems, it means more reliable image inputs, leading to improved object recognition, scene understanding, and even more realistic image synthesis.

In the realm of neural image editing, RLN2 can serve as a powerful preprocessing step, normalizing lighting so that subsequent edits or relighting operations behave more naturally. The team demonstrated this by applying RLN2 outputs to AI-generated images, showing clearer layers and more consistent shadows.

Surprising Insights and Future Directions

One of the most striking findings is how much better RLN2 performs than state-of-the-art models, even those with higher computational costs. By embedding physical principles of light and color into the learning process, the model achieves superior results with less computational overhead—a rare win in the world of deep learning.

Another insight is the importance of frequency-domain information (think of it as analyzing the image’s texture and patterns at different scales) combined with spatial features. RLN2’s architecture smartly fuses these perspectives, leading to sharper, more natural restorations.
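The exact fusion used in RLN2 is described in the paper; the sketch below only illustrates the general idea of combining a spatial convolution branch with a frequency-domain branch via the Fourier transform. The layer names and sizes are assumptions for illustration.

```python
# Illustrative spatial + frequency-domain feature fusion (not RLN2's module).
import torch
import torch.nn as nn

class SpatialFrequencyFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        # The FFT of a real feature map is complex; treat real and imaginary
        # parts as extra channels so ordinary convolutions can process them.
        self.spectral = nn.Conv2d(2 * channels, 2 * channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spatial = self.spatial(x)

        freq = torch.fft.rfft2(x, norm="ortho")
        freq = torch.cat([freq.real, freq.imag], dim=1)
        freq = self.spectral(freq)
        real, imag = freq.chunk(2, dim=1)
        spectral = torch.fft.irfft2(torch.complex(real, imag),
                                    s=x.shape[-2:], norm="ortho")

        # Concatenate the two views and project back to the original width.
        return self.fuse(torch.cat([spatial, spectral], dim=1))

# Example: x = torch.randn(1, 32, 64, 64); y = SpatialFrequencyFusion(32)(x)
```

The frequency branch sees global texture patterns at once, while the spatial branch keeps local detail, which is why the combination tends to produce sharper restorations.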

Looking ahead, the CL3AN dataset opens doors for further research into complex lighting scenarios, including outdoor scenes with mixed natural and artificial lights, or dynamic lighting changes in video. RLN2’s framework could also inspire new algorithms that better mimic human visual perception, bridging the gap between how we see and how machines interpret images.

In a World Painted by Light, Understanding Color Is Key

Light is the brushstroke of reality, and color is its language. The work from the University of Würzburg’s Computer Vision Lab reminds us that to truly understand images—whether captured by cameras or generated by AI—we must first decode the complex dialogue between light and matter. By building better datasets and smarter algorithms like CL3AN and RLN2, we move closer to machines that see the world as richly and accurately as we do, no matter how wild the lighting gets.

For those curious to explore or build upon this work, the team has made their dataset, code, and models publicly available at github.com/fvasluianu97/RLN2.