Images carry more weight than captions. They freeze moments, guide opinions, and sometimes decide who wins a vote. In war, pictures do more than illustrate events; they shape memory, justify actions, and steer responses. The old adage that a picture is worth a thousand words holds especially true in war, but those words often exert a pressure that endures long after the moment has passed. The question this new study raises is not whether images can be powerful, but who controls the image, and who interprets it when the image itself can be faked, reimagined, or mass-produced by machines.
A new study from the Cyber-Defence Campus, led by Raphael Meier, maps the threats and opportunities of AI-generated images in armed conflict. The paper asks two central questions: what are AI-generated visuals good for in military contexts, and what new risks do they introduce to an already fragile information environment? It isn’t a science-fiction scenario; it’s about the everyday stream of visuals that travels across feeds, newsrooms, and command posts—images that can inform decisions or influence beliefs in real time. Two essential roles sit at the heart of this story: images as signals that report what’s there, and images as instruments that shape how people think about what’s there.
The authors describe AI-generated imagery as a new kind of force multiplier that can affect both senses and cognition. Images can act as sensors, reporting what’s present on the ground, and as catalysts that shape what people believe about those findings. With modern diffusion models and text prompts, this dual power is amplified: you can produce plausible visuals at scale, tailor them to a message, and do it without the logistics of a real photo shoot. That combination changes not just how wars look, but how they are fought—by altering what people see, how they interpret it, and how quickly judgments are made in war rooms, on social feeds, and at the ballot box.
Images through the ages and their power in war
Images have always shaped how we understand conflict. Before cameras, war was recounted through drawings, paintings, and carvings—artifacts that carried a claim about what happened and who benefited. When photography arrived in the 19th century, pictures transformed into a new kind of witness: immediate, portable, and reproducible. In military terms, photographs soon served two roles at once: sensors that record what is there and propaganda that shapes what people think about it. The dual nature of image power took shape here: evidence you could hold and stories you could spread.
In the early 20th century, photographs rode with reconnaissance balloons and airplanes, turning the sky into a map of dispositions. Print technology then multiplied those images, spreading them to millions of readers. The two roles persisted: a visual record that could inform decisions and a visual instrument that could influence opinion. The paper traces that arc from Crimean War photographs to World War I aerial mapping and on to early cinematic propaganda, where visuals were deliberately staged, edited, or amplified to guide audiences. The story is not simply about what is true or false in a given image; it’s about how images become evidence, how they travel, and how people decide what to believe when all the pieces look convincing.
Then came the digital era, where mass media and especially social media accelerated everything. Images leaped across platforms, sometimes shedding any trace of their original source. The report calls this information laundering—a chain of transmission that blurs attribution and gains credibility through repetition. In this new information environment, synthetic visuals lie in wait: convincing, cheap to produce, and within reach of anyone with a phone. This environment doesn’t just threaten authenticity; it invites new strategies for deception that can outpace traditional gatekeeping. Across platforms, the same image can be repeated, remixed, and embedded in ways that make it feel real even when it isn’t—and that’s precisely what makes AI-generated visuals so potentially destabilizing in a democratic information ecosystem.
Synthetic images and the new battlefield logic
The core idea the paper pushes is that AI-generated images offer two powerful advantages over traditional photography: conditioning and scalability. Through text prompts and adjustable knobs in diffusion models, a user can steer not only what an image shows but how it feels—real, alarming, hopeful, or eerie. Each prompt defines a point in a vast space of possible images, and tiny tweaks can yield radically different visuals. This is the mechanism of conditioning: AI images respond to human intent in a way that physical photography does not, making it easier to tailor visuals to specific audiences or narratives. The space of potential images is so large that a single idea can spawn dozens of variations within minutes, all sharing a common message but arriving with different emphases or emotional charges.
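To make the conditioning mechanism concrete, here is a minimal sketch of text-conditioned generation, assuming the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint; the model name, prompt, and settings are illustrative choices, not anything the study prescribes.

```python
# Minimal sketch of text-conditioned image generation with a diffusion model.
# Assumes the Hugging Face diffusers library; checkpoint and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The text prompt is the conditioning signal: small wording changes steer
# both the content and the mood of the generated image.
prompt = "aerial photograph of a harbor at dusk, overcast, documentary style"
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=30).images[0]
image.save("harbor_dusk.png")
```

Swapping a few words in the prompt, or nudging the guidance_scale setting, is enough to move the output to a different point in that vast space of possible images, which is exactly the intent-driven steering the paper describes.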
That same mechanism enables scale. If you can generate an image in seconds, you can produce dozens, hundreds, or even millions of variants in a matter of hours. The result is a new tactical grammar for information warfare: floods of visuals that differ only slightly can saturate channels, overwhelm analysts, and create an atmosphere of uncertainty. The paper frames these ideas as tactical tenets of mass, tempo, surprise, and shock, which together let a prepared actor alter the pace of events by shaping what people see and how they think about it. The promise is seductive: you can flood the information space with targeted visuals at a pace that outstrips traditional media cycles, potentially changing how a battle feels before any exchange of bullets occurs.
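The scale argument follows almost mechanically from the same sketch: once a pipeline like the one above is loaded, producing variants is just a loop over seeds and phrasings. The snippet below continues that hypothetical setup; the base prompt, moods, and counts are invented for illustration.

```python
# Sketch of scalability: variants of the same scene differ only by a random
# seed or an extra phrase, so large batches can be scripted in one loop.
# Continues the hypothetical pipe object from the previous sketch.
import torch

base_prompt = "crowd gathered in a city square, news photo style"
moods = ["calm", "tense", "jubilant", "chaotic"]

for mood in moods:
    for seed in range(3):  # 4 moods x 3 seeds = 12 variants of one idea
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(
            f"{base_prompt}, {mood} atmosphere",
            generator=generator,
            num_inference_steps=30,
        ).images[0]
        image.save(f"square_{mood}_{seed}.png")
```

Scaling the two loop bounds is all it takes to go from a dozen images to thousands, which is why the paper treats mass and tempo as properties of the tooling rather than of any single picture.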
Yet there are limits. AI-generated content isn’t magic; it’s a product of prompts, models, and datasets that can produce visual artefacts—distortions, uncanny geometry, or odd anatomy. The authors stress that a human in the loop remains essential for judging fitness for purpose and for catching subtle faults. They also point to a growing toolkit of techniques for tighter control over image content, such as methods that map specific prompts to bounding boxes in the image, reducing the “concept fusion” that can blur details. The bottom line is that synthetic images are powerful but demand careful use and skilled oversight to avoid glaring mistakes that give them away or undermine credibility. If you push a model hard without checks, you may end up with visuals that feel off, and even casual viewers can spot the inconsistencies that signal fakery.
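The bounding-box control mentioned above corresponds to layout-conditioned generation in the research literature, for example GLIGEN-style pipelines. The sketch below assumes the diffusers StableDiffusionGLIGENPipeline and an illustrative checkpoint name; the phrases and box coordinates are likewise assumptions, not details from the study.

```python
# Sketch of layout-conditioned generation (GLIGEN-style): text phrases are
# tied to normalized bounding boxes, which reduces the "concept fusion"
# mentioned in the text. Checkpoint, phrases, and coordinates are assumed.
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box",  # assumed public GLIGEN checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a cargo ship and a small patrol boat in an overcast harbor",
    gligen_phrases=["a cargo ship", "a small patrol boat"],
    gligen_boxes=[
        [0.05, 0.40, 0.55, 0.80],  # [xmin, ymin, xmax, ymax], normalized to 0-1
        [0.60, 0.55, 0.95, 0.85],
    ],
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=50,
).images[0]
image.save("harbor_layout.png")
```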
Opportunities, threats, and what to do about it
The paper is a playbook for readiness more than a panic button. On the upside, AI-generated imagery can enrich training, simulation, and wargaming by supplying fresh visuals of assets, theaters, and scenarios. It can also supply synthetic data for training machine-learning models on tasks such as detecting ships, aircraft, or drones in challenging environments. But the authors caution that synthetic data is not universally better: several studies show it can underperform real data if not used carefully, and transfer from synthetic to real domains often requires expert tuning. The message is pragmatic: synthetic data is a tool that can help, but it’s one tool among many, and its benefits hinge on context, quality control, and human expertise.
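As a sketch of how synthetic data typically enters such a workflow, the snippet below mixes generated and real imagery for training while keeping evaluation on real data, so any synthetic-to-real gap shows up in the metrics. The directory layout, mixing choice, and task are assumptions made for the example, not a recipe from the paper.

```python
# Sketch of mixing synthetic and real imagery for detector/classifier training.
# Validation stays on real data so a synthetic-to-real domain gap is measurable.
# Paths and dataset layout are assumptions for illustration.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

real_train = datasets.ImageFolder("data/real/train", transform=tf)
synth_train = datasets.ImageFolder("data/synthetic/train", transform=tf)  # diffusion-generated ships, aircraft, drones
real_val = datasets.ImageFolder("data/real/val", transform=tf)            # evaluation stays on real imagery

train_loader = DataLoader(ConcatDataset([real_train, synth_train]), batch_size=32, shuffle=True)
val_loader = DataLoader(real_val, batch_size=32)
```

The design choice mirrors the paper's caution: synthetic images can widen coverage of rare assets or conditions, but only real-world validation tells you whether the gains transfer.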
Strategic communications is another frontier. AI visuals offer flexibility and cost-savings for messaging, recruitment, and media outreach. Yet the same tools heighten the risk of appearing inauthentic or manipulative, which can erode trust. The science on audience perception matters here: while visuals can amplify a message, audiences punish deception, and transparency about AI use can influence credibility. The takeaway is nuanced: AI-generated content can be useful, but it should be deployed with discipline, not as a shortcut to gaslighting the public. In short, the line between legitimate digital storytelling and manipulative fabrication is thinner than many assume, and audiences are increasingly sensitive to the tells of synthetic media.
Finally, the report calls for concrete defenses. It urges continuous monitoring of generative AI advances, red-team testing of detection and countermeasures, and education to help people distinguish AI-generated from real imagery. It also warns about open-source vulnerabilities, rogue extensions, and supply-chain risks that could slip into military workflows. In essence, AI-generated images will be a defining tool of modern information warfare—best used with guardrails, clear rules of engagement, and a commitment to truth-seeking in a noisy, image-rich world. The authors argue for a balanced approach: embrace the productive uses in training and analysis, but build robust detection, transparent communication, and disciplined use in public messaging to maintain legitimacy and trust.