Online disinformation is not just a matter of wrong facts; it’s an emotion machine. Anger, outrage, and fear propel shares faster than any fact check can keep up. In Japan, researchers at the Institute of Information Security in Yokohama have tested a new approach that targets the moment of sharing, not the post after it has gone viral. Led by Haruka Nakajima Suzuki and Midori Inaba, the study asks whether digital nudges that address emotion regulation can cool the heat behind disinformation fast enough to slow its spread.
Rather than feeding users more warnings about accuracy or simply pausing the action, the team designed prompts that surface the emotions tied to a post and then offer ways to regulate those feelings. The aim is to spark deliberation at the exact moment a post would be shared, nudging people toward a calmer, more reflective choice without stripping away autonomy.
What the study set out to do
The core question is deceptively simple: when anger is a primary driver of sharing, can we nudge people to think before they click? The researchers hypothesized that making users aware of the emotional content of a post and coupling that with techniques drawn from emotion regulation could reduce the likelihood of sharing driven by strong anger.
To test this idea, they built digital nudges that mimic typical platform prompts but add two ingredients. First, they display emotional information about the post, showing what emotions are triggered and their estimated share of the emotional mix. Second, they present messages designed to regulate those emotions, nudging users toward deliberate reflection rather than quick impulse. They compared these with conventional friction-based nudges that simply slow down the sharing flow or encourage users to add a comment before reposting.
Deliberation at the moment of sharing is the hinge on which these nudges turn.
Designing digital nudges that regulate emotion
The team created nine nudges, each styled to resemble familiar sharing controls on a social platform. The nudges either show emotional information in a visual form or couple that information with emotion regulation tips. The emotional information was presented as a pie chart that identified emotion types such as anger, surprise, sadness, and disgust, highlighting the top emotions in vivid colors while greying out the rest. The idea was to draw attention to the emotional content without burying users in jargon.
The emotion regulation messages drew from established strategies such as distraction, reappraisal, perspective-taking, and empathic response. Examples include prompts that invite readers to imagine followers who always smile when they see the post, or to adopt an objective, analytical view of the post’s intent. The nudges were labeled A1, D1, D2, D3, R1, R2, P1, P2, E1, with D3 and P2 receiving particular emphasis in the main study as the most promising combinations of emotion awareness and regulation.
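To make the design concrete, here is a minimal sketch of how such a nudge could be assembled: an estimated emotion mix feeds the pie chart, the top emotions are flagged for highlighting, and a regulation message keyed to a strategy such as distraction or perspective-taking is attached. The field names, message texts, and thresholds below are illustrative assumptions, not the study's actual implementation.

```python
# Hypothetical output of an emotion classifier; shares sum to 1.0.
EMOTION_ESTIMATES = {
    "anger": 0.55, "disgust": 0.20, "sadness": 0.15, "surprise": 0.10,
}

# Hypothetical regulation messages keyed by strategy, loosely paraphrasing the examples above.
REGULATION_MESSAGES = {
    "distraction": "Before reposting, picture followers who always smile when they see your posts.",
    "perspective_taking": "Take an objective, analytical view of what this post is trying to do.",
}

def build_nudge(emotions, strategy, top_n=2):
    """Pair a post's estimated emotion mix with an emotion regulation message."""
    ranked = sorted(emotions.items(), key=lambda kv: kv[1], reverse=True)
    highlighted = [name for name, _ in ranked[:top_n]]   # shown in vivid colors
    greyed_out = [name for name, _ in ranked[top_n:]]    # shown greyed out
    return {
        "chart": {"shares": emotions, "highlight": highlighted, "grey": greyed_out},
        "message": REGULATION_MESSAGES[strategy],
        "actions": ["repost", "comment", "cancel"],       # the choice stays with the user
    }

nudge = build_nudge(EMOTION_ESTIMATES, "distraction")
print(nudge["chart"]["highlight"])   # ['anger', 'disgust']
print(nudge["message"])
```

The point of the sketch is the pairing itself: the chart surfaces what the post is stirring up, and the message offers a way to regulate it, while reposting remains one tap away.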
The distraction plus compassion approach stood out in reducing disinformation sharing.
What happened in the experiments
The researchers ran a two-stage program. A pilot study with 47 participants tested the nine nudges with stimuli crafted to provoke anger about gender and generational conflicts. The stimuli mirrored disinformation cases drawn from earlier work, with true-information posts included as controls. Participants read a post, then encountered a randomly assigned nudge, answered how they would respond, and noted the emotion they recognized and its intensity.
The pilot suggested that nudges pairing emotional information with emotion regulation messages could reduce the urge to share, especially when the prompt asked readers to imagine followers who would react with smiles. The team then scaled up to a main study with 400 adults in Japan who regularly use X. They presented ten stimuli clustered around two topics and randomly assigned participants to different nudges, then measured sharing intentions, emotion type, emotion intensity, and authenticity beliefs before and after nudging.
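For readers who think in code, the pre/post flow of the main study might look roughly like the following. The arm labels, survey fields, and per-stimulus randomization are assumptions for illustration, not the study's actual protocol.

```python
import random
from dataclasses import dataclass, field

# Assumed nudge arms; the study emphasized D3 and P2 alongside existing friction nudges.
NUDGE_ARMS = ["D3_distraction", "P2_perspective_taking", "existing_friction"]

@dataclass
class TrialRecord:
    stimulus_id: int
    nudge: str
    pre: dict = field(default_factory=dict)    # answers recorded before the nudge
    post: dict = field(default_factory=dict)   # answers recorded after the nudge

def run_participant(stimuli, ask):
    """ask(stage, stimulus, nudge=None) returns the participant's answers for that stage."""
    records = []
    for stim in stimuli:
        nudge = random.choice(NUDGE_ARMS)                 # assumed random assignment per stimulus
        rec = TrialRecord(stimulus_id=stim, nudge=nudge)
        rec.pre = ask("before_nudge", stim)                # sharing intention, emotion, intensity, authenticity belief
        rec.post = ask("after_nudge", stim, nudge)         # the same measures, after the nudge appears
        records.append(rec)
    return records

# Dummy respondent for a dry run.
def dummy_ask(stage, stim, nudge=None):
    return {"intention": "share", "emotion": "anger", "intensity": 7, "authentic": True}

print(run_participant(range(10), dummy_ask)[0])
```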
The findings were clear. All nudges reduced sharing compared with the pre-nudge baseline, and the distraction nudge in particular cut sharing more than the existing friction-based approaches. Specifically, among participants who initially indicated they would share disinformation, about two-thirds continued to share after a distraction nudge, compared with roughly 70% after perspective-taking nudges and roughly 78% after existing nudges. In other words, the distraction prompt moved more would-be sharers toward canceling or commenting instead of reposting.
They also found that emotional intensity dropped on average from around 7 to about 6 on a 0-to-10 scale after nudging, a sign that the prompts pushed people toward calmer processing. Among participants whose emotions weakened after the nudge, cancellations rose, particularly with distraction nudges. The impact varied with the content of the post: distraction nudges reduced sharing across all disinformation stimuli, perspective-taking nudges worked only for some stimulus types, and the existing friction nudges worked for others.
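Taking the reported figures at face value, a quick tabulation shows how the conditions compare on the share of would-be sharers who backed off. The percentages are the article's round numbers, not raw study data, and the exact denominators come from the study itself.

```python
# Reported rates of continued sharing among participants who initially said they would share.
continued_share_rate = {
    "distraction": 2 / 3,          # about two-thirds kept sharing
    "perspective_taking": 0.70,
    "existing_friction": 0.78,
}

for nudge, rate in continued_share_rate.items():
    print(f"{nudge}: {rate:.0%} kept sharing, {1 - rate:.0%} canceled or commented instead")

pre_intensity, post_intensity = 7, 6   # reported average intensities on the 0-10 scale
print(f"average intensity change: {post_intensity - pre_intensity} point")
```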
For the overall effects on emotion, distraction nudges showed the strongest pattern: they not only lowered the urge to share but also nudged some people toward more positive reactions, such as anticipation or trust, rather than anger or fear. A small slice of participants even reported a shift from negative to positive emotions after nudging, a rare but intriguing sign that the design can redirect emotional trajectories in real time.
Why it matters for truth on the web
These results point to a new class of interventions that work with, not against, human emotion. Traditional countermeasures—fact checks, warnings about accuracy, or delays—often treat anger as a nuisance to be neutralized after the fact. The approach here aims to nudge the emotional engine itself, harnessing emotion regulation to cultivate deliberation at the moment of sharing.
Crucially, the design preserves user autonomy. The prompts do not compel a particular belief; they gently invite a pause, a moment to weigh whether the post is worth sharing, and why. If this approach scales to real platforms, it could complement education and fact-checking rather than supplant them, offering a practical tool to reduce the social spillover of disinformation that blends truth with manipulation.
The work comes from the Institute of Information Security in Yokohama, Japan, led by Haruka Nakajima Suzuki and Midori Inaba. The researchers acknowledge that real-world deployment would need to account for cultural differences, platform structure, and the messy, dynamic nature of online discourse. Still, the core insight stands: surface the emotion, coach the emotion, and give people a tiny, humane pause at the moment they decide what to share.