Myth-busting
GEN AI does not fuel misinformation
Often blamed for fueling misinformation, but the reality is more nuanced...
Rodrigo Wielhouwer
April 21, 2025

Misinformation: Hype, fear, and reality

Generative AI – from chatbots that sound authoritative to image generators that look photorealistic – has stoked fears that it's turbocharging misinformation online. Headlines warn of deepfaked politicians and AI-generated fake news flooding our feeds. In fact, the World Economic Forum's latest Global Risks Report ranks ‘Misinformation and disinformation’ as the number one immediate risk to society and the economy over the next two years.

Today, we'll combine data science, psychology and input from thought leaders to bust the myth that generative AI is single-handedly fueling misinformation. The truth is more complex - and more hopeful - than the doomsayers claim. We'll also explore how misinformation actually spreads, why people believe (and cling to) it, and how we can fight back with smart strategies rather than panic.

The psychology behind misinformation: Why we fall for falsehoods

Misinformation is hardly new – from old rumours and urban legends to modern “fake news,” people have always had a taste for the sensational. Psychology gives us clues as to why. False stories often tap into our emotions and biases: research shows that novel, dramatic lies spread faster and further than boring truths. An MIT study of Twitter, for example, found that false news travels “farther, faster, deeper, and more broadly” than true news. In fact, falsehoods were 70% more likely to be retweeted than the truth and reached audiences up to 6x faster. Why? As one co-author put it, “It’s easier to be novel and surprising when you’re not bound by reality.” False information often plays on salacious or controversial elements that grab attention in ways truth typically cannot. In short, our brains are drawn to the juicy rather than the accurate.

The problem is compounded by how our minds deal with corrections. Even when falsehoods are later debunked, they tend to leave a lingering mental residue. Psychologists call this the “continued influence effect,” where misinformation continues to influence reasoning after it has been corrected. In other words, once a claim slips into your memory, your brain finds it sticky. We’re also prone to belief perseverance - stubbornly clinging to initial beliefs even after the evidence behind them has been discredited. If a story aligns with your prior views or fears, simply hearing “that’s not true” may not fully erase its impact. This is why rumours and conspiracy theories can be so resilient. Our cognitive biases (like confirmation bias, where we favour information that confirms what we want to believe) act as an internal echo chamber, reinforcing the falsehood.

Social dynamics amplify these psychological quirks. In the age of social media, information moves at lightning speed, often unchecked. According to one EU survey, 37% of people encounter fake news almost daily, and 83% believe it’s a “danger to democracy”. Networks like Facebook and X (Twitter) create echo chambers where sensational misinformation can thrive on shares and likes. And let’s not forget the profit motive: fake news can be lucrative. As MIT’s Deb Roy noted, “polarization is a great business model” – outrage and clickbait generate ad revenue. In this ecosystem, misinformation spreads globally not just because of AI, but because human psychology and algorithms together favour the dramatic and the divisive.

Image by AmoreSeymour from Pixabay

Generative AI: a new amplifier or an overblown scapegoat?

Enter generative AI – language models that can produce endless text, image models conjuring deepfakes on demand, voice clones imitating anyone. It's easy to see why many worry that these tools will add fuel to the fire. Yes, generative AI reduces the cost and skill required to create misleading content. A malicious actor no longer needs a Hollywood studio to doctor videos - they can ask an AI to generate a fake speech by a CEO or a photorealistic image of an event that never happened. Indeed, a recent analysis by Google DeepMind found the most prevalent misuse of generative AI is creating deepfake images, videos or audio of public figures - roughly twice as common as using AI to fabricate text posts. The top goal in these AI-driven deception cases? To influence public opinion (e.g. sway voters), which was the motive in 27% of incidents studied.

But before we declare truth defeated by technology, let's examine the evidence. Some leading researchers argue that the doomsday scenarios are exaggerated. Felix Simon, an Oxford Internet Institute researcher, recently led an analysis titled “Misinformation Reloaded?” and concluded that fears of generative AI upending the misinformation landscape are largely overblown. Why might AI’s impact be more modest than assumed? For one, increasing the supply of fake content doesn’t automatically mean people will consume more of it. We’re already inundated with more content than we can read or watch; bad actors spewing out even more AI-generated posts may simply end up shouting into the void or preaching to the already converted. Secondly, while AI can make fakes look or sound more “real,” the quality of misinformation needed to fool people isn’t always high. Many viral falsehoods are low-tech - catchy headlines, rumors, or memes - where ultra-realistic AI polish adds little. As Simon notes, most people have minimal exposure to hyper-realistic deepfakes in daily life, and much of the misinformation that actually persuades doesn’t require Hollywood effects. In fact, making a fake more realistic can even conflict with misinformers’ goals - a slick, perfectly factual-sounding fake might paradoxically raise suspicion or lose the emotive punch of a rougher hoax.

We’ve seen this play out in real politics. In the more than 50 national elections held around the world in 2024, the feared wave of AI deepfake disruption mostly failed to materialise. A report from Meta (Facebook’s parent) noted that less than 1% of the misinformation fact-checkers flagged during the 2024 election cycles was AI-generated content. The vast bulk of fake news was the old-fashioned kind - misleading claims by humans, doctored images made in Photoshop, propaganda spun from half-truths. As one analyst quipped, instead of a deluge of perfect deepfakes, we saw “memes, propaganda and poor quality ‘AI slop’ - none of which turned the tide in any candidate’s favour.” In the world’s biggest election (India 2024), and many others, there was no massive AI plot twist. This isn’t to say AI deception never happens - it does, and we must stay vigilant - but it suggests generative AI is an amplifier of misinformation patterns we already know, rather than an all-new beast. Misinformation was rampant long before ChatGPT, and it usually succeeds due to human factors (like bias, tribalism, social media virality) more than AI magic.

How misinformation spreads - and why debunking it matters

If generative AI isn’t a lone culprit, what does drive the spread of misinformation? As we saw, human psychology and social networks form a potent combination. False narratives often hitch a ride on our emotions - outrage, fear, or even curiosity - prompting us to share impulsively. They also thrive in communities that echo our biases: when we see our friends or colleagues share something, we’re more likely to believe and re-share it. Add algorithms that boost posts with high engagement (often the most emotionally charged posts), and misinformation can go viral in hours.
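
To make that amplification logic concrete, here is a toy branching-process simulation - a sketch, not a model of any real platform, and every number in it is invented for illustration (the `simulated_reach` function, share probabilities and audience sizes are all hypothetical). It shows how even a modest rise in the chance that each viewer re-shares a post - the kind of bump that emotional framing or algorithmic boosting can provide - compounds generation after generation.

```python
# Illustrative only: a toy branching-process model of how a post spreads.
# Every number here is invented for demonstration; none is drawn from
# any study cited in this article.
import random

def simulated_reach(share_prob, avg_audience=20, generations=6, seed=1):
    """Total viewers when each viewer re-shares with probability `share_prob`
    to roughly `avg_audience` new people, over a fixed number of generations."""
    rng = random.Random(seed)
    viewers = avg_audience          # people who see the original post
    total = viewers
    for _ in range(generations):
        sharers = sum(1 for _ in range(viewers) if rng.random() < share_prob)
        viewers = sharers * avg_audience
        total += viewers
    return total

# A modest bump in per-viewer share probability compounds every generation.
print(simulated_reach(share_prob=0.04))   # sober, accurate post
print(simulated_reach(share_prob=0.08))   # novel, outrage-inducing post
```

Below a certain share rate the post fizzles out after a generation or two; above it, reach snowballs - one reason content engineered to provoke a reaction travels so much further than sober corrections.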

Debunking is our natural response - fact-checkers, news outlets, and platforms rush to correct the record. But debunking is a tricky art. If done clumsily, it can backfire or be ignored; done well, it can substantially limit a false story’s damage. Timing is crucial: the sooner false information is corrected, the less time it has to root itself in people’s minds. Multiple studies show that a clear correction can reduce belief in misinformation, but rarely erases it completely. This is due to the “mental residue” effect we discussed - even after accepting a correction, people often remember parts of the myth (or the feelings it caused). Therefore, debunkers have learned not to repeat the myth unnecessarily (to avoid reinforcing it) and instead to focus on the facts and an alternative explanation. For example, instead of saying “No, there are no enormous structures underneath the Great Pyramid that go kilometres deep into the Earth with coils - that’s a myth,” an effective debunk might say “There are no hidden structures beneath the Great Pyramid. This rumour started from an unproven study, and extensive research has found no evidence for it.” The latter statement corrects the falsehood and fills the gap with an explanation, which helps the truth stick better.

Image by StockSnap from Pixabay

Navigating the age of AI: tips for outsmarting misinformation

Whether misinformation comes from a state-sponsored troll farm or an overeager AI chatbot, professionals and the public alike can use evidence-based strategies to filter fact from fiction. Here are some practical tips grounded in psychology and media research:

Be sceptical of sensationalism: Pause when a piece of content triggers a strong emotional reaction - excitement, anger, outrage, vindication. Scammers and false-news peddlers often exploit our emotions. As one expert noted, if a news story feels too perfectly tailored to confirm your beliefs (or a conspiracy theory), that’s a red flag to double-check. A healthy dose of scepticism doesn’t mean cynicism; it means not taking things at face value, especially online.

Verify with multiple sources: Don’t rely on a single tweet, post, or AI output for the truth. Cross-check the information via reputable sources. If an image or claim seems shocking, search if credible news outlets are reporting it. Use fact-checking websites (e.g. PolitiFact, Reuters Fact Check, Snopes) to see if the claim has been investigated. Often, a quick search will reveal if something is a known hoax. In the digital age, lateral reading - opening another tab to see what others say about the source or claim - is a superpower. Five minutes of checking can save you (and your organisation) from sharing a costly mistake.

Check the source and context: Consider who is providing the information. Is it an established news organisation, a scientist, or a random anonymous account? Look for signs of reliability: Do they cite evidence? Does the website have an About page, contact info, and a track record? Conversely, be wary of screenshots or forwarded messages with no traceable source. Check the date as well - sometimes old news or satire gets recirculated as current truth. And remember to distinguish news vs. opinion: a biased opinion piece isn’t outright misinformation, but it may present selective facts to persuade. Recognising that distinction can help you weigh how much additional verification you need.

Watch for AI “tells” in content: When dealing with images, video, or audio, be aware of the common artifacts of AI-generated media. These are getting harder to spot, but glitches like unnatural lighting, inconsistent reflections, odd finger counts, or mismatched lip-sync in videos can give away a deepfake. Newer AI models make fewer mistakes, but rarely are AI fakes 100% flawless. If something looks almost too real, zoom in and inspect anomalies - or use reverse image search tools to see if the image is original or has been used elsewhere (a short sketch after these tips illustrates two such checks). Likewise, if you hear a “leaked” audio of a public figure saying something outrageous, consider: is there also video? Could it be an AI voice clone? Until proven otherwise, maintain healthy doubt toward sensational audiovisual “leaks” that lack corroborating evidence.

Mind your cognitive biases: Perhaps the hardest step is an internal one - recognise your own biases and emotional triggers. Misinformation often succeeds by pressing our buttons: we’re quicker to believe falsehoods that validate our worldview or identity. To counter this, practice a moment of reflection: “Am I believing or sharing this just because I agree with it or find it shocking?” Actively seeking out contrasting viewpoints or devil’s-advocate analyses can also balance our perspective. Even simple awareness of biases (like knowing we all have a tendency to confirm our prior beliefs) can help us slow down and apply more critical thinking.

Use tools - but keep human judgment in the loop: Generative AI may be part of the problem, but ironically, it’s also spawning solutions. AI-driven systems can help detect fake images or flag dubious claims at scale. For example, social media platforms are developing content authenticity algorithms and metadata watermarks to identify AI outputs. As a professional, stay updated on verification tools: there are browser extensions that highlight likely AI text, and image forensics apps that analyse whether a photo has been manipulated. While no tool is foolproof, they can provide useful signals. Ultimately, however, the human in the loop (you!) remains crucial - technology can assist, but not replace, good judgment.
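
Most of the tools mentioned above are full products rather than a few lines of code, but two of the simpler checks - comparing an image against a known copy and inspecting its embedded metadata - can be sketched. The snippet below is a minimal illustration, assuming the third-party Pillow and ImageHash packages are installed and using hypothetical file names; real forensic tools go much further, and a missing EXIF block or a matching hash is a weak signal, not proof, in either direction.

```python
# Minimal sketch of two low-tech image checks. Assumes the third-party
# packages Pillow and ImageHash are installed:  pip install pillow imagehash
from PIL import Image, ExifTags
import imagehash

def quick_image_checks(path, known_original=None):
    """Print a perceptual hash and any EXIF metadata for the image at `path`."""
    img = Image.open(path)

    # 1. Perceptual hash: visually similar images produce nearby hashes, so a
    #    small distance to a known image suggests a re-used or lightly edited
    #    copy rather than a genuinely new photo.
    phash = imagehash.phash(img)
    print(f"perceptual hash: {phash}")
    if known_original is not None:
        distance = phash - imagehash.phash(Image.open(known_original))
        print(f"hash distance to known image: {distance}")

    # 2. EXIF metadata: camera photos usually carry some; AI-generated or
    #    heavily re-encoded images often carry none. Absence is only a weak
    #    signal, never proof either way.
    exif = img.getexif()
    if not exif:
        print("no EXIF metadata found")
    else:
        for tag_id, value in exif.items():
            print(f"{ExifTags.TAGS.get(tag_id, tag_id)}: {value}")

# Hypothetical file names, for illustration only.
quick_image_checks("suspicious_post.jpg", known_original="original_photo.jpg")
```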

Conclusion: A balanced perspective on AI and misinformation

It's easy to see generative AI as a Pandora's box that will unleash a post-truth era. Indeed, there are real risks - AI can supercharge the scale and speed of misinformation, and bad actors are eagerly experimenting with it. We should take those risks seriously (as regulators and companies are now doing). But as we’ve seen, the narrative “GEN AI fuels misinformation” is an oversimplification. Misinformation is fundamentally a human phenomenon: it exploits our psychology, our social connections, and our institutions (or weaknesses therein). Generative AI is a new tool in the mix - one that amplifies existing problems more than it invents entirely new ones. The good news is that the very same tool can be turned toward solutions (from AI-assisted fact-checking to better educational simulations that teach critical thinking).

So instead of alarmism, a clear-eyed approach is warranted. Yes, be vigilant about AI-generated fakes and demand transparency (e.g. content labels or watermarks) from AI developers. Yes, support policies and initiatives that rein in the malicious use of these technologies (many governments and platforms are now drafting guidelines). But also remember that our brains are the first line of defence. By understanding why we fall for misinformation and implementing savvy strategies to vet information, we can greatly reduce the risk of being misled - whether the lie comes from a troll or a transformer model. As professionals, this means fostering a culture of critical inquiry in our teams and networks: celebrate those who check the facts, and don’t penalise people for pausing to verify rather than being first to share.

In the end, generative AI is just that - generative. It can generate junk, or knowledge; erode trust, or perhaps enhance it - depending on how we choose to use it. The myth that “AI inevitably fuels misinformation” sells short our own agency in shaping the outcome. Armed with data-driven insights and psychological savvy, we can bust the myth and navigate the AI era wisely. The tools for truth are in our hands - let’s use them.

Additional sources:

● The Associated Press. (2024, January 10). AI-powered misinformation is the world’s biggest short-term threat. AP News. https://apnews.com/article/artificial-intelligence-davos-misinformation-disinformation-climate-change-106a1347ca9f987bf71da1f86a141968

● Ecker, U. K. H., Lewandowsky, S., Cook, J., et al. (2022, January 12). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology. https://doi.org/10.1038/s44159-021-00006-y

● Siebert, J., & Siebert, J. U. (2023, March 8). Effective mitigation of the belief perseverance bias after the retraction of misinformation: Awareness training and counter-speech. PLOS ONE, 18(3), e0282202. https://doi.org/10.1371/journal.pone.0282202

● World Economic Forum. (2025, January 10). How industrial data can help unleash productivity, innovation and sustainability. https://www.weforum.org/stories/2025/01/deepfakes-different-threat-than-expected/#:~:text=elections%20perilous

● USC Sol Price School of Public Policy. (2024, July 2). How to spot AI fake news – and what policymakers can do to help. https://priceschool.usc.edu/news/ai-election-disinformation-biden-california-europe/#:~:text=,should%20be%20a%20red%20flag
