AI News Assistants Struggle with Accuracy, Study Reveals 🚨

Your go-to AI chatbot might not be the reliable news guru you think it is. A new study by the European Broadcasting Union (EBU) and the BBC found that nearly half of the responses from popular AI assistants, including ChatGPT, Gemini, and Copilot, contained significant errors about current events. 😬

Key Findings: Fact or Fiction?

Researchers analyzed around 3,000 news-related answers across 14 languages. Shockingly, 45% had at least one significant issue—think "Pope Francis is still alive" levels of wrong. Gemini, Google's AI assistant, had sourcing errors in 72% of its responses, while the other assistants averaged around 25%. Talk about a game of broken telephone! 📞💥

Why It Matters for Gen Z

With 15% of under-25s using AI assistants for news (per the Reuters Institute), mistakes like misquoting laws or citing outdated information could fuel distrust. "If people don't know what to trust, they end up trusting nothing," warns the EBU's Jean Philip De Tender. Cue the existential crisis for democracy. 🌍⚖️

Tech Giants Respond

Google says Gemini is a work in progress, while OpenAI and Microsoft admit they are still battling "hallucinations" (the industry's term for AI confidently making things up). Perplexity claims its "Deep Research" mode is 93.9% accurate—but the study's findings beg to differ. 🤖🔍

As AI assistants increasingly replace search engines as a first stop for news, experts urge the companies behind them to step up. After all, who wants their news served with a side of fiction? 📰✨
