
AI Transcription Tool Caught Making Up Fake Quotes 🤖💬

OpenAI's AI-powered transcription tool Whisper, praised for its 'human-level accuracy', is under fire for inventing text that was never spoken—from racial remarks to fake medical advice. 🚨 Experts warn these hallucinations could have real-world consequences as the tech spreads across industries.

When AI Gets Too Creative

Whisper, used globally for translating interviews, generating subtitles, and even transcribing doctor-patient chats, has a habit of making things up. Software engineers and researchers say the tool adds entire sentences that don’t exist in the original audio. Imagine your doctor’s notes including a non-existent treatment! 😱
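
For context, this is roughly how the open-source version of Whisper gets wired into those pipelines (a minimal sketch assuming the `openai-whisper` Python package; the model size and file name are placeholders, not anything from the reports):

```python
# Minimal sketch of a typical open-source Whisper transcription call.
# Assumes the `openai-whisper` package (pip install openai-whisper).
import whisper

model = whisper.load_model("base")           # "base" is one of several available model sizes
result = model.transcribe("interview.mp3")   # hypothetical input audio file
print(result["text"])                        # the transcript, with no flag for which parts, if any, were invented
```

The catch: the output is just text. Nothing in it marks which sentences came from the audio and which the model dreamed up.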

Medical Fields at Risk

Despite OpenAI’s warnings to avoid using Whisper in 'high-risk' settings like healthcare, some hospitals are adopting it. Think: "Your diagnosis is… something the AI invented?" 🤯 One University of Michigan researcher found errors in 80% of public meeting transcripts before tweaking the model.

How Bad Is It?

  • 👨‍💻 A developer found made-up text in nearly all 26,000 transcripts they tested.
  • 📊 Over 13,000 clear audio clips still had 187 hallucinations.
  • 🌍 At scale, this could mean millions of errors worldwide (rough math below).
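
To see how you get from 187 hallucinations to "millions," here's a quick back-of-the-envelope calculation. The study figures come from the reports above, but the global volume number is a made-up assumption purely for illustration:

```python
# Rough extrapolation of the reported hallucination figures (illustrative only)
hallucinations = 187
clips = 13_000                     # "over 13,000" clips in the study; rounded for this sketch
rate = hallucinations / clips      # roughly 1.4 hallucinations per 100 clips

# Hypothetical global volume of clips transcribed per year -- an assumption, not a reported number
clips_per_year_worldwide = 500_000_000
print(f"~{rate * clips_per_year_worldwide:,.0f} hallucinated passages per year at that rate")
```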

As one engineer put it: "AI’s creativity is cool until it starts rewriting reality." 🔥
