OpenAI Flagged Canada Shooter Months Before Attack: AI Safety Debate Ignites 🔍🇨🇦

In a revelation shaking tech and law enforcement circles, OpenAI identified violent ChatGPT conversations linked to Jesse Van Rootselaar – the suspect in last week’s Tumbler Ridge school shooting – over a year before the tragedy. 🚨

Internal documents show the company banned his account in 2025 after AI safety systems detected graphic violent scenarios. While employees debated alerting authorities, OpenAI ultimately prioritized user privacy and opted to remove the account instead. 💻⚖️

"We constantly balance safety with ethical responsibility," spokesperson Kayla Wood told media, as critics question if earlier intervention could’ve prevented the attack that left 8 dead in rural British Columbia.

RCMP investigators confirmed OpenAI shared critical digital evidence after the attack, including the suspect’s chatbot activity. 🕵️‍♂️ "Every digital breadcrumb matters," said Staff Sgt. Kris Clark as police analyze terabytes of data through a new public evidence portal.

The tragedy reignites global debates: Should AI companies act as digital watchdogs? How much responsibility do platforms bear for what their users disclose? 🤖⚡️ With U.S. and EU regulators already drafting new AI oversight laws, this case could redefine tech accountability worldwide.
