Canadian officials have issued an ultimatum to OpenAI: strengthen AI safety protocols immediately or face government intervention. This comes after revelations that OpenAI failed to report a banned ChatGPT account linked to February’s tragic school shooting in British Columbia that left eight dead. 🔍
The Incident That Sparked the Debate
Jesse Van Rootselaar, 18, who police say had a history of mental health challenges, allegedly carried out the attack in Tumbler Ridge on February 10 before taking their own life. OpenAI confirmed it had banned their account in 2025 for policy violations but did not alert authorities, saying the case did not meet its internal risk thresholds.
Government Turns Up the Heat
Justice Minister Sean Fraser stated: "We’ve made clear that changes must happen fast – if not, we’ll legislate them." The warning follows Tuesday’s emergency meeting between Ottawa and OpenAI’s safety team. 🤖⚖️
Broader Tech Regulation Push
This clash comes as Canada revives efforts to combat online harms after its 2024 legislation stalled. AI Minister Evan Solomon emphasized: "If any company can prevent future tragedies, they must act." Critics argue both tech firms and authorities missed warning signs: police had temporarily confiscated Van Rootselaar’s firearms before the attack.
OpenAI says it is working on updated safety measures, but with public pressure mounting, 2026 is shaping up to be a pivotal year for AI accountability. 🌐✨
Reference(s):
Canada tells OpenAI to boost safety measures or face government action
cgtn.com