AI Safety Check! 🤖 US Government to Vet New Models Before Public Release

Hold up! Just when we thought AI was moving at warp speed with zero brakes, the US government is stepping in. In a major policy shift announced this past Tuesday, the US is moving to assess new artificial intelligence models before they even hit the market. 🛡️

For a while, things were pretty "hands-off" in Silicon Valley, allowing tech giants to roll out life-changing tech at a breakneck pace. But the tide is turning. The government has now inked deals with big players like Google DeepMind, Microsoft, and xAI to get a sneak peek and evaluate their newest models before the rest of us get our hands on them.

So, what sparked this sudden change of heart? Enter Mythos. 🚀 This powerhouse model from the San Francisco start-up Anthropic is so good at finding software security vulnerabilities that it's practically a cybersecurity wake-up call. In fact, Anthropic decided it was too risky to release publicly for now. Word on the street is that the National Security Agency (NSA) has already gained access to Mythos to run some serious tests.

The heavy lifting will be handled by the Center for AI Standards and Innovation (CAISI), part of the Commerce Department. CAISI (which replaced the AI Safety Institute) is tasked with conducting pre-deployment evaluations to make sure frontier AI doesn't accidentally break the internet or compromise national security.

This move is a bit of a political remix. While some of these partnerships started under the Biden administration, they've been renegotiated under Donald Trump. There's even talk of a new executive order to create a working group featuring both tech execs and government officials to streamline how these reviews happen. 📝

As Chris Fall, Director of CAISI, put it, rigorous science is essential to understanding the security implications of these "frontier" AI tools. For the rest of us, it means the AI tools we use in the future might be a bit more vetted—and hopefully a lot safer! ✨
