
AI Expert Warns: Superintelligence Could Escape Human Control 🌐🤖

MIT physicist Max Tegmark dropped a truth bomb at Lisbon's Web Summit this week: AI development is racing ahead without safety nets. 🔥 Speaking with RAZOR, he warned that companies are building systems that could surpass human intelligence as early as 2025 – potentially creating machines that "learn, adapt, and act autonomously" beyond our control.

While current AI helps write emails or recommend songs 🎧, researchers are pushing toward Artificial General Intelligence (AGI) – systems with human-like reasoning. The real red alert? Tegmark says combining superintelligence with physical autonomy could create "machines making decisions we can't understand or stop." 🚨

Here's the twist: Your toaster undergoes more safety testing than cutting-edge AI. 🍞 Tegmark argues for aviation-style regulations: "Why does AI get a free pass when lives are at stake?" His Future of Life Institute (founded in 2014) is rallying global experts to push for mandatory safeguards.

But it's not all doomscrolling! 📱 The scientist remains hopeful, noting growing public awareness and cross-border collaboration. With smart regulations, he believes AI could "cure diseases and solve climate change without turning into Skynet." 💊🌱

Dive deeper with RAZOR:
AI vs Wildfires
Saving Salmon with Algorithms
