China has rolled out strict new regulations requiring all AI-generated content to be clearly labeled, aiming to combat misinformation and protect digital authenticity. Effective this week, platforms must identify AI-created text, images, audio, and video, either with visible labels or embedded metadata. 🛡️
The rules, backed by the Cyberspace Administration of China (CAC), introduce a three-tier review system: platforms must scan content before publishing, tag confirmed AI material, and add warnings for suspected AI-generated content (AIGC). Zhang Jiyu, a legal tech expert at Renmin University, summed up the approach: "If AI markers are detected, label it. If it's just a hunch, flag it as 'suspected.'" He also stressed the need for safeguards so creators' genuine work isn't wrongfully tagged. ⚖️
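The tiering Zhang describes amounts to a simple decision rule. A minimal, purely illustrative sketch (the function name, score, and threshold are assumptions for illustration, not part of the regulation):

```python
def review(has_ai_metadata: bool, heuristic_score: float, threshold: float = 0.8) -> str:
    """Return the label tier for a piece of content under the three-tier scheme.

    has_ai_metadata: an explicit AI marker was found in the file's metadata.
    heuristic_score: a hypothetical platform-side confidence (0.0-1.0) that
                     the content is AI-generated when no marker is present.
    """
    if has_ai_metadata:
        # Confirmed: explicit AI marker detected, so label it outright.
        return "ai-generated"
    if heuristic_score >= threshold:
        # No marker, but the platform suspects AIGC: add a warning instead.
        return "suspected-aigc"
    # No evidence of AI generation: publish without a label.
    return "unlabeled"
```

For example, content carrying an AI metadata marker would be tagged `ai-generated`, while content with no marker but a high suspicion score would be flagged `suspected-aigc`, keeping the two tiers distinct as Zhang outlines.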
This isn't China's first move to rein in AI risks. Earlier this year, a crackdown on deepfakes and unlabeled AI content scrubbed more than 960,000 pieces of harmful material from the web. Globally, the push for transparency is growing, and China's 2023 deep synthesis rules were among the world's earliest AI labeling mandates. 🌍
At July's World AI Conference in Shanghai, Geoffrey Hinton, the so-called "godfather of AI," likened AI to raising a tiger cub: "It's cute now, but we need to ensure it doesn't eat us later." With these regulations, China is betting on training the tiger, not eliminating it. 🐅
Reference(s):
cgtn.com