Meta just dropped a game-changing AI model, Movie Gen, that can create realistic video clips with synchronized sound 🎥🔊. Think of it as ChatGPT for videos: type a prompt, and voilà—you get a 16-second clip (or 45 seconds of audio) featuring anything from surfing penguins to a Bob Ross-style painting session. 🐧🎨
Why It’s Lit
Meta’s demo reel shows AI-generated clips rivaling output from OpenAI, ElevenLabs, and Runway. The kicker? It can also edit existing videos and craft background music that matches the action 🎶. Blind tests reportedly favor Movie Gen over competitors, but Hollywood’s not all cheers.
Hollywood’s AI Love-Hate Saga
As studios like Lionsgate (Hunger Games, Twilight) partner with AI startups, creatives are split. Some see faster filmmaking; others fear copyright chaos 🤯. Remember Scarlett Johansson’s voice clone drama with OpenAI? That energy’s still lingering.
Meta’s Play-It-Safe Strategy
Unlike with its open-source Llama models, Meta is keeping Movie Gen under wraps 🔒. It’s collaborating directly with filmmakers and plans to bake the tool into Meta apps by 2025. No public release for now: risks like election deepfakes (👀 U.S., India, Pakistan) are too hot.
Behind the Scenes
Meta trained Movie Gen on licensed and publicly available data; the details? Classified 🤫. Meanwhile, OpenAI’s Sora is schmoozing Hollywood execs, but no deals yet. The AI arms race just got a Hollywood script twist… 🍿
Reference(s):
cgtn.com