Meta’s Movie Gen Makes Convincing AI Video Clips

Key Points

Meta just announced its own media-focused AI model, called Movie Gen, that can be used to generate realistic video and audio clips.

While the tool is not yet available for use, the Movie Gen announcement comes shortly after the company's Meta Connect event, which showcased new and refreshed hardware and the latest version of its large language model, Llama 3.2.

Going beyond the generation of straightforward text-to-video clips, the Movie Gen model can make targeted edits to an existing clip, like adding an object into someone's hands or changing the appearance of a surface.

(A model's parameter count roughly corresponds to how capable it is; for comparison, the largest variant of Llama 3.1 has 405 billion parameters.) Movie Gen can produce high-definition videos up to 16 seconds long, and Meta claims that it outperforms competing models in overall video quality.

The sources of training data, and what's fair to scrape from the web, remain contentious issues for generative AI tools, and it's rarely public knowledge what text, video, or audio clips were used to create any of the major models.