Artificial intelligence can also generate videos from text: this is Meta’s Make-A-Video

Meta has just introduced Make-A-Video, an artificial intelligence (AI) system that creates short video clips from written prompts. Does this sound familiar? If so, you probably already know about the current wave of text-to-image generators.

DALL-E, Midjourney, and Stable Diffusion are some of the most popular AI-powered tools at the moment, but many other projects are being developed in parallel, some of which are also moving toward generating video from text.

Meta believes that Make-A-Video will be a valuable tool for artists

Make-A-Video is one of the latest projects from Meta’s artificial intelligence lab. Researchers at the parent company of Facebook and Instagram have been working with AI models for some time; in fact, they have their own text-to-image generator called Make-A-Scene.

Now the lab has taken an important step forward with Make-A-Video, at least according to Mark Zuckerberg, who says that “it’s much harder to generate videos than photos because, beyond correctly generating each pixel, the system also has to predict how they will change over time.”

In the video above we can see how this AI system performs with the following text prompts: “a teddy bear painting a portrait”, “a robot dancing in Times Square” and “a cat watching television with a remote in hand”. The result is very interesting.

According to a whitepaper, Make-A-Video’s model, like many others, was trained on two large datasets collected from the web that include the work of creators who receive nothing in return: WebVid-10M, with 52,000 hours of video, and HD-VILA-100M, with 3.3 million videos.


The issue of copyright in image and video generation models is beginning to come to the fore: Getty and other stock image banks have banned AI-generated images. Meta, however, believes that both Make-A-Scene and Make-A-Video will become invaluable tools for creators and artists.

These models are not yet available to users, though Meta says it will launch a demo. DALL-E 2, which was long limited to a select group of users, removed its waiting list this week and is now available to everyone.

If you’re interested in trying Make-A-Video, you can fill out a Meta form to receive an email when it’s finally available. For now we don’t know when it will arrive or whether it will initially have a waiting list. We’ll have to wait to try the possibilities of this system first-hand.
