Meta also wants its own image-generating artificial intelligence (AI), like the popular "DALL-E", although in the case of Facebook's parent company, its intentions go further.
Meta’s AI would allow generating immersive virtual worlds from a user’s sketch
While "DALL-E" needs only a few words to generate several variants of an image from a brief description, the purpose of "Make-a-Scene", as Meta's image-generating AI is called, is more ambitious.
Since Meta's great ambition is the metaverse, the goal of "Make-a-Scene" would be to generate immersive virtual worlds following user instructions, even if the user lacks the talent to create such environments.
Going a step further, "Make-a-Scene" would be able to generate these worlds from a sketch provided by the user. Beyond the simple textual description that serves as the basis for "DALL-E", Meta's AI would display a window in which to define the position and size of objects. Combined with a simple sketch, the output would be closer to what the user ultimately wants. It can also take color cues and descriptions of key items and objects.
Where "Make-a-Scene" would differ from "DALL-E" is in the final look: rather than photorealism, Meta's AI aims for a more artistic style, so that the result almost looks hand-painted.
In terms of resolution, "Make-a-Scene" works at 2,048×2,048 pixels, but for now it will not be easy to try out, since it is in a closed testing phase. This means that only those who have received a specific invitation from Meta will be able to work with this AI. So far, no date has been announced for its official presentation.