Google has announced a new tool, called Transframer, that can create short videos based on a single image. It comes from DeepMind, Google's artificial intelligence research lab. The technology could augment traditional rendering solutions, allowing developers to create virtual environments with the help of machine learning.
The 30-second videos go beyond simple looping formats like the GIF: starting from a single image of a corridor, a piece of furniture, or a street, the tool generates different ways of representing the subject.
Transframer builds on technology similar to the Transformer architecture, introduced in 2017, which generates text by weighing how each word relates to the other words in a sentence. In the case of Google's new tool, the model draws on a bank of previously analyzed images to set up a scene and create short videos.
Transframer is a general-purpose generative framework that can handle many image and video tasks in a probabilistic setting. New work shows it excels in video prediction and view synthesis, and can generate 30s videos from a single image: https://t.co/wX3nrrYEEa 1/ pic.twitter.com/gQk6f9nZyg
— DeepMind (@DeepMind) August 15, 2022
The final result lets the user see the scenery around the subject, even when the original image contains no data about how the landscape continues. Based on the context images, the AI predicts different angles of the same scene.
It is worth pointing out that this new video technology can make these predictions from a limited set of data, which could bring advances in semantic segmentation, image classification, and optical flow prediction. For the gaming industry, the potential gains span basic rendering techniques such as shading, texture mapping, depth of field, and ray tracing. In short, Transframer could make it possible to build a virtual scenographic environment in far less time.