Runway, a developer of AI-powered content creation tools, has announced that GEN-2, the latest version of its generative video model, is now available to all users.
GEN-2 was officially introduced about three months ago as a multimodal system that, in addition to text-to-video generation, offers users seven additional modes for their creations.
A multimodal system to embrace new creative possibilities
These new modes are Text + Image to Video; Image to Video (supporting different variations); Stylization (transfers the style of any image or text prompt to every frame of a video); Storyboard (turns mockups into stylized, animated videos); Mask (isolates elements of a video so they can be modified as desired); Render (uses texture-free renders to generate realistic video from an input image or text); and Personalization (customizes the AI model for more reliable results).
Available on both web and mobile
GEN-2 is now at users' disposal both on the web and through the mobile application available for iOS.
In either case, users need a free Runway account to log in and get started, and can use the credits the platform grants from the outset to try out its tools.
If those run out, users can subscribe to a monthly plan that includes a credit allowance, and purchase further credits on top of it if the allowance proves insufficient.
The generated videos will be completely owned by the users.
Runway promises that users will retain full ownership of the videos they generate, even though the results are only a few seconds long. With this step, Runway advances further into the segment of generative AI applied to video creation, alongside many other "magical" video-related tools.
More information: Runway
Image: User Panel / Credit: Runway