OpenAI delivers the most anticipated news: by the end of the year we will be able to enjoy its video AI


[Image: still from a Sora-generated video]

A little over a month ago, OpenAI surprised everyone with its latest Artificial Intelligence model: Sora, a tool that creates videos from text with a quality never seen before. Its launch, however, was restricted to a select number of users. Now we finally know when the final version will be generally available.

Sora has been one of the most notable developments in Artificial Intelligence in early 2024: a new technology that, through text instructions, creates videos of up to 60 seconds with surprising quality. Presented just a month ago, its launch was initially limited to a select group of users responsible for finding opportunities to improve the model and pushing it to its limits to see what it is capable of. However, OpenAI had not yet said when it planned a wider rollout to all users. Yesterday, that doubt was definitively cleared up.

[Image: frame from a Sora video]

2024, the year of Sora

Mira Murati, CTO of OpenAI, took advantage of an interview with The Wall Street Journal to announce the roadmap Sora will follow in the coming months. Murati said that Sora is scheduled to arrive "definitely this year", specifying further, "in a few months". She did not, however, venture an exact date that would indicate how close the release might be. Everything seems to suggest that we will have to wait at least until the middle of the year to begin enjoying all its possibilities.

Murati also offered more information about the performance we can expect from Sora. For the moment, generated videos will not include audio tracks, so every video will be silent and any audio will have to be added and edited separately afterwards.

The really surprising aspect, however, is that once Sora has created the final video, users will be able to edit it through text commands. In this case, Murati did not specify whether this capability will come later, as the audio feature is expected to, or whether it will be available from day one.

Sora’s training

The interview also touched on one of the most controversial aspects surrounding AI models since their creation: the data used to train them. On this point, Murati stated that, to train Sora to achieve its current results, only publicly available videos were used, along with videos licensed from Shutterstock.

However, some questions that could be highly relevant to understanding the final results, such as whether YouTube videos were used to train the model, were left unanswered. So, at least for now, we will have to keep waiting to learn more about how Sora was trained to reach the point where it is today.
