The second version of Stable Diffusion arrives with these new features


We are once again talking about Artificial Intelligence projects that generate images automatically, from scratch, guided only by text instructions. The approach has proved so revolutionary that some popular creativity applications and services are adding features based on these projects.

And among the most outstanding Artificial Intelligence projects in this emerging sector is Stable Diffusion, which as of today has a version 2.0 with quite notable new features.

Stability AI notes that this release includes robust text-to-image models trained with a completely new text encoder (OpenCLIP), developed by LAION with the support of Stability AI, promising improvements in image quality over the previous version.

And they indicate that:

Text-to-image models in this release can output images with default resolutions of 512×512 pixels and 768×768 pixels
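The announcement itself contains no code, but a minimal sketch of what generating at the new default resolution looks like with Hugging Face's diffusers library might be as follows (the model id `stabilityai/stable-diffusion-2` and the GPU requirements are assumptions, not something stated in the article):

```python
# Sketch of text-to-image generation with Stable Diffusion 2.0 via the
# Hugging Face diffusers library. The model id below is assumed to be the
# public 768x768 base checkpoint; a CUDA GPU with enough VRAM is assumed.

MODEL_ID = "stabilityai/stable-diffusion-2"

def generate(prompt: str, out_path: str = "out.png") -> None:
    # Heavy imports kept inside the function so the sketch can be read
    # without pulling in torch/diffusers at module import time.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")
    # The 2.0 base model defaults to 768x768 output.
    image = pipe(prompt, height=768, width=768).images[0]
    image.save(out_path)

# Usage (downloads several GB of weights on first run):
# generate("an astronaut riding a horse on Mars")
```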

This version also addresses one of the recurring problems in this type of project: the generation of content that could be classified as adult-only. To that end, the LAION NSFW filter is included to remove this type of content.

This version also includes a new Upscaler Diffusion model capable of enhancing image resolution by a factor of 4; as an example, an image generated at 128×128 can be upscaled to a higher-resolution 512×512 image.

Stability AI adds:

Combined with our text-to-image models, Stable Diffusion 2.0 can now output images with resolutions of 2048×2048 or even higher.
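A sketch of that 4× upscaling step, again via diffusers, could look like this (the `stabilityai/stable-diffusion-x4-upscaler` model id is an assumption based on the checkpoint name on the Hub, not something the article specifies):

```python
# Sketch of the 4x Upscaler Diffusion model described above.
# Assumed checkpoint: stabilityai/stable-diffusion-x4-upscaler.

UPSCALER_ID = "stabilityai/stable-diffusion-x4-upscaler"
SCALE_FACTOR = 4  # e.g. a 128x128 input comes out at 512x512

def upscale(image_path: str, prompt: str, out_path: str = "upscaled.png") -> None:
    import torch
    from diffusers import StableDiffusionUpscalePipeline
    from PIL import Image

    pipe = StableDiffusionUpscalePipeline.from_pretrained(
        UPSCALER_ID, torch_dtype=torch.float16
    ).to("cuda")
    low_res = Image.open(image_path).convert("RGB")
    # The upscaler is itself text-conditioned: the prompt describes the image.
    upscaled = pipe(prompt=prompt, image=low_res).images[0]
    upscaled.save(out_path)
```

Chaining this after the base 768×768 model is what allows the 2048×2048 (or higher) outputs the quote above mentions.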

But in addition, this version includes depth2img, a new model that infers the depth of an input image and generates new images using both the text prompt and that depth information, as in the example that leads this article.

And they say that:

Depth-to-Image can offer all kinds of new creative applications, providing transformations that look radically different from the original but still retain the coherence and depth of that image.
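For readers who want to try Depth-to-Image programmatically, a hedged sketch via diffusers follows; the `stabilityai/stable-diffusion-2-depth` model id and the `strength` value are assumptions for illustration:

```python
# Sketch of depth2img: the pipeline infers a depth map from the input
# image and conditions generation on it, so the scene layout is retained
# even when the prompt changes the content radically.

DEPTH_MODEL_ID = "stabilityai/stable-diffusion-2-depth"

def depth_to_image(
    image_path: str,
    prompt: str,
    out_path: str = "depth2img.png",
    strength: float = 0.7,
) -> None:
    import torch
    from diffusers import StableDiffusionDepth2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        DEPTH_MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")
    init_image = Image.open(image_path).convert("RGB")
    # strength controls how far the result may drift from the input;
    # the inferred depth map is what preserves the coherence of the scene.
    result = pipe(prompt=prompt, image=init_image, strength=strength).images[0]
    result.save(out_path)
```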

And finally, they point out that they have updated their text-guided inpainting diffusion model, which makes it quick and easy to modify parts of an image with precision.
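Inpainting follows the same pipeline pattern: you supply the image, a mask marking the region to repaint, and a prompt describing what should replace it. A sketch, assuming the `stabilityai/stable-diffusion-2-inpainting` checkpoint:

```python
# Sketch of text-guided inpainting with the updated 2.0 model.
# Assumed checkpoint: stabilityai/stable-diffusion-2-inpainting.

INPAINT_MODEL_ID = "stabilityai/stable-diffusion-2-inpainting"

def inpaint(
    image_path: str,
    mask_path: str,
    prompt: str,
    out_path: str = "inpainted.png",
) -> None:
    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        INPAINT_MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")
    image = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("RGB")  # white = region to repaint
    result = pipe(prompt=prompt, image=image, mask_image=mask).images[0]
    result.save(out_path)
```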

We will be able to test it directly in Dream Studio in a few weeks.

More information: Stability AI

Brian Adam
Professional blogger, vlogger, traveler and explorer of new horizons.