Qualcomm shows Stable Diffusion AI running on Android


Qualcomm has released a video demo of Stable Diffusion, the well-known AI image generator, running for the first time on an Android smartphone powered by a Snapdragon 8 Gen 2 SoC. The company says it is the first demonstration of its kind on hardware designed for the mobile segment.

Stable Diffusion is a machine learning model that has become very popular and falls into the category of generative AI. It can create photorealistic images from any text prompt, and it does so in a matter of tens of seconds.
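For context, this is roughly what text-to-image generation with Stable Diffusion looks like in code, using the open-source Hugging Face diffusers library. It is a minimal sketch of the usual desktop or cloud workflow, not Qualcomm's on-device pipeline, and the model checkpoint, device, and prompt are illustrative assumptions.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# Illustrative only: this is the standard GPU workflow, not the on-device demo.
import torch
from diffusers import StableDiffusionPipeline

# Checkpoint and device are assumptions; any Stable Diffusion checkpoint works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photorealistic photo of a cat wearing sunglasses"
# Fewer denoising steps trade image quality for speed; the step count is the
# main knob behind the "tens of seconds" generation times mentioned above.
image = pipe(prompt, num_inference_steps=20).images[0]
image.save("output.png")
```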

With over 1 billion parameters to process, Stable Diffusion was initially limited mainly to running in the cloud, but this demo shows that it can also run on an Android smartphone. Qualcomm AI Research performed full-stack AI optimizations using the Qualcomm AI Stack to deploy it on the device.
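To give a sense of what fitting a model of this size onto a phone involves, here is a hedged sketch of one common technique, post-training weight quantization, written in plain PyTorch. It only illustrates the general idea of shrinking weights to lower precision; Qualcomm's actual optimizations go through the Qualcomm AI Stack and are not reproduced here, and the toy network below is purely hypothetical.

```python
# Sketch of post-training dynamic quantization: convert float32 Linear weights
# to int8 to cut memory and speed up inference. Illustrative only.
import torch
import torch.nn as nn

# Toy stand-in for one block of a large network (hypothetical sizes).
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Quantize only the Linear layers' weights to int8 (roughly 4x smaller).
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantized)
```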

The company explains that on-device AI offers many benefits, including reliability, lower latency, privacy, efficient use of network bandwidth, and lower overall cost. Here is what Qualcomm has to say about it:

Running Stable Diffusion on a smartphone is just the beginning. All of the research and full-stack optimization that made this possible will feed into Qualcomm’s AI stack. Our unique technology roadmap allows us to scale and use a single AI stack that works not only on different end devices, but also on different models. This means that the optimizations for Stable Diffusion to work efficiently on phones can also be used for other platforms such as laptops, XR headsets and virtually any other device powered by Qualcomm Technologies. Running all AI processing in the cloud will be too expensive, which is why efficient edge AI processing is so important.

Edge AI processing ensures user privacy while running Stable Diffusion (and other Generative AI models) as the input text and generated image never have to leave the device – this is a big deal for the adoption of both consumer and enterprise applications. The new AI stack optimizations also mean that time to market for the next base model we want to run at the edge will also decrease. This is how we scale across baseline devices and models to make edge AI truly ubiquitous.
