FRIDA (Framework and Robotics Initiative for Developing Arts) is a research project at the Robotics Institute of Carnegie Mellon University.
The goal is to mix human and robotic creativity, using AI models to generate paintings from text descriptions and images, with some human input along the way.
On the project's Twitter account the team shows off the impressive results, where you can study the collaboration between humans and machines, inaccuracies and errors included.
Each painting takes several hours to complete, it's true, but I could easily hang many of them in my living room.
We will see it this year in London at the International Conference on Robotics and Automation (ICRA 2023), where FRIDA has been accepted to show her abilities to the rest of the world.
Capturing the dynamic nature of painting is a challenge for a robot. Most existing work on robot painting models the process like an analog print: the input image is also the final target, to be reproduced in paint as faithfully as possible. FRIDA is not like that; it is not designed to make the result identical to the input.
FRIDA uses two axioms of the artistic process:
– Art has high-level semantic objectives.
– Art is a dynamic process that needs to constantly adapt and reconsider its objectives during the creation process.
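The second axiom implies a loop of planning, painting, and re-observing the real canvas before planning again, since physical paint never lands exactly as predicted. A minimal sketch of that idea, with the canvas reduced to a single value and all function names (`plan_strokes`, `execute`, `paint`) being hypothetical illustrations rather than FRIDA's actual API:

```python
import random

random.seed(0)

def plan_strokes(goal, canvas, n):
    """Hypothetical planner: each stroke closes part of the remaining gap."""
    step = (goal - canvas) / (n + 1)
    return [step] * n

def execute(canvas, stroke):
    """Hypothetical robot action: real paint is imprecise, so add noise."""
    return canvas + stroke + random.uniform(-0.02, 0.02)

def paint(goal, canvas=0.0, n_iterations=5, strokes_per_iter=3):
    """Toy replanning loop: after each batch of strokes, replan from the
    *observed* canvas state (a camera, in the real system), not from the
    state the original plan predicted."""
    for _ in range(n_iterations):
        for stroke in plan_strokes(goal, canvas, strokes_per_iter):
            canvas = execute(canvas, stroke)
    return canvas

final = paint(goal=1.0)
print(abs(final - 1.0) < 0.2)  # close to the goal despite noisy strokes
```

Because every iteration replans from the canvas as it actually is, the accumulated stroke noise gets corrected instead of compounding, which is the point of treating art as a dynamic process.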
To achieve high-level semantic goals, they design loss functions that compare the semantic goal (or user input) with the current canvas. Instead of computing the loss on pixel values, they use feature values from pre-trained deep neural networks. They also create a simulated, differentiable painting environment that lets the planner optimize brush actions directly toward the semantic goals. This simulation environment is built by modeling the actual brush strokes produced by the robot.
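To see why feature-space losses differ from pixel losses, here is a small sketch. A tiny random convolution plus pooling stands in for the pre-trained network (the real system would use features from a large pre-trained model, which this toy does not attempt to reproduce); the function names are illustrative, not FRIDA's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_features(img, weights):
    """Stand-in for a pre-trained network: one conv layer + ReLU +
    global average pooling, yielding a small feature vector."""
    n_filters, k, _ = weights.shape
    h, w = img.shape
    out = np.zeros((n_filters, h - k + 1, w - k + 1))
    for f in range(n_filters):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[f, i, j] = np.sum(img[i:i+k, j:j+k] * weights[f])
    out = np.maximum(out, 0.0)        # ReLU
    return out.mean(axis=(1, 2))      # global average pooling

def semantic_loss(canvas, target, weights):
    """Squared distance between canvas and target in feature space,
    rather than between raw pixels."""
    fc = toy_features(canvas, weights)
    ft = toy_features(target, weights)
    return float(np.sum((fc - ft) ** 2))

weights = rng.standard_normal((8, 3, 3))  # 8 random 3x3 filters
target = rng.random((16, 16))             # stand-in goal image
canvas = np.zeros((16, 16))               # blank canvas

print(semantic_loss(canvas, target, weights))  # large: blank canvas is far from goal
print(semantic_loss(target, target, weights))  # 0.0: identical in feature space
```

Because the loss lives in feature space, two images can score as "close" without matching pixel for pixel, which is exactly what lets the planner pursue a semantic goal rather than an exact reproduction. In the real system the simulated environment is also differentiable, so brush actions can be optimized against this loss by gradient descent.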
The PDF document with the full description of FRIDA is available at this link.