Content generation through artificial intelligence (AI) has impressed many with its high level of detail and accuracy. However, in the case of images, three issues remain to be resolved to make them look more real: the hands, the feet and the teeth.
These parts of the body usually contain many inaccuracies in the photographs created by these tools, and they are what allow us to distinguish an image generated by AI from one of real people.
That detail proved essential in exposing an attempt to spread false information in France, where AI-generated images of protests were published and passed off as real; thanks to errors such as hands with six fingers, users discovered the truth.
The case raises an alarm about a practice that could easily be repeated: using these new tools to manipulate news or facts. For now, though, AI still struggles to create hands and teeth.
Hands and teeth often have errors due to inaccuracies in the data obtained by these tools.
Why artificial intelligence gets hands wrong
To understand the origin of these failures, it is essential to know that these AIs work on a machine learning model, trained on data drawn from various databases and content collected from the Internet.
So the designs they generate are not products of imagination; they depend strictly on the information available to the model. For example, ChatGPT's training data only goes up to 2021, so for now it does not know about anything that happened after that date.
In the case of tools that create images, such as Stable Diffusion, DALL-E 2 and Midjourney, when you ask one to design a dog in a certain way, it draws on the dog images it learned from during training and generates the final result.
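As an illustration only (not taken from the article), this is roughly how such a prompt-to-image request looks when run against the open-source Stable Diffusion model through Hugging Face's diffusers library; the specific model name and prompt are example assumptions:

```python
# Minimal sketch: generating an image from a text prompt with Stable Diffusion.
# Assumes the diffusers library, a GPU, and the example model below are available.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion pipeline (example model identifier)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The prompt describes the desired scene; the model composes it from what it
# learned during training, which is why poorly represented details (like hands)
# often come out distorted.
prompt = "a golden retriever sitting in a park, photorealistic"
image = pipe(prompt).images[0]
image.save("dog.png")
```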
When it comes to hands and feet specifically, something particular happens. "It is generally understood that, within AI data sets, human images show hands less conspicuously than faces. Hands also tend to be much smaller in the source images, as they are rarely seen in large form," a spokesperson for Stability AI told BuzzFeed News.
This means the information the AI has is not clear enough. Amelia Winger-Bearskin, an artist and associate professor of AI and the arts at the University of Florida, believes that this technology does not really understand what a hand is, or at least not how it connects anatomically to a human body.
In the data these technologies are fed, hands are usually grasping something or resting on something, so they are not seen in full; only the fingers, or part of them, are visible. The AI reproduces the same thing in its own content, hence the messy results.
The solution would be for the technology to use, as a reference, images in which the extremities are free and fully visible, so that it has enough information to replicate them and to understand how they connect to the body and its overall composition, however simple that may seem to us.
If that problem is resolved, it will become much harder to distinguish a real image from one created by artificial intelligence, with all the good and bad that this brings.