We didn’t see this coming: this is how Artificial Intelligence’s errors are changing the way we think

As Artificial Intelligence models advance and their use becomes more widespread, we are also becoming more aware of some of the risks inherent in these advances. Now a scientific study shows that one of those risks is that AI can infect us with its biases, which we then begin to use in our own reasoning.

A new study by the University of Deusto has revealed one danger of Artificial Intelligence that we had not seen coming until now.

Artificial Intelligence changes the way we think

New research carried out by psychologists Lucía Vicente and Helena Matute of the University of Deusto in Bilbao provides evidence that people can inherit biases from Artificial Intelligence (that is, systematic errors in its output) and carry them into their own decisions.

The surprising results achieved by Artificial Intelligence systems, which can, for example, hold a conversation much as a human does, have given this technology an image of high reliability that is cause for concern. More and more professional fields are adopting AI-based tools to support specialists’ decision-making and reduce their errors.

However, this technology is not without risk, because AI output can itself be biased. The data used to train these models reflects past human decisions, and if that data hides patterns of systematic error, the algorithm will learn and reproduce those errors. In fact, there is evidence that AI systems inherit and amplify human biases.
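The study itself involved no code, but the mechanism described above is easy to illustrate. What follows is a minimal, hypothetical Python sketch, entirely invented for this article: the diagnostic “marker”, the mislabeled 0.5–0.6 band and the threshold learner do not come from the research. It shows how a model fitted to systematically mislabeled data ends up encoding the labelers’ error:

```python
# A minimal, hypothetical sketch (not from the study): a model fitted to
# systematically mislabeled data learns to reproduce that error.
import random

random.seed(0)

def true_diagnosis(marker: float) -> str:
    # Ground truth: a marker value above 0.5 means "positive".
    return "positive" if marker > 0.5 else "negative"

def biased_human_label(marker: float) -> str:
    # Simulated human bias: borderline cases (marker in 0.5-0.6) are
    # systematically mislabeled "negative" even though they are positive.
    if 0.5 < marker <= 0.6:
        return "negative"
    return true_diagnosis(marker)

# The "training data" reflects past (biased) human decisions.
markers = [random.random() for _ in range(10_000)]
labeled = [(m, biased_human_label(m)) for m in markers]

# A crude learner: pick the decision threshold that best fits the labels.
best_t = max(
    (t / 100 for t in range(101)),
    key=lambda t: sum(
        ("positive" if m > t else "negative") == y for m, y in labeled
    ),
)
print(f"learned threshold: {best_t:.2f}")  # ~0.60, not the true 0.50

# The model now repeats the human error on new borderline cases.
print("marker=0.55 ->", "positive" if 0.55 > best_t else "negative")
```

The learned threshold settles near 0.60 rather than the true 0.50, so the toy model confidently repeats the human mistake on every borderline case, the same kind of systematic error the study’s participants were then exposed to.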

The most relevant finding of Vicente and Matute’s research is that the opposite effect can also occur: humans can inherit biases from AI. That is, not only does AI inherit its biases from human data, but people can also inherit those biases back from AI, risking a dangerous feedback loop. The results are published in Scientific Reports.

An infinite loop of prejudices and biases

In the series of three experiments carried out by these researchers, volunteers performed a medical diagnosis task. One group of participants was assisted during the task by a biased AI system (one exhibiting a systematic error), while the control group was not assisted. The Artificial Intelligence, the medical diagnosis task and the disease were all fictitious: the entire scenario was a simulation, designed to avoid interfering with real situations.

Participants assisted by the biased AI system made the same kind of mistakes as the AI, while the control group did not. The AI’s recommendations therefore influenced the participants’ decisions. The most significant finding of the research, however, was that after the interaction with the AI system these volunteers continued to imitate its systematic error when they went on to perform the diagnostic task without assistance.

In other words, participants who were first assisted by the biased AI replicated its bias in a context without that support, showing an inherited bias. This effect was not observed in the control group, who performed the task unassisted from the beginning.

These results show that biased information from an Artificial Intelligence model can have a lasting negative impact on human decisions. The finding that AI bias can be inherited by people points to the need for more psychological and multidisciplinary research on the interaction between AI and humans. It also points to the need for evidence-based regulation to ensure fair and ethical AI, one that considers not only the technical characteristics of AI but also the psychological aspects of AI-human collaboration.
