Artificial intelligence now assists in fields ranging from the trivial to the critical: the same class of technology that erases objects from a photo can also help detect cancerous tumors. In domains such as medicine, however, a key question arises: when should a person trust a machine's suggestion? MIT researchers have begun work on a method to help people fine-tune their criteria for trusting AI.
The work builds on the mental models we humans use to decide when to trust other people's judgment, and although the area needs further study, it is a promising start.
Refining people's criteria for trusting AI
The mental models at the heart of the MIT work can be explained with an example. Picture a veterinarian examining a cat to diagnose a health problem. The vet reviews all the tests performed on the animal and, if doubts remain, consults a colleague. Over time, the vet has built a mental model of that colleague's strengths and biases, based on what the vet knows about them and on past experience.
In other words, the veterinarian has an internal criterion for when, and on which issues, to trust that colleague. MIT proposes replicating this process between humans and artificial intelligence, so that people can fine-tune their sense of when to trust an AI's suggestion.
In their tests, the researchers split participants into three groups, one of which went through the onboarding phase MIT designed to help humans build mental models of an AI. Participants in this group were given a passage of text and a question whose answer appeared in that passage. They could either answer the question themselves or let the AI answer it. Afterward, they were shown an explanation of why the AI chose its answer, that is, how it works, along with a couple of follow-up examples to reinforce the idea.
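The round just described can be sketched roughly as follows. This is a hypothetical illustration, not MIT's actual system: the keyword-matching "AI" and the function names are stand-ins invented for the example.

```python
# Hypothetical sketch of one onboarding round: the participant either
# answers a reading-comprehension question or delegates to the AI, then
# sees an explanation of how the AI chose its answer.

def ai_answer(passage: str, question_keyword: str) -> tuple[str, str]:
    """Toy stand-in 'AI': returns the first sentence containing the
    keyword, plus a rationale string shown to the participant afterward."""
    for sentence in passage.split(". "):
        if question_keyword.lower() in sentence.lower():
            rationale = f"Matched the keyword '{question_keyword}' in this sentence."
            return sentence.strip(), rationale
    return "No answer found.", "No sentence mentioned the keyword."

def onboarding_round(passage, question_keyword, delegate_to_ai, human_answer=None):
    """One round: the participant answers or delegates; the AI's rationale
    is revealed either way, so the participant can refine their mental model."""
    answer, rationale = ai_answer(passage, question_keyword)
    chosen = answer if delegate_to_ai else human_answer
    return {"answer": chosen, "ai_rationale": rationale}

passage = "The cat was lethargic. Blood tests showed elevated thyroid levels."
result = onboarding_round(passage, "thyroid", delegate_to_ai=True)
print(result["answer"])        # the sentence the toy AI selected
print(result["ai_rationale"])  # the explanation the participant sees
```

The key design point mirrored here is that the rationale is revealed after every round, which is what lets participants gradually learn where the AI tends to be right or wrong.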
Of the remaining groups, one received no follow-up examples and the other received no training at all. The results showed that 64% of the trained participants had a clearer picture of when to rely on the AI. The other two groups, however, were not far behind, at 54% and 50%. In other words, mental models are forming, but MIT's teaching design needs further refinement before it sharpens people's judgment substantially more.