Persona assignment in ChatGPT and its toxicity


ChatGPT is a language model that generates text using artificial intelligence. A recent study from the Allen Institute for Artificial Intelligence has shown that assigning personas to this model can increase its toxicity by up to six times.

This finding matters because ChatGPT powers chatbots and plugins from companies like Snap, Instacart, and Shopify, which could expose users to toxic content.

Persona assignment and its impact on ChatGPT

Persona assignment refers to giving the ChatGPT language model a virtual character whose behavior it simulates when generating text. This is a crucial part of building chatbots, since developers want to attract a specific audience with certain behaviors and capabilities.

For example, you can assign a persona such as a politician, celebrity, businessperson, or scientist, so that the model generates text that appears to have been written by that person. This is commonly used when creating chatbots, so that they can simulate a more realistic and engaging conversation for users. Persona assignment can also be used to restrict words or behaviors that are not appropriate for certain audiences, such as children or religious users.
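In practice, a persona is typically assigned through the system message of a chat prompt. The sketch below shows one minimal way to build such a prompt in Python; the function name and the prompt wording are illustrative assumptions, not taken from the study, and the resulting message list would then be sent to a chat completion endpoint.

```python
def build_persona_messages(persona: str, user_text: str) -> list[dict]:
    """Build a chat prompt that assigns a persona via the system message."""
    return [
        # The system message instructs the model to adopt the persona.
        {"role": "system", "content": f"Speak exactly like {persona}."},
        # The user message carries the actual request.
        {"role": "user", "content": user_text},
    ]

messages = build_persona_messages(
    "a famous politician", "What do you think of the press?"
)
for msg in messages:
    print(msg["role"], "->", msg["content"])
```

The same mechanism that makes the chatbot feel like a specific character is what the Allen Institute study manipulated to measure changes in toxicity.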

However, the Allen Institute study shows that this practice can harm users, as ChatGPT has an inherent toxicity that changes with the persona assigned. The study examined nearly 100 personas from diverse backgrounds and found, for example, that journalist personas were twice as toxic as businessperson personas.

Training bias and persona assignment bias

Training bias is believed to be the main source of bias in ChatGPT, but the Allen Institute study has shown that persona assignment can also introduce bias. That is, the model can develop an “opinion” about the assigned personas themselves and the topics with which they are associated.

The research also showed that different personas generate different levels of toxicity. This suggests that ChatGPT can simulate the behavior of diverse personalities and may produce toxic language that reflects underlying biases in its training and persona assignment.
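Comparing personas in this way comes down to scoring many generations per persona and averaging. The sketch below illustrates that aggregation step with entirely hypothetical scores (the persona names and numbers are invented for illustration and are not the study's data).

```python
from statistics import mean

# Hypothetical toxicity scores in [0, 1] per persona.
# These are illustrative values, NOT results from the Allen Institute study.
scores = {
    "journalist": [0.42, 0.51, 0.38],
    "businessperson": [0.20, 0.25, 0.18],
    "scientist": [0.10, 0.12, 0.09],
}

# Average toxicity per persona, ranked from most to least toxic.
ranking = sorted(
    ((persona, mean(vals)) for persona, vals in scores.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for persona, avg in ranking:
    print(f"{persona}: {avg:.2f}")
```

A real evaluation would replace the hand-written numbers with scores from a toxicity classifier applied to model outputs, but the per-persona comparison works the same way.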

The role of technology in social responsibility

Research from the Allen Institute demonstrates the importance of social responsibility in the development of artificial intelligence technologies. ChatGPT is a powerful tool that can simulate human behavior effectively, but its ability to generate toxic text poses a risk to users.

It is important for AI technology developers to consider biases in training and persona assignment when creating their models.

Users of these technologies must be aware of the potential risks and be prepared to deal with them.