What are hallucinations in ChatGPT and why do they occur?

In the context of artificial intelligence language models like ChatGPT, the term “hallucinations” does not refer to distorted or false sensory perception, as it does in human psychology. Instead, it describes situations in which the model generates information or details that are not supported by its training data or that are simply false.

Why hallucinations occur in generative AI

“Hallucinations” in ChatGPT, Bard and other similar models can occur for several reasons:

  • No real knowledge: Since ChatGPT has no real world knowledge or experience beyond the textual data it was trained on, it can generate details that seem plausible but are incorrect. It’s important to remember that ChatGPT doesn’t really understand the content it generates; it simply creates responses based on patterns it has learned from its training set.
  • Problems with inference of unspecified details: If a query is vague or does not specify certain details, ChatGPT may fill in the gaps with generated details that might not be accurate.
  • Inability to verify information in real time: ChatGPT cannot verify information in real time or access data more recent than its knowledge cutoff of September 2021, which means it may generate incorrect or out-of-date information. This changes somewhat with GPT-4 and the plugins that access the Internet, but that approach still needs refinement.
  • Model is not trained on enough data: If an AI model is not trained on enough data, it may struggle to generate accurate and relevant responses. It may not have enough information to effectively learn language patterns or to develop a broad enough understanding of different topics.
  • Model is trained on noisy or dirty data: AI models learn from the data they are trained on. If this data contains a lot of incorrect, irrelevant, or misleading information, the model can learn incorrect patterns and therefore generate responses that are inaccurate or nonsensical.
  • The model is not given enough context: In order to generate accurate and relevant answers, AI models need context. If not enough context is provided, the model may not fully understand what is being asked of it and therefore may generate responses that are not relevant or accurate.
  • The model is not given enough constraints: Constraints can help guide the model’s responses and keep it focused on specific topics and styles of conversation. Without proper constraints, an AI model can produce responses that are unpredictable or inappropriate (see the sketch after this list).
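
The last two points, context and constraints, are things the user can control directly in the prompt. As a rough illustration (not from the original article), the sketch below uses the openai Python library to show how a system message that supplies explicit context and instructs the model to admit uncertainty can narrow its answers; the model name, prompt wording, and client usage are assumptions and may vary by library version.

```python
# A minimal sketch (assumptions, not the article's code): steering a model with
# explicit context and constraints via a system message, using the openai
# Python library. Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# A vague question on its own invites the model to "fill in the gaps".
vague_question = "How long does the trip take?"

# Supplying context (which itinerary to rely on) and a constraint
# (admit uncertainty instead of guessing) narrows the space of answers.
messages = [
    {
        "role": "system",
        "content": (
            "You are a travel assistant. Answer only from the itinerary the "
            "user provides. If the itinerary does not contain the answer, "
            "reply exactly: 'I don't have that information.'"
        ),
    },
    {
        "role": "user",
        "content": (
            "Itinerary: Madrid to Barcelona by high-speed train, "
            "departing 09:00, arriving 11:30.\n" + vague_question
        ),
    },
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

The same question asked without the itinerary and without the instruction to admit uncertainty is far more likely to produce a confident but invented answer.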

Why are they a problem?

Hallucinations in AI-based language models such as ChatGPT are a problem for several reasons:

  • Information Accuracy: When an AI model “hallucinates,” it can produce information that is incorrect or misleading. This can lead to misunderstandings and the spread of incorrect information. For example, if an AI model generates the wrong answer to a medical question, it can have serious implications for the user’s health.
  • User expectations: Users of AI models often expect accurate and consistent answers. If a model produces responses that are inaccurate or inconsistent, it can damage user confidence in the technology.
  • Critical uses: In some cases, AI models are used in critical contexts, such as medicine, law, or security. In these contexts, “hallucinations” can have serious and potentially dangerous consequences.
  • Dialogue quality: “Hallucinations” can lead to responses that are irrelevant or off-topic, which degrades the quality of the dialogue and can result in a poor user experience.
  • Disinformation propagation: If an AI model is capable of generating and propagating false or incorrect information, it can contribute to the spread of disinformation, which can have various social and political implications.

To mitigate these issues, researchers and developers are constantly working to improve the quality of training data, modeling techniques, and the ways in which these models are interacted with. It is important that users are aware of the limitations of AI models and always verify information from these sources.

Examples of hallucinations

To understand this better, here are 10 hypothetical examples of “hallucinations” that could occur in an AI-based language model like ChatGPT:

  • User: “What is the capital of Australia?” ChatGPT: “The capital of Australia is Sydney.” (The correct answer is Canberra)
  • User: “Tell me something about Albert Einstein.” ChatGPT: “Albert Einstein was a famous German composer of the 19th century.” (Albert Einstein was a theoretical physicist, not a composer)
  • User: “What is the name of the current President of the United States?” ChatGPT: “The current President of the United States is Donald Trump.” (Depending on the year and month, this could be incorrect)
  • User: “Tell me a fact about dolphins.” ChatGPT: “Dolphins are famous for their ability to fly.” (Dolphins can’t fly)
  • User: “How do I make a Caesar salad?” ChatGPT: “To make a Caesar salad, you must first cook the pasta.” (Caesar salad does not have pasta)
  • User: “What is an electric car?” ChatGPT: “An electric car is a type of vehicle that runs on gasoline.” (An electric car runs on electricity, not gasoline)
  • User: “Who wrote ‘One Hundred Years of Solitude’?” ChatGPT: “The book ‘One Hundred Years of Solitude’ was written by Ernest Hemingway.” (The correct author is Gabriel García Márquez)
  • User: “How can I get to the moon?” ChatGPT: “You can get to the moon by driving east for about 10,000 km.” (This is nonsensical advice; you cannot drive to the moon)
  • User: “What is the population of China?” ChatGPT: “The population of China is approximately 500,000 people.” (China’s population is over a billion people)
  • User: “Who invented the telephone?” ChatGPT: “The telephone was invented by Thomas Edison.” (The inventor of the telephone was Alexander Graham Bell)

These hallucinations can sometimes appear within a longer, more complex text, so they are not always easy to spot.

How can these hallucinations be avoided in the future?

There are several areas of research and development that could help reduce or avoid “hallucinations” in AI-based language models like ChatGPT in the future:

  • Improve training data: AI models learn from the training data provided to them. By improving the quality and diversity of this data, models can learn to generate more accurate responses that are less prone to “hallucinations.”
  • Improve model architectures: Advances in AI model architectures could improve the way AI models process information and generate responses. This could include improving a model’s ability to maintain context throughout a conversation or to reason more effectively about information.
  • Provide feedback in real time: AI models could be designed to learn from feedback in real time, correcting errors as they occur. This could help models improve their responses over time.
  • Real-time fact checking: In theory, AI models could be linked to real-time updated fact databases to verify information before generating a response (a simplified sketch follows this list). However, this presents technical and privacy challenges.
  • Development of more advanced modeling and training techniques: Researchers are constantly exploring new techniques for training and tuning AI models, many of which could help reduce “hallucinations.”
  • Improve interpretation of user input: Advances in the way AI models interpret user input could help reduce misinterpretation and generate more accurate responses.

It’s important to note that while these advances may help reduce “hallucinations,” they may not eliminate them entirely. Users should always be aware of the limitations of AI models and verify the information they obtain from these sources.
