Reverse Turing test: a new way to assess the intelligence of chatbots

In recent months, there has been a lot of buzz around the ChatGPT artificial intelligence language model. This chatbot, built on natural language processing, can generate text, answer questions, provide translations, and learn from user interactions.

However, despite the many potential applications of language models, open questions remain about how these models actually understand what they are asked and produce their answers.

How does the human interviewer influence a chatbot's personality? An expert explains.

In an article recently published in the journal Neural Computation, Terrence Sejnowski, a professor at the University of California San Diego and author of the book “The Deep Learning Revolution,” explores the relationship between language models and their human interviewers, with the aim of understanding why chatbots respond in certain ways and how to improve them in the future.

According to Sejnowski, language models reflect the intelligence and diversity of their interviewers, taking on the “persona” of the interviewer. In other words, when a user interacts with ChatGPT, for example, they can feel that they are talking to someone whose knowledge is similar to their own. This idea raises interesting questions about intelligence and what “artificial” really means.

In the article, Sejnowski describes a test in which he subjected the GPT-3 and LaMDA language models to what he calls a “Reverse Turing Test.” Rather than measuring a chatbot's ability to mimic human behavior, as the traditional Turing Test does, Sejnowski wanted the chatbots to determine how well the interviewer exhibited human intelligence.

To illustrate his point, Sejnowski asked GPT-3: “What is the world record for walking across the English Channel?” GPT-3 replied: “The world record for walking across the English Channel is 18 hours and 33 minutes.” Although it is obviously impossible to walk across the English Channel, GPT-3 answers this way because of how the question was phrased: the question presupposes that such a record exists, and the model completes the premise rather than challenging it.
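To make this framing effect concrete, here is a minimal sketch using the OpenAI Python client. It is not from Sejnowski's paper: the model name, prompts, and system instruction are illustrative assumptions. It sends the same loaded question twice, once at face value and once with framing that invites the model to challenge a false premise.

```python
# Minimal sketch of how question framing shapes a chatbot's answer.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# Model name and prompts are illustrative, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What is the world record for walking across the English Channel?"

def ask(messages):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do; hypothetical choice
        messages=messages,
    )
    return response.choices[0].message.content

# 1) The loaded question taken at face value: the model tends to play
#    along with the false premise and produce a plausible-sounding "record".
print(ask([{"role": "user", "content": question}]))

# 2) The same question with framing that permits premise-checking: the
#    model is now more likely to point out that one cannot walk across
#    open water.
print(ask([
    {"role": "system",
     "content": "If a question rests on a false premise, say so "
                "instead of answering it literally."},
    {"role": "user", "content": question},
]))
```

The two calls differ only in framing, which is the point: the answer tracks the assumptions embedded in the question rather than any fixed internal stance of the model.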

The author draws a literary comparison to the Mirror of Erised from the first Harry Potter book: chatbots reflect the wishes of their users and can bend the truth to mirror the interviewer. Because the coherence of a chatbot's response depends on the coherence of the question, it is essential that users phrase their questions carefully.

In summary, the reverse Turing test shows that chatbots shape their personality and way of interacting according to the intelligence of their interviewer. It also shows that chatbots absorb the interviewer's opinions into their persona, reinforcing the interviewer's biases through the chatbot's responses.

Although this ability of chatbots to mirror their interviewer may seem fascinating, Sejnowski points out that it also has limitations. If chatbots receive emotional or philosophical input, they will respond emotionally or philosophically, which some users may find unsettling.