Sam Altman, the CEO of OpenAI, acknowledges that the GPT language model is biased, although it was significantly improved in the GPT-3.5 and GPT-4 versions.
In an interview with Lex Fridman, an artificial intelligence researcher at MIT, Altman spoke about a variety of topics, including Elon Musk’s criticism of OpenAI’s AGI security research, the likelihood of AI risks, and the topic of jailbreaking. » to which your systems may be subjected.
OpenAI CEO accepts criticism and acknowledges that there is much to be done in AI
Altman admitted that GPT is biased and probably always will be. Despite this, he expressed appreciation to critics who note the advances made by OpenAI, noting that much remains to be done.
The point man behind ChatGPT also responded to Elon Musk’s criticism of OpenAI’s AGI security research. Altman was sympathetic to Musk’s concerns, but felt that Musk should focus more on the difficult job of addressing AI security issues.
The OpenAI CEO emphasized the distinction between artificial general intelligence (AGI) and artificial intelligence (AI). AGI is a machine that can understand or learn any intellectual task that a human can do while AI is a machine that excels at a specific task.
Altman also spoke about the likelihood of AI risks. He acknowledged that many of the predictions about AI in terms of capabilities and security issues turned out to be inaccurate. Related to OpenAI’s goal of giving people control over models while having open boundaries, he also talked about jailbreaking.
“Jailbreaking” is the name given to the process of modifying a device to allow the execution of orders not authorized by the manufacturer. In the context of AI, when we talk about jailbreaking, it refers to modifying an AI model to act in an unwanted way or to bypass the limits that have been imposed on it.
The top executive of the artificial intelligence firm explained that the existence of jailbreak shows that the problem of users having a lot of control and models behaving as they want within wide limits has not yet been solved. He noted that they want users to be in control, but they don’t want them to have to jailbreak. Altman noted that the sooner the problem is resolved, the less need to jailbreak.
In addition to AI-related topics, more general topics such as the meaning of life, the difference between fact and fiction, and advice for parents were also discussed in the interview. Altman emphasized the value of intellectual honesty, accepting that you can make mistakes and grow from them.