More than 1,100 people with ties to the technology sector, including Elon Musk and Steve Wozniak, have signed an open letter calling on all artificial intelligence (AI) labs to immediately stop training AI systems more powerful than GPT-4 for at least six months.
The letter argues that contemporary AI systems are becoming competitive with humans at general tasks, and that it is important to consider the potential effects of allowing machines to flood our information channels with propaganda and falsehood, as well as of automating away all jobs, including the fulfilling ones. The signatories also question whether we should develop non-human minds that could eventually outnumber, outsmart, and replace us, and whether we should risk losing control of our civilization.
Artificial intelligence versus human talent, a race out of control
The letter also highlights the lack of planning and management in the development of AI systems, noting that AI labs have become “locked in an out-of-control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control”. The signatories argue that the development of powerful AI systems should be carried out only when we are sure that their effects will be positive and their risks will be manageable.
A public and verifiable pause
The letter also asks that the proposed pause be “public and verifiable, and include all key stakeholders”. If such a pause “cannot be enacted quickly, governments should step in and institute a moratorium”, it adds. Many of the signatories are experts in artificial intelligence, but the list also includes people who are not directly linked to the technology field.
The lack of signatures from OpenAI and Anthropic
In the context of this letter, it is worth noting that no one from OpenAI, the team behind the GPT-4 large language model, signed the message, nor did anyone from Anthropic, whose team split off from OpenAI to build a “safer” AI chatbot. Although some of the signatories work at leading technology companies, the absence of signatures from OpenAI and Anthropic suggests an internal debate within the AI community about the need for a pause in the development of AI systems more powerful than GPT-4.
The future of AI
The open letter raises important questions about the future of AI and the role we should play in its development. Although AI has the potential to improve our lives in many ways, it also poses significant risks. The lack of understanding and control over the development of AI systems more powerful than GPT-4 is a real risk, and it is important that the AI community reflect carefully on how to move forward in this field, weighing the enormous capacity for influence these systems have demonstrated in recent times.
The full text of the open letter can be consulted on the Future of Life Institute website, where any visitor can add their name to the list of signatories.