Artificial intelligence for writing scientific articles: a problem in need of a solution

A study published in the prestigious scientific journal Nature warns about the dangers of using OpenAI’s ChatGPT text generator, which can propagate numerous errors about scientific research. Its authors, led by Eva A. M. van Dis of the University of Amsterdam, argue that open-source alternatives whose behavior can be scrutinized need to be developed to counter the lack of transparency and the spread of misinformation.

The study notes that most cutting-edge conversational AI technologies are proprietary products of a small number of large tech companies, which makes it difficult to achieve transparency or to verify how the programs actually work.

The study authors used ChatGPT to complete a series of questions and tasks related to the psychiatry literature and found that the program often generated false and misleading text, which could lead researchers to incorporate biased or incorrect information into their work.

Instead of eliminating large language models, the study authors suggest managing the risks associated with their use. To do this, they propose keeping humans in the decision-making process and adopting explicit policies that require transparency about the use of artificial intelligence in preparing materials that may become part of the published record. At the same time, they argue that the proliferation of large proprietary language models, with no access to their source code or underlying data, is a danger in itself, and that a significant effort from entities outside the private sector is needed to drive independent non-profit projects that develop advanced, transparent and democratically controlled artificial intelligence technologies.
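As a concrete illustration of what keeping humans in the loop can look like, one simple safeguard is to verify every reference a language model suggests before it enters a manuscript. The sketch below checks whether a DOI resolves to a real record using the public Crossref REST API (api.crossref.org); the surrounding workflow, the function name and the placeholder DOI are illustrative assumptions, not anything proposed in the study itself.

```python
# Minimal human-in-the-loop sketch: before a model-suggested citation
# enters a manuscript, check that its DOI resolves to a real record.
# Uses the public Crossref REST API (https://api.crossref.org); the
# workflow and names around it are illustrative assumptions.
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a registered record for this DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp).get("status") == "ok"
    except urllib.error.HTTPError:
        return False  # a 404 means no such DOI is registered

# A reviewer would run this over every DOI the model produced.
# The DOI below is a deliberately fake placeholder.
for doi in ["10.1234/hypothetical-example"]:
    verdict = "found" if doi_exists(doi) else "NOT FOUND - verify by hand"
    print(doi, "->", verdict)
```

A check like this only catches fabricated identifiers; a reference with a valid DOI can still be cited for a claim it does not support, which is why the authors insist on human review rather than automated filtering alone.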

The article, discussed on ZDNet, does not address whether an open-source model could solve the well-known “black box” problem of artificial intelligence: the opacity of deep learning models with numerous layers of adjustable parameters, or weights.
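To see why open access alone does not dissolve the black box, consider a toy model: every one of its weights can be printed and inspected, yet the raw numbers carry no human-readable explanation of its behavior. The sketch below, written in Python with PyTorch, illustrates that point; the model shape and layer sizes are arbitrary choices for the demo, not anything from the article.

```python
# Toy illustration of the "black box" problem: even in a tiny network,
# every weight is a plain, inspectable number, yet the list of values
# says nothing human-readable about why an input maps to an output.
# Requires PyTorch; the architecture is an arbitrary choice.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64),  # layer 1: 10 inputs -> 64 hidden units
    nn.ReLU(),
    nn.Linear(64, 64),  # layer 2: 64 -> 64 hidden units
    nn.ReLU(),
    nn.Linear(64, 1),   # output layer: 64 -> 1
)

# Count the adjustable parameters (weights and biases).
n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params}")  # 4929 for this toy model

# Every parameter is fully visible...
first_weights = model[0].weight.detach()
print(first_weights[0, :5])  # ...but the raw values are uninterpretable.
```

GPT-class models scale this from a few thousand parameters to hundreds of billions, so publishing the weights makes a model auditable in principle but not self-explanatory.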

The problem is that many people use ChatGPT and similar tools as if they were information search engines, and at the moment they are not. They are great for processing text, for saving work in some sectors, and for performing the dozens of tasks that we cover at @chatgpt_esp, but from there to trusting the information they produce there is a big leap.