Regulation of ChatGPT and other AI in the European Union

Have you ever wondered how the rules for the use of artificial intelligence are being set in Europe? Let’s take a look at the details.

Artificial intelligence, or AI, is a technology that has captured the attention of both the general public and businesses in recent months. Its applications, such as ChatGPT and Midjourney, have aroused curiosity and fascination. From writing essays and poems in a matter of seconds to generating stunning images, AI has shown its potential for advancement in various fields.

However, this technology also presents challenges and risks. On the one hand, AI can save lives, for example by improving medical evaluations. On the other hand, there is a danger that it will be used inappropriately or maliciously, such as by authoritarian regimes seeking to perfect mass surveillance or spread false information. That is why the European Union has decided to act and establish regulations that make it possible to reap the benefits of AI while protecting people’s fundamental rights and safety.

Setting Constraints for a Proper Balance

The European Union has launched a process to create a law that prevents abuses in the use of artificial intelligence. The objective is to find a balance between innovation and the protection of people. For the last two years, proposals have been discussed and positions negotiated between the Member States of the EU and the European Commission.

The central idea of the Commission proposal is to define a list of AI activities considered “high risk”. This would cover systems used in sensitive areas such as critical infrastructure, education, human resources, public order and migration management. For these cases, human oversight of the AI, technical documentation and a risk management system would be required. Each EU Member State would have a supervisory authority to ensure compliance with these rules.

However, members of the European Parliament have differing opinions on the criteria for deciding which AI applications should be considered “high risk”. Some propose a narrower definition, limited to applications that pose a threat to security, health or fundamental rights. Others, such as the Greens group, oppose this limitation and seek a broader approach.

ChatGPT Regulation

In the case of generative artificial intelligence, such as ChatGPT, specific obligations similar to those applied in the “high risk” category are being considered. Lawmakers also want AI companies to put safeguards in place against illegal content and against the unauthorized use of copyrighted works to train their algorithms. The Commission proposal already establishes the obligation to notify users when they are interacting with an AI system, as well as the requirement to indicate when an image has been artificially generated.

Importantly, outright bans on AI applications would be rare and would apply only to those that go against Europe’s core values, such as mass surveillance and citizen scoring systems. Lawmakers are also looking to add a prohibition on AI emotion recognition and to remove exceptions that allow law enforcement to carry out remote biometric identification of people in public places. The aim is also to prevent photos scraped from the internet from being used to train algorithms without the authorization of the people involved.

Protection, Innovation and Ethics in Artificial Intelligence

The regulation of artificial intelligence in the European Union is an important step towards the protection of the rights and security of people in an increasingly digitized world. While AI offers great potential for advancement in various fields, it also poses ethical challenges and risks to society. It is crucial to find the right balance to reap the benefits of AI without compromising fundamental values and rights.

The EU’s approach focuses on setting restrictions for high-risk applications and holding AI manufacturers responsible. At the same time, it seeks to foster innovation and protect copyright and the integrity of information. The objective is to create a solid legal framework that guarantees transparency, accountability and respect for European values.

Ultimately, the regulation of artificial intelligence is an ongoing and constantly evolving challenge. As technology advances and new applications emerge, it is critical to review and adapt regulations to maintain a balance between protecting individual rights and promoting innovation. The European Union is taking the lead in this process and is striving to become a global benchmark in AI regulation.

More information at europarl.europa.eu
