Is the approval of the European Union AI Law really good news?


Yesterday the European Parliament gave the green light to the ‘European Artificial Intelligence Regulation’, better known as the ‘AI Act’. The regulation sets out a series of requirements that companies in the sector must meet if they want their AI systems to operate within the member states of the European Union. How does this affect the models currently available in Spain? Will we lose access to ChatGPT and similar tools?

The AI Act received 523 votes in favor, 46 against and 49 abstentions in Parliament. It is one of the first comprehensive regulations on artificial intelligence and was passed in record time, driven by the urgency of legislating a technology that has been gaining strength, especially since the launch of ChatGPT in November 2022.


Following the release of OpenAI’s chatbot, ever more powerful AI systems have arrived, such as Gemini, DALL-E 3, Midjourney and Llama 2, among others. Now, the companies behind these systems will have to review them to check that they comply with European legislation. Otherwise, they will have to adapt them or cease operating within the old continent.

AI must guarantee transparency

Among the rules included in the text approved by the European Parliament, the transparency requirements on content generation stand out. These oblige companies that develop AI to create tools that help users clearly distinguish AI-generated content from human-made content.

That is, ChatGPT and other services must include a watermark or similar marker indicating when a text, image, audio clip or other file has been created by a machine. In this way, Europe aims to discourage people from passing off machine-generated content as their own, guaranteeing transparency and curbing misinformation.

Copyright and AI

Another issue of concern since AI development began to grow by leaps and bounds is copyright protection. Last year, OpenAI was hit with a class-action lawsuit over alleged copyright infringement, and recently the same thing happened to NVIDIA over NeMo, its LLM framework.

The EU AI Law takes this into account and asks companies to respect copyright when training their models. This standard significantly limits the amount of data that AI developers can use to create more capable chatbots.


High-risk uses

The EU classifies AI into several risk levels. There is minimal risk, such as spam filters or video games with integrated AI; limited risk, such as chatbots, which face transparency obligations; high risk, covering systems used in critical infrastructure, education and training, border control or essential services; and unacceptable risk, meaning systems that pose a threat to safety and rights and will not be permitted.

Applications of ChatGPT or other artificial intelligence tools in what the EU considers high-risk areas will be subject to stricter regulations. The legislation requires companies to subject their systems to exhaustive risk assessments and to meet additional requirements demonstrating that they do not harm people.

High-risk uses are those involving health, security, fundamental rights, the environment, democracy and the rule of law. This means the most affected areas will be education and vocational training, employment, essential public and private services such as healthcare or banking, justice, and migration and border management, among others.

Innovation in AI within Europe

Despite all the restrictions, the EU wants to bet on artificial intelligence innovation within its borders. That could pose certain challenges to foreign companies like OpenAI, as new competitors could emerge within the continent backed by public funding.

However, it could also be an opportunity for OpenAI if it collaborates on European projects in the sector. It is worth noting that Sam Altman visited several EU countries last year, including Spain, and spoke of his commitment to future regulations.

The use of biometric recognition

In addition, the law prohibits biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the Internet or CCTV footage. It also bans emotion recognition in the workplace and in schools, as well as predictive policing based on AI algorithms.

On the other hand, “real-time” use of biometric identification systems by law enforcement will be allowed only in very specific situations. Cases in which the use of this technology will be considered include “the targeted search for a missing person or the prevention of a terrorist attack,” according to the European Parliament.


Conclusions on the European AI Law

The new legislation approved by the European Parliament clearly intends to ensure that AI complies with rights and laws that already exist on the continent, such as data protection and copyright. It also aims to mitigate the risks that some experts warn could arise from the rapid progress of AI, through stricter oversight in specific cases.

This could be detrimental to firms with extensive experience in the sector, such as Google with Gemini or OpenAI with ChatGPT. If they do not comply with the requirements, they could have to withdraw from Europe for a time. However, they will be able to return once they make the changes needed to fit within the European legislative framework.

Saad Javed
Saad is a tech enthusiast and gamer, blending expertise in computer science with a passion for cutting-edge technology and gaming.