EU guidelines: Easier compensation for faulty AI or software

Victims of damage related to AI should be able to get compensation more easily. The EU Commission wants to generally adapt product liability to the digital age.

EU citizens should be able to obtain compensation more easily if they fall victim to a faulty artificial intelligence (AI) system. This is provided for in a draft directive on AI liability that the EU Commission presented on Wednesday. Job applicants who were discriminated against in a recruitment process in which the employer relied on AI technology, for example, could benefit from this.

In addition, the Brussels institution wants to adapt the 40-year-old Product Liability Directive to the digital age, the circular economy and the effects of global value chains. In the future, there should also be a clear right to compensation for defects and damage that products such as robots, drones and smart home systems suffer from faulty software updates, AI or digital services.

According to the plan, the prerequisite for this is that these components are required for the operation of the respective product. Manufacturers should also be held liable if they fail to fix cybersecurity vulnerabilities.

The Commission considers a separate directive on AI liability necessary in order to establish uniform rules for access to information and to ease the burden of proof in connection with damage caused by AI systems. It wants to introduce more comprehensive protections for victims, by which it means both individuals and companies. At the same time, the AI sector is to be strengthened through legal certainty and guarantees.


Anyone affected by errors in an AI system currently has great difficulty suing for damages. Victims must prove discriminatory behavior, for instance, and establish a connection to the harm suffered. A bank's scoring system could incorrectly classify a customer as not creditworthy, possibly because an employee used the program incorrectly. Establishing the connection between such an action and the harmful result for the consumer is considered particularly difficult with AI, because the technology is complex and the algorithms used are often opaque and hard to explain.

The planned law aims to harmonize certain rules for claims of damage caused by misconduct that do not fall within the scope of the Product Liability Directive. This includes, for example, violations of privacy and data protection, or defects caused by security problems.

The directive also aims to simplify the legal process for victims when it comes to proving that a person's fault caused the harm. To this end, the Commission wants to introduce a "presumption of causality". Victims would then no longer have to explain in detail how the harm was caused by a specific fault or a specific omission. The prerequisite is that "a causal connection with the AI performance can be assumed with reasonable discretion".

In parallel, those affected should be given more tools to claim compensation under civil law where high-risk AI is involved. According to the draft, they are entitled to access evidence held by companies and providers. This covers the release of data that developers used to train algorithms, user logs, and information on quality management. Business secrets should remain protected.


Victims may have to assert this extended right to information in court. If they still do not succeed in obtaining the information they seek, this would count against the defendant in the damages proceedings: in that case, the burden of proof would be reversed. As examples of where this legal toolbox should come into play, the Commission cites damage caused when an operator of parcel drones does not comply with the instructions for use, or when a provider fails to meet the requirements for AI-supported employment services.

The Brussels executive speaks here of an "easing of the burden of proof". However, it does not propose "a reversal of the burden of proof per se", in order to avoid "providers, operators and users of AI systems being exposed to higher liability risks". In addition, the proposal is intended "to strengthen public confidence in AI technologies and to promote the introduction and spread of artificial intelligence throughout the EU".

With the draft amendment to the Product Liability Directive, the Commission intends to generally modernize the existing system of no-fault liability at EU level. As far as easing the burden of proof is concerned, the two directives are meant to introduce similar tools and use comparable wording to ensure consistency regardless of the redress route chosen.

The IT association Bitkom welcomed the fact that the Commission wants to settle "initial fundamental questions about liability for damage when using AI". To advance the use of the technology in Germany, "legal clarity must be created as quickly as possible for practical use". The association considers it positive that AI systems are not generally classified as a source of danger posing particularly high risks to life and health, and that there is no no-fault liability for manufacturers or operators. It viewed the partial reversal of the burden of proof more critically: it remains open with which "other plausible explanations" the presumption of fault could be rebutted.



(mho)