On 28 September 2022, the European Commission published proposals for adapting civil litigation rules in European Union Member States – and across the European Economic Area – to reduce the perceived difficulties in claiming non-contractual damages for harm caused by artificial intelligence (AI).

The proposal sits alongside wider reforms to the product liability regime. Both are closely intertwined with the EU’s proposed AI Act. The AI liability reforms are aimed at making it less burdensome for claimants to secure compensation, with the intention of promoting trust in this increasingly pervasive technology.

The black box and foreseeability

Claimants in civil law systems (which typically lack common law-style disclosure obligations) often have much less information than the defendant about the events that they believe caused them harm. This asymmetry of information may be exacerbated in relation to AI tools because of the “black box” problem.

“Machine learning” or “deep learning” systems are partly written by human developers and partly write themselves. Often, they continue to improve and adjust dynamically as new data is passed through them. Exactly how or why a particular output is generated may be very difficult to understand or express, because the system’s internal parameters are not transparent and the reasons those parameters take the values they do are not necessarily traceable. Such “black box” systems can also generate completely unexpected outputs.

Consequently, it may be very difficult to unravel why a harmful event involving an AI system occurred.
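To make the “black box” point concrete, here is a minimal Python sketch (entirely illustrative and not drawn from the proposal; all names and values are our own) of a tiny neural network. Its output depends on every learned weight at once, it adjusts itself with each new data point, and inspecting its parameters gives no human-readable reason for any particular output.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def predict(x):
    # Forward pass: the output depends on every weight simultaneously.
    hidden = np.tanh(x @ W1)
    return hidden @ W2

def online_update(x, target, lr=0.05):
    # One gradient step on a single new observation (simplified backprop):
    # the system "rewrites itself" a little every time new data arrives.
    global W1, W2
    hidden = np.tanh(x @ W1)
    err = (hidden @ W2) - target                   # prediction error
    grad_hidden = (err @ W2.T) * (1 - hidden**2)   # backpropagated signal
    W2 -= lr * np.outer(hidden, err)
    W1 -= lr * np.outer(x, grad_hidden)

x = np.array([0.4, -1.2])
print(predict(x))                   # some output...
online_update(x, np.array([1.0]))
print(predict(x))                   # ...already different after one data point

# Inspecting the parameters yields no human-readable reason for either output:
print(W1)  # just a grid of numbers with no traceable rationale
```

Scaled up to millions of parameters, this is the opacity that makes it so hard for a claimant to reconstruct why a particular harmful output was produced.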

A two-pronged solution

The proposed directive addresses fault-based liability rules. Separately, the EU’s product liability regime creates strict liability (not requiring proof of fault) and is also being reformed, particularly as regards digital products and services. Where AI is embedded in products or software that are subject to the product liability regime, claimants may have a choice of the legal basis on which to bring a claim for compensation. It will be important to monitor the development of both reform proposals together, as they are closely related. Both might substantially impact those who manufacture, distribute or use AI systems.

The proposed fault-based AI liability directive addresses the difficulties that claimants might otherwise face when attempting to:

  • gain access to technical and confidential information about the AI system in order to understand how it may have contributed to the harm suffered; or
  • prove the existence of a breach of a duty of care and a causal link with the damage suffered.

The Commission’s proposal aims to create a common approach across EU Member States, expressly permitting more favourable rules for claimants but setting a minimum standard.

Access to information

A new right of access to information is proposed for claimants seeking compensation in relation to an AI system classified as “high risk” under the AI Act (which is the main focus for the proposed new regulatory regime). The claimant must first make “all proportionate attempts” to obtain information from the defendant. If that is unsuccessful and if the claimant can show they have a plausible claim for damages, they can ask a court to order disclosure of “relevant evidence” from the defendant.

“Relevant evidence” is not currently defined, but could feasibly include some or all of the extensive compliance and technical documentation required for high-risk systems under the AI Act. It could also potentially extend to data sets used for training, validating or testing AI systems and even the content of mandatory logs that AI providers must maintain for traceability purposes.

The court will limit the order to what is necessary and proportionate and must take into account the legitimate interests of all parties, in particular trade secrets. It is worth noting that trade secrets are expressed as a factor for the court to consider, not a bar to disclosure. The court can order preservation of the evidence as well as disclosure.

Failure to comply with a disclosure order will trigger a rebuttable presumption of non-compliance with a duty of care by the defendant.

In civil law systems where disclosure is not part of standard litigation procedure, this new right will undoubtedly be significant and helpful to claimants.

Reducing the evidential burden

To address both the increased evidential difficulties that a claimant might face and the potential difficulties around foreseeability of harm and causation, the directive also proposes creating a “presumption of causality”. “National courts shall presume … the causal link between the fault of the defendant and the output produced by the AI system” where the claimant shows that all of the following requirements are met:

  • The defendant is in breach of a duty of care under national or EU law. The scope of what might constitute a relevant duty of care for these purposes is not defined, although for providers and users of high-risk AI systems, provisions of the AI Act that would constitute relevant breaches for liability purposes are clearly listed. In addition, where the defendant has failed to comply with a disclosure order, the claimant will benefit from a rebuttable presumption that there is breach of a duty of care.
  • It is reasonably likely from the circumstances in which the harm occurred that the defendant’s fault (that is, the breach of a duty of care) has influenced the output or lack of output of the AI system in question. For example, the defendant’s failure to follow safety instructions on how to use the AI tool could be very relevant to a claim for physical injury, whereas a failure by the defendant to file certain documentation by a certain date probably would not.
  • The output or lack of output of the AI system gave rise to the harm suffered.

The proposal is not as simple as a reversal of the burden of proof: the claimant still needs to demonstrate these three elements.

The most important change is in relation to the second of the three steps. Instead of having to prove that there is a causal link between the defendant’s breach of its duty of care and the AI output or lack of an output that gave rise to the harm, it will be sufficient simply to show that such a link is reasonably likely. Once that lower threshold has been met, the presumption of causality is engaged. In its press release accompanying the directive, the Commission explains that this change should “address the difficulties experienced by victims in having to explain in detail how harm was caused by a specific fault or omission”.

The presumption of causality

In relation to high-risk AI systems, the defendant will be shielded from the presumption of causality where it can demonstrate that the claimant has access to sufficient evidence and expertise to prove the causal link. For AI systems that are not high risk, the presumption will apply only where the court considers it would be excessively difficult for the claimant to prove the causal link. Finally, where an AI system is put into use by a non-professional user, the presumption will apply only if that user materially interfered with the conditions of operation of the AI system, or was required and able to determine those conditions of operation but failed to do so.
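For readers who prefer to see the interlocking conditions as logic, the following Python sketch encodes the three limbs and the carve-outs described above. It is purely illustrative: the function and parameter names are our own shorthand for the proposal’s concepts, not terms from the draft directive, and this is not legal advice.

```python
def presumption_of_causality_applies(
    breach_of_duty: bool,               # limb 1: breach of a duty of care (including a
                                        # breach presumed after failure to comply with
                                        # a disclosure order)
    influence_reasonably_likely: bool,  # limb 2: fault reasonably likely to have
                                        # influenced the AI output (or lack of output)
    output_gave_rise_to_harm: bool,     # limb 3: output (or lack of it) caused the harm
    high_risk_system: bool,
    claimant_has_sufficient_evidence: bool,  # defendant's shield for high-risk systems
    proof_excessively_difficult: bool,       # gateway for non-high-risk systems
    non_professional_user: bool = False,
    user_interfered_or_neglected_duty: bool = False,
) -> bool:
    """Illustrative reading of the proposed presumption of causality."""
    # All three limbs must be shown by the claimant.
    if not (breach_of_duty and influence_reasonably_likely
            and output_gave_rise_to_harm):
        return False
    # Non-professional users: presumption applies only if they materially
    # interfered with the system's operation, or failed to set its conditions
    # of operation when required and able to do so.
    if non_professional_user:
        return user_interfered_or_neglected_duty
    if high_risk_system:
        # The defendant can avoid the presumption by showing the claimant
        # already has sufficient evidence and expertise to prove causation.
        return not claimant_has_sufficient_evidence
    # Non-high-risk systems: presumption applies only where proving the
    # causal link would be excessively difficult for the claimant.
    return proof_excessively_difficult

# Example: high-risk system, all three limbs shown, claimant lacks the
# evidence needed to prove causation directly -> presumption engaged.
assert presumption_of_causality_applies(
    breach_of_duty=True,
    influence_reasonably_likely=True,
    output_gave_rise_to_harm=True,
    high_risk_system=True,
    claimant_has_sufficient_evidence=False,
    proof_excessively_difficult=False,
)
```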

The presumption of causality will be rebuttable, shifting the burden of proof onto the defendant to show that there is no causal link between the fault and the AI output; for example, by showing that the AI system could not have caused the harm in question. Even taking into account the disclosure provisions under the directive, the defendant may well be much better placed than the claimant to evaluate how a particular output was generated, so it may be easier for the defendant to disprove causation than it would be for the claimant to prove it.

The claimant will still need to prove the link between the output or lack of output and the harm suffered.

Strong interface with the AI Act

There is a very intentional interplay between the EU’s draft AI Act and the proposed new presumptions on liability, linking non-compliance with the EU’s planned regulatory regime with increased exposure to damages actions. This will enhance the role of private damages actions in driving compliance, alongside public regulatory enforcement, as has been seen in the field of competition law.

The potential for claimants to get hold of a defendant’s regulatory compliance documentation to inform their claims may add a tactical aspect to how those technical documents are written.

Interestingly, the proposal does not affect the due diligence obligations and related liability exemption for algorithmic decision-making systems (which may incorporate AI) that will apply to online intermediaries under the EU’s soon-to-be-adopted Digital Services Act (DSA). Hosting providers, platforms and other internet intermediaries will need to monitor how the interface between the DSA, the AI Act and the AI liability directive develops.

What about the UK?

These reforms will not, of course, apply to the post-Brexit UK. As regards the new right for claimants to seek disclosure of evidence from the defendant, extensive disclosure requirements already apply as standard under English civil litigation procedure: AI Act compliance documentation and other evidence would be disclosable to a claimant under English law if relevant to the subject matter of the litigation.

By contrast, there is no current plan to create a presumption of causation for claimants in English courts similar to that proposed by the Commission. Generally speaking, the UK government plans to take a light-touch approach to regulating AI, devolving it to existing regulators using existing powers, guided by overarching principles and without new legislation. It will be interesting to track how litigation around AI applications develops in the UK without the boost envisaged in the directive.

Timeline for change

The proposed directive needs to be finalised and agreed between the Commission, the European Parliament and the EU Council. This process is likely to result in amendments and refinements to the original drafting, and it rarely takes less than 18 months and often takes longer.

Once enacted at EU level, the principles in the directive will then need to be implemented at national level. This two-stage approach, with national implementation of EU legislation, was selected because of the extensive differences between the civil litigation rules of EU Member States: it enables each jurisdiction to enact the rules in a way that fits its national regime. The proposal gives Member States two years to implement the directive into national law, known as the transposition period.

As currently drafted, the new rules would apply only to harm that occurs after the end of the transposition period, without retrospective effect. The Commission will assess the success of these interventions once the new provisions have been in force for five years, reviewing in particular its decision not to create a strict liability regime for AI systems.

Source: Osborne Clarke