Health Law Blog Sweden

ISSN: 2004-8955

AI in Healthcare and the Liability Vacuum in EU Law

AI-generated image created using Microsoft Copilot, 2025

Petra Holmberg*

A Technological Crossroads

Artificial intelligence (AI) poses a range of potential threats, and concerns have become more visible in recent years.[1] In 2023, the Future of Life Institute issued an open letter urging a pause in AI development.[2] Later that year, the Center for AI Safety published a statement equating AI-related risks with those of pandemics and nuclear war.[3] These calls for caution gained traction largely because of the support of prominent AI researchers and industry leaders.

Amid these warnings, AI has been introduced into critical sectors, healthcare in particular. This raises an essential question: Is the deployment of AI in healthcare a ticking time bomb or a gateway to revolutionary medical advancement? The answer lies partly in how effectively legal frameworks can ensure safety, liability, and trust in AI systems.

The EU’s Regulatory Response

Recognising the risks and opportunities posed by AI, the European Commission proposed, and the EU subsequently adopted, the world’s first comprehensive legal framework for AI: the Artificial Intelligence Act (AI Act).[4] The Commission justified this legislative move by stressing the need for “trustworthy AI” that upholds safety, health, fundamental rights, and democratic values.[5] Although existing legislation provided some protection, it was not sufficient to address the specific challenges that AI systems can pose.[6]

Under the AI Act, systems that could significantly impact individuals’ health and safety, such as AI-powered medical devices, are classified as “high-risk.” These high-risk systems must meet the strictest safety and transparency standards, ensuring that their use aligns with the values enshrined in the EU Charter of Fundamental Rights.[7] Although the AI Act introduces strong preventive measures, questions remain about how effectively it addresses liability when AI systems malfunction.

Trust is paramount in healthcare, where errors can have life-altering consequences. Accordingly, the concept of “trustworthy AI” has been framed around safety, particularly patient safety, and legal responsibility for harm caused.[8] Precisely because AI is deployed in such a sensitive context to improve patient health, guarantees of patient safety are essential.

Challenge of Liability Guarantees

The European approach to liability in AI-powered medical devices is complex. It integrates traditional product liability principles based on the Directive on liability for defective products with new considerations brought by AI’s unpredictability and opacity.[9] The established model of European product liability law strives for a balanced allocation of risks between manufacturers and users.[10] However, AI challenges this equilibrium.

Medical AI systems, particularly those based on deep learning, often operate as “black boxes” to varying degrees. Their decision-making processes are not fully transparent, even to their developers. As a result, the traditional concept of “defect,” typically applied to physical flaws in products, becomes difficult to define in an algorithmic context. This lack of transparency complicates efforts to establish a standard of defectiveness or to assign fault in the event of harm.[11] The situation has sparked a debate over the adequacy of existing liability frameworks and the need for a new legal paradigm, since the current benchmarks for liability do not reflect AI’s evolving behaviour.[12] A reimagined liability regime is therefore essential to closing the gaps that AI technologies have opened.

The AI Act reflects an awareness that different AI systems pose different levels of risk. AI-powered medical devices, which may directly influence diagnoses or treatment decisions, are considered high-risk because of the danger they pose to patients’ health and safety.[13] Importantly, safety and liability are regulated by distinct legal mechanisms. While the AI Act imposes rigorous safety standards on high-risk AI systems, it cannot completely eliminate the risk of harm. A clear liability framework is therefore needed to complement preventive regulation.[14]

Yet enforcing liability for AI-caused harm is anything but straightforward. The black box problem refers to the lack of transparency in many AI algorithms: deep learning models and other advanced AI systems are inherently opaque technologies.[15] This opacity makes it difficult, sometimes even impossible, for patients to prove a causal link. Patients bear the burden of proving not only that they were harmed, but also that the AI system was defective and that the defect directly caused the harm, an often impossible task.[16]

The Withdrawn Directive – A Missed Opportunity

To address this problem, the European Commission proposed the Artificial Intelligence Liability Directive.[17] It aimed to facilitate compensation claims by introducing a presumption of a causal link in specific situations involving high-risk AI systems. Under Article 4 of the proposed directive, a presumption of causality would arise when:

  1. The manufacturer, or a person for whose behaviour the manufacturer is responsible, failed to comply with a duty of care.
  2. It could reasonably be considered likely that this failure influenced the output produced by the AI system (or the system’s failure to produce an output).
  3. The patient could demonstrate that the AI system’s output (or its failure to produce one) caused the harm.[18]

Had it been enacted, the directive would have represented a landmark in AI liability for patients harmed by high-risk AI systems.

Surprisingly, however, the European Commission withdrew the proposal in its 2025 work programme, citing a lack of foreseeable agreement.[19] This explanation was met with scepticism, not least because it came even before the rapporteur’s report had been published. The European Parliament’s rapporteur, Axel Voss, criticised the decision, stating: “Big Tech firms are terrified of a legal landscape where they could be held accountable… Instead of standing up to them, the Commission has caved.”[20]

The withdrawal of the directive does not mean patients are without protection. If harm arises from medical malpractice, national laws still apply.[21] When the AI system itself is defective, however, patients find themselves in a legal grey zone, as the AI Act and existing product liability rules offer only limited recourse. The revised Directive on liability for defective products from 2024 does include provisions specific to software-based products, acknowledging the complexities of AI. Notably, Article 10 introduces a presumption of defectiveness where proving causality is excessively difficult and the harm is likely to stem from a product defect. While promising on paper, this provision lacks detailed requirements. It delegates the final decision to national courts, which must determine whether an AI system is technically or scientifically complex enough for the presumption to apply.

This opens the door to inconsistent outcomes across the EU, where one Member State may find an AI system too complex while another does not. Manufacturers may then choose to market their products only in Member States with lower liability exposure, undermining the EU’s goal of a harmonised internal digital market. It also plainly contradicts the promise made by Ursula von der Leyen at the AI Action Summit that the AI Act would give businesses, users and above all manufacturers, clearer regulatory requirements.[22]

Conclusion

The decision to withdraw the AI Liability Directive marks a significant setback in Europe’s efforts to regulate artificial intelligence. It weakens the AI Act’s core ambition – to foster trustworthy AI – and undermines the EU’s pledge to ensure high safety standards and liability.

Patients are left vulnerable without a unified legal mechanism for addressing liability for harm caused by high-risk AI systems. The question of liability is left to national courts without precise criteria to apply in such cases, which creates an additional administrative burden. This move plainly contradicts the AI Act’s fundamental purpose and weakens the newly adopted legislation. In my opinion, the high-risk classification thereby loses one of its fundamental purposes, namely to guarantee enforceable liability for AI manufacturers. If the EU is serious about becoming a global leader in ethical AI, its leaders must revisit the question of liability. Trustworthy AI cannot exist without a transparent liability mechanism. Hopefully, future legislative efforts will address this void and restore confidence for both patients and manufacturers.


* Petra Holmberg is a postdoctoral researcher at the Department of Law, Lund University.

[1] Cerf M. and Waytz A. (2023). If you worry about humanity, you should be more scared of humans than of AI. Bulletin of the Atomic Scientists, 79(5), 289–292.

[2] Future of Life. Pause Giant AI Experiments: An Open Letter. (22 March 2023). Retrieved (30 April 2025): Pause Giant AI Experiments: An Open Letter – Future of Life Institute

[3] Center for AI Safety. Statement on AI Risk. (2023). Retrieved (30 April 2025): Statement on AI Risk | CAIS

[4] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828. OJ L, 2024/1689.

[5] Article 1(1) AI Act.

[6] European Commission. Shaping Europe’s digital future – AI Act. Retrieved (30 April 2025): AI Act | Shaping Europe’s digital future

[7] European Union. “Charter of Fundamental Rights of the European Union.” Official Journal of the European Union C83, vol. 53, European Union, 2010, p. 380.

[8] World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance. Executive summary. ISBN 978-92-4-003740-3.

[9] Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC. OJ L, 2024/2853.

[10] Haftenberger A. and Dierks C. (2023). Legal integration of artificial intelligence into internal medicine: Data protection, regulatory, reimbursement and liability questions. Med (Heidelb), 64(11), 1044–1050.

[11] Schneeberger D., Stöger, K. and Holzinger, A. (2020). The European Legal Framework for Medical AI. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2020. Lecture Notes in Computer Science, vol 12279. Springer.

[12] Duffourc MN. and Gerke S. (2023). The proposed EU Directives for AI liability leave worrying gaps likely to impact medical AI. NPJ Digit Med, 6(1):77.

[13] Article 6 AI Act.

[14] Shavell S. (1984). Liability for harm versus regulation of safety. Journal of Legal Studies, 13(2), 357–374.

[15] Statens Medicinsk-Etiska Råd. Kort om Artificiell intelligens i hälso- och sjukvården. (2022). Retrieved (30 April 2025): smer-2020-2-kort-om-artificiell-intelligens-i-halso-och-sjukvarden.pdf

[16] Article 10 Directive on liability for defective products.

[17] Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). COM/2022/496 final.

[18] Article 4(1) AI Liability Directive.

[19] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Commission work programme 2025 – Moving forward together: A Bolder, Simpler, Faster Union. COM/2025/45 final.

[20] IAPP. European Commission withdraws AI Liability Directive from consideration. (12 February 2025). Retrieved (30 April 2025): European Commission withdraws AI Liability Directive from consideration | IAPP

[21] Article 168(7) TFEU.

[22] IAPP. European Commission withdraws AI Liability Directive from consideration. (12 February 2025). Retrieved (30 April 2025): European Commission withdraws AI Liability Directive from consideration | IAPP
