Health Law Blog Sweden

ISSN: 2004-8955


Generational Perspectives in AI Health Law

This is an AI-generated image. The image was created with the AI tool ChatGPT

Titti Mattsson and Noah Löfqvist*

AI is driving a new era in the public sector and in health care. The AI transition raises crucial health law questions about data harmonisation, accountability, and how patient privacy can be safeguarded. As AI becomes part of the basic infrastructure of public administration, many people also find themselves in a time of sweeping change. For those shaped by an analogue administrative culture, the technology often raises, above all, questions of accountability, transparency, and legal certainty. For those who have grown up in a digital ecosystem, AI appears a natural part of legal practice. At the intersection of these perspectives, the conditions for the health law of the future are taking shape.

Introduction

The digitalisation of the public sector has moved from tool development to infrastructural transformation. AI is presented by the Government and public authorities as crucial for making public services more efficient and modern.[1] According to the Agency for Digital Government (Digg), "ethical AI equals responsible AI".[2] Sweden's digitalisation strategy describes AI as a means of raising the quality of public-sector activities, while noting that the development of data- and AI-driven public services requires support and steering from state actors.[3] In its roadmap, the AI Commission stresses that it is essential that the public sector can use AI both proactively and autonomously in certain functions critical to society.[4] AI Sweden maintains a mapping of AI initiatives in health care, according to which 197 such initiatives had been identified as of March 2026.[5] The question is thus no longer whether the public sector should use AI, but how.

A pressing question is how the ongoing transition within public administration affects lawyers, and what it means for the profession and for the law going forward. The legal profession appears to be in the midst of a sharp technological shift in which older and younger perspectives meet.

The Encounter

Administrative decision-making has traditionally been based on the handling of individual cases, transparency of decision materials, and a clear chain of human accountability, a mode of decision-making into which the older generation of lawyers has been schooled. When public authorities now use AI, it entails a structural change in how decision support is created and used. From an older perspective, this can feel methodologically alien. Younger lawyers and law students, by contrast, often move more freely in a digital ecosystem where language models, apps, and data-driven services are already a natural part of everyday life. Students ask language models to summarise case law and explain principles and concepts, and instead of family or classmates, the same models proofread essays and assignments.[6] We also see AI tools increasingly being embraced by lawyers around the world in their advisory work.[7] Why, then, should public-sector bodies lag behind and forgo these efficient tools?

Here a risk arises that we foster lawyers who can use the technology effectively but do not necessarily understand the rule-of-law mechanisms it may challenge. We also share the impression that many take a relatively uncritical approach to AI and the content it can generate, an observation with some support in research, where the term "cognitive surrender" has recently been coined.[8] The hallucinations of language models are frequently mentioned, a problem that has already arisen in Swedish adjudication.[9]

It would be a simplification to describe this as a conflict between technology scepticism and technology optimism. Rather, it is a matter of different reflexes. One emphasises accountability, traceability, reasoned decisions, and legal certainty. The other takes it for granted that public administration should use efficient tools to deliver efficient operations and services.

The Government appears to be walking a tightrope between the two camps. Health care is cited as an example: wider use of AI could allow staff to devote more time to patient encounters and other person-centred tasks. At the same time, the strategy is anchored in responsible use. Data are to be handled securely and correctly, ethical principles and core values are to be observed, and decision support is to be designed with transparency, equal treatment, and traceability.[10]

Ena: A System-Changing Initiative

Ena aims to create common national frameworks for the public sector. The vision for 2030 is a public sector that works with data-driven methods and where residents can access simple and secure digital services. Within this development, a connection and innovation centre is also proposed, with a particular focus on AI and data, where public authorities can test different solutions in controlled environments.[11] The digitalisation of the public sector is today spoken of as the establishment of a digital infrastructure.[12]

The older generation of lawyers will probably ask how such a centralisation of public-sector development in Sweden is to uphold core values such as autonomy, the allocation of responsibility, and transparency. Even if centralised systems generate high efficiency, a divided and almost fragmented administration has traditionally been seen as a safeguard against systemic failure. AI in large-scale national platforms risks evoking new risk scenarios. For the younger generation, such initiatives may instead appear a rational and necessary step towards modern and equal public services.

It is precisely in the tension between these outlooks that central health law questions arise, not least questions about how data should be standardised and about who is responsible when an AI model causes harm. A further central question is to what extent centralised data structures are compatible with patient privacy and self-determination. An inquiry chair appointed by the Government has, in an interim report, presented proposals for harmonising data within health care.[13] These include proposals for new legislation to enable information exchange in the field.[14] Ensuring a successful transition to a new digital infrastructure in health care requires a clear allocation of responsibility. SOU 2026:6 was recently published, setting out proposals on which new roles are needed and how responsibility should be allocated. The purpose of the reform is patient-centred in character.[15] The coordinator stresses that implementation presupposes leadership with the capacity to take difficult decisions, prioritise effectively, and create the conditions for collaboration, as well as the need to coordinate the use of AI in health care with the ongoing work on the digital infrastructure.[16]

AI in the Exercise of Public Authority: From Case Handling to Prediction

Public authorities already use AI in operational activities, which illustrates the breadth of the transition. Försäkringskassan (the Swedish Social Insurance Agency) sees more efficient case handling and improved decision support as core benefits, but also identifies risks such as discrimination and the complexity of the EU's new AI Act.[17] Pensionsmyndigheten (the Swedish Pensions Agency) and Skatteverket (the Swedish Tax Agency) use both predictive models and generative AI in their operations.[18]

AI-based decision support in health care can affect everything from diagnosis to prioritisation. Here AI meets the part of the law where the consequences are most tangible: physical and mental health. E-hälsomyndigheten (the Swedish eHealth Agency) describes both the potential of and obstacles to AI in health care. The Swedish administrative model, with its decentralised decision-making, gives rise to variation in implementation, and unclear and complex regulatory frameworks also impede the development of nationally uniform solutions.[19]

The traditional administrative culture is more likely to point to the risks of replacing clinical experience with statistical models, perhaps primarily from a legal certainty perspective. At the same time, the introduction of certain digital solutions offers an opportunity to promote equal care and to improve risk identification.

Concluding Remarks

AI appears to be our era's shared project across generational divides, including within the law. In the public sector and in health care, it entails changes that affect the profession, legal education, and our legal norms alike.

From a generational perspective, we see two strands: the older generation of lawyers emphasises legal certainty, the allocation of responsibility, systemic risks, and administrative ethics as decisive values for identifying and preventing legal and democratic risks, while the younger generation calls for system architecture and digital interoperability as necessary preconditions for developing the regulation of the future.

Developments in health law are shaped by the interplay between these generations. At a time when AI is becoming an integrated part of decision-making in public administration, the law needs both to stand firm and to renew itself. That makes the generational perspective a practical precondition for a responsible and legally secure digital future in, among other areas, health care. Our conclusion is therefore that rapid technological development calls not for less law but for more, in new forms and through joint work across the generations.


* Titti Mattsson is Professor of Public Law at the Faculty of Law, Lund University.

Noah Löfqvist holds a Bachelor of Arts and is a law student at the Faculty of Law, Lund University.

The authors take part in the WASP (Wallenberg AI, Autonomous Systems and Software Program) project The Automated State, based at Lund University.

[1] Regeringen, Sveriges AI-strategi, available at <https://www.regeringen.se/regeringens-politik/sveriges-ai-strategi/> (accessed 1 April 2026).

[2] Myndigheten för digital förvaltning (Digg) (2026), Använd generativ AI på ett etiskt sätt (AI för offentlig förvaltning, 2 March 2026), available at <https://www.digg.se/ai-for-offentlig-forvaltning/riktlinjer-for-generativ-ai/anvand-generativ-ai-pa-ett-etiskt-satt> (accessed 1 April 2026).

[3] Regeringen (2025), Sveriges digitaliseringsstrategi 2025–2030, 28 May 2025, pp. 26 f., available at <https://www.regeringen.se/contentassets/fe3e296228fb474f803a986ae3842b4c/sveriges-digitaliseringsstrategi-20252030.pdf> (accessed 2 April 2026).

[4] Statens offentliga utredningar (SOU) 2025:12, AI-kommissionen, Färdplan för Sverige.

[5] AI Sweden, Vårdkartan – Utforska AI-initiativ inom vårdsektorn, available at <https://vardkartan.ai.se/> (accessed 2 April 2026).

[6] Responsible and creative use of generative AI is encouraged by, for example, Lund University: Lunds universitet (2025), Policy med principer för användning av generativ AI inom Lunds universitet, adopted by the Vice-Chancellor on 11 December 2025, reg. no. STYR 2025/3053.

[7] See, e.g., https://legora.com/; https://www.harvey.ai/.

[8] See, e.g., Shaw, Steven D. & Nave, Gideon (2026), Thinking—Fast, Slow, and Artificial: How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender, 11 January 2026, available at <https://doi.org/10.31234/osf.io/yk25n_v1> (accessed 2 April 2026), in which the authors coin the term "cognitive surrender".

[9] Eskilstuna District Court, judgment of 12 December 2025 in case no. T 667-25.

[10] Regeringen, Sveriges digitaliseringsstrategi 2025–2030 (n 3).

[11] Myndigheten för digital förvaltning (2025), Förslag till långsiktig utveckling och förvaltning av Ena, 31 January 2025, pp. 10 and 33, available at <https://www.digg.se/download/18.5dc93d131948db30f8dc89/1738837000456/Rapport_F%C3%B6rslag%20till%20l%C3%A5ngsiktig%20utveckling%20och%20f%C3%B6rvaltning%20av%20Ena.pdf> (accessed 2 April 2026).

[12] See, e.g., the Government commission Uppdrag att möjliggöra en nationell digital infrastruktur för hälsodata (S 2024:A).

[13] Health data are to be available throughout the chain of care. Sveriges riksdag (2024), Förslag till lag om nationell infrastruktur och tjänster för elektroniskt informationsutbyte på hälso- och sjukvårdsområdet (interim report 4, S2024:A).

[14] See Sveriges riksdag (2024) (n 13).

[15] SOU 2026:6, En nationell digital infrastruktur i hälso- och sjukvården: Styrning med tydliga roller och ansvar för aktörerna, p. 34.

[16] SOU 2026:6, p. 56.

[17] Försäkringskassan (2025), Försäkringskassans arbete med artificiell intelligens (report), pp. 4–5 and 10–14.

[18] Pensionsmyndigheten (2025), Redovisning av Pensionsmyndighetens arbete med artificiell intelligens (report, 12 June 2025), p. 8; Skatteverket, Skatteverkets arbete med AI, available at <https://skatteverket.se/omoss/jobbahososs/jobbamedithososs/skatteverketsarbetemedai.4.13948c0e18e810bfa0c6d0c.html> (accessed 31 March 2026).

[19] E-hälsomyndigheten (2025), Strategiska utvecklingsområden för fortsatt digitalisering av hälsa, vård och omsorg (annual report 2025), pp. 27–29.

This entry was posted in

Posts Swedish Health Law



Health Law Research Centre’s Spring Seminar Series

Dear all,

We are excited to invite you to the upcoming Spring Seminar Series hosted by the Health Law Research Centre! Please see the schedule below. The seminar series will be both insightful and engaging, covering a diverse range of topics at the intersection of health and law. Your active participation will undoubtedly contribute to the success of the series.

If you have any questions, please feel free to reach out to the editors of the Blog or alma.bertilsson@jur.lu.se.

We look forward to seeing you at the Health Law Research Centre’s Spring Seminar Series.

January 26, 2026

This entry was posted in

Events Swedish Health Law


When AI Becomes a Health Advisor: Why ChatGPT Health Raises New Legal and Public Health Alarms

This is an AI-generated image. The image was created with the AI tool ChatGPT

Ana Nordberg,[1] Petra Holmberg[2] & Sarah de Heer[3]

Introducing ChatGPT Health

On January 7, 2026, OpenAI announced the upcoming launch of ChatGPT Health, described to consumers as "a dedicated experience that securely brings your health information and ChatGPT's intelligence together, to help you feel more informed, prepared, and confident navigating your health".[4] Alarm bells were immediately triggered around the legal and public health implications.[5]

Likely, it will take some time before the tool is widely open to all users/subscribers. The recent announcement instructs ChatGPT subscribers to sign up for a waitlist to access the tool once it becomes available. This means that an undisclosed ‘small number’ of current subscribers (on either ChatGPT Free, Go, Plus, or Pro plans) will be selected as beta users to ‘refine the experience’ and will likely be asked to provide feedback and report problems prior to a broader public launch.

Beta users are restricted to those residing outside of the European Economic Area, Switzerland, and the United Kingdom, likely due to the need to navigate regulatory constraints imposed by the GDPR, the Medical Devices Regulation (MDR), and the AI Act (and similar national legislation). However, OpenAI promises to expand access and make Health available to all users on web and iOS in the coming weeks,[6] indicating that we should expect a global launch before the spring.

It is undisputed that, if used responsibly, such tools have the potential to lower the threshold for seeking health information and medical advice, supporting individuals in taking charge of their own health and facilitating disease prevention and self-care. On the other hand, this is a worrying development given the extent of the privacy concerns and regulatory issues involved, including data bias and transparency. This piece briefly identifies and explores areas that raise legal and public health concerns.

Market-driven Tool

Thematically dedicated Large Language Models (LLMs) are a natural evolution of general-purpose tools like ChatGPT. After all, one of the main criticisms of general-purpose tools is their propensity to produce plausible but false statements (‘hallucinations’) and inaccurate results. Smaller, dedicated models are likely to be less problematic, particularly with retrieval-augmented generation (RAG) and fine-tuning.[7] They are trained on more carefully curated and higher-quality data.[8]
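
The retrieval-augmented generation mentioned above can be illustrated with a minimal sketch: before answering, the system retrieves the most relevant documents from a curated corpus and supplies them as grounding context, so the model answers from sources rather than from its parametric memory alone. The corpus, the word-overlap scoring, and the prompt format below are illustrative assumptions, not OpenAI's or any product's actual implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The corpus, scoring, and prompt format are illustrative assumptions;
# production systems use vector embeddings and an LLM for the answer.
import re

def tokenize(text: str) -> set[str]:
    """Lower-case word tokens, hyphenated words kept whole."""
    return set(re.findall(r"[\w-]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank curated documents by word overlap with the query."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the answer in retrieved sources rather than in the model's
    parametric memory alone, which is what reduces hallucination risk."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Illustrative curated corpus of health guidance snippets.
corpus = [
    "Hypertension guideline: first-line treatment and monitoring.",
    "Influenza vaccination: recommended groups and timing.",
    "Migraine management: triggers, acute and preventive therapy.",
]
prompt = build_prompt("What is the first-line treatment for hypertension?", corpus)
```

The point of the sketch is the architecture, not the scoring: because the generation step is constrained to retrieved, curated text, the quality of the corpus directly bounds the quality of the answer.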

Health and wellness topics have been among the most frequently searched since the dawn of the internet. When asked about the proportion of health, health-related, and wellness searches and keywords since its launch in 1998, Google's AI mode responded that in the early years, health queries represented a very small fraction of daily searches globally, but that the weight of health-related and wellness searches has steadily increased: today, more than 1 in every 20 searches is health-related.[9] A 2003 study estimated that, as of 2002, 4.5% of all web searches were health-related.[10] In recent years, this share has increased, stabilising at approximately 5% to 7% of all daily queries.[11]

Assuming these statistics are accurate, roughly 700,000 health questions are searched per minute, amounting to more than 1 billion per day globally. Adding to these impressive numbers is a growing number of searches conducted on other engines and open LLM tools and services such as ChatGPT, as well as the use of apps and connected objects with chat or search functionalities, which are taking an increasingly prominent role in the world's daily digital life.

In 2025, Google had a 79.88% share of the search engine market, a decline of 9.33% compared with historical data over the last ten years. While Google remains dominant, the data suggest a steady decline in the use of traditional search engines since the release of Bing Chat in 2023 and the subsequent widespread availability of ChatGPT.[12] Since then, Google's search engine has also integrated AI tools and functionalities, and we predict that the use of LLMs to answer health-related queries is, and will continue to be, a growing global trend. For example, OpenAI reports that "over 230 million people globally ask health and wellness-related questions on ChatGPT every week", with health-related prompts featuring prominently among the most common ways users deploy ChatGPT.[13]

At first glance, this new tool may seem like a safer place for users to get qualified medical advice, especially given OpenAI's guarantee that "you can securely connect medical records and wellness apps to capture conversations in your own health information, so that responses are more relevant and useful to you".[14] However, regardless of this "good intentions" rhetoric, the product features and announced safeguards give rise to more questions than answers.

Privacy Concerns

In its initial rollout, the product will not be accessible within the European Union. Early observations suggest that geo-blocking is in place, with EU/EEA users receiving notifications that the service is unavailable in their region. Such restrictions, however, are easily bypassed and are likely to be temporary. Although the timeline remains uncertain, the overall trajectory points toward a global launch and eventual integration with existing and emerging digital health products and services.

A central feature of the tool is user-driven personalisation. Individuals will be able to link the system to their medical records and to data streams from wellness applications such as Apple Health, Function, and MyFitnessPal.[15] The company claims that ChatGPT Health will incorporate "additional, layered protections designed specifically for health—including purpose-built encryption and isolation to keep health conversations protected and compartmentalised." Yet the mechanisms through which these protections will operate remain opaque. It is also unclear whether the system will retain pseudonymised or anonymised data, and whether an EU/EEA-specific version will meet the requirements of the GDPR.

At present, the company is adamant that health data uploaded by users, or otherwise accessed, will not be used to train the general ChatGPT model. Nonetheless, the possibility remains that, after the 30-day window during which users may correct or delete their data, such information could be used to train ChatGPT Health itself.[16] The intention to leverage chat history from the general ChatGPT model to enhance the accuracy and relevance of ChatGPT Health's outputs raises significant concerns. If the system incorporates wellness and medical data of uncertain origin, accuracy, or reliability, users may be exposed to assumptions, flawed inferences, and potentially harmful guidance.[17] This risk extends beyond data protection to the safety and well-being of users.

Another unresolved issue concerns data storage. It is not yet known whether personal health data will remain stored on users’ devices or be transferred to company servers, the locations of which have not been disclosed.

Finally, users engaging with the general ChatGPT interface may be redirected to the health-specific product when raising medical queries. This strategy may be intended to address ongoing criticism of the handling of sensitive personal data, but it also underscores the need for transparency, regulatory oversight, and robust safeguards before such a system becomes widely available.

Certification, Data Bias and Transparency

A concern even more pressing than protecting sensitive data is the nature of the health-related information the system will provide to users. Although the company emphasises that the product has been developed with input from healthcare professionals, the tool plainly falls within the definition of a medical device set out in Article 2 of the MDR.[18] The company nonetheless disclaims any diagnostic function. According to the developer, ChatGPT Health "is designed to support, not replace, medical care. It is not intended for diagnosis or treatment."[19] Such disclaimers appear strategically positioned to avoid classification as a medical device and, by extension, designation as a high-risk AI system under the EU AI Act.[20]

If ChatGPT Health were recognised as both a medical device and a high‑risk AI system, the stringent requirements concerning data quality and bias mitigation set out in the AI Act would apply.[21] These safeguards are particularly crucial in the health domain, where biased or incomplete data can lead to erroneous or unsafe outputs.[22] The risk of inaccurate or misleading advice is heightened by the system’s likely reliance on medical records and data from wellness applications. Such apps frequently contain imprecise or user‑generated measurements, raising questions about reliability.[23] Although the developer asserts that only ‘certified’ applications will be integrated, no criteria for such certification have been disclosed.[24]

Even if ChatGPT Health ultimately avoids classification as a high-risk AI system, it is likely to fall under the rules for general-purpose AI models (GPAI) and may even be considered a model with systemic risk. It will thus be subject to the specific obligations of model evaluation, documentation, and risk mitigation under Articles 53 to 55 of the AI Act as they enter into force.[25]

The transparency obligations under Article 50 of the AI Act will also remain fully applicable. Users must be clearly informed that they are interacting with an AI system rather than a healthcare professional. However, the format, tone, and level of detail provided by ChatGPT Health may blur this distinction in practice. There is a genuine risk that users could interpret the system’s responses as professional medical guidance, potentially leading to self‑diagnosis or self‑medication despite the developer’s warnings.[26] This raises a broader concern: could ChatGPT Health inadvertently contribute to poorer health outcomes by offering inaccurate, incomplete, or overconfident advice while simultaneously discouraging users from seeking timely medical care?

Conclusion: Digital Public Health’s Bleary New Future(s)

ChatGPT Health and similar health-dedicated LLMs will generate a global impact on public health. We predict that such an impact on digital determinants of health will yield both positive and negative outcomes. Such tools will increase the availability of health information; facilitate health care decisions; and empower users to make healthier choices regarding health determinants or adopt better self-care measures.

On the other hand, more information is not always a proxy for good decisions, and there are significant privacy trade-offs in granting LLMs access to individual health data. Digitalisation has also highlighted well-known equity imbalances in accessibility. The use of LLMs in health also raises the risk of a health infodemic, challenging public health communication and the trust relationship between patients and health providers. It is unknown to what extent such LLMs will be able to avoid "hallucinations" and inaccuracies, and whether they will unintentionally contribute to the spread of disinformation, unproven alternative cures, and dangerous trends.

The impact on privacy and other fundamental rights will be significant, and the question remains unanswered: to what extent do current legal and regulatory frameworks provide sufficient protection?


[1] Ana Nordberg is an Associate Professor and senior lecturer at the Department of Law, Lund University.

[2] Petra Holmberg is a postdoctoral researcher at the Department of Law, Lund University.

[3] Sarah de Heer is a doctoral candidate at the Department of Law, Lund University.

[4] OpenAI (2026). Introducing ChatGPT Health. https://openai.com/index/introducing-chatgpt-health/

[5] Mahon, L. (2026). OpenAI Launches ChatGPT Health to review your Health Records. BBC. https://www.bbc.com/news/articles/cpqy29d0yjgo;

Probets, J. (2026). The launch of ‘ChatGPT for health’ is both a threat and an opportunity. HSJ. https://www.hsj.co.uk/technology-and-innovation/the-launch-of-chatgpt-for-health-is-both-a-threat-and-an-opportunity/7040792.article;

Moreau, C. (2026). ChatGPT gears up to tap into users’ health information. Euroactiv. https://www.euractiv.com/news/chatgpt-gears-up-to-tap-into-users-health-information/

[6] OpenAI (n 4).

[7] Kalai, A. T., Nachum, O., Vempala, S. S., & Zhang, E. (2025). Why language models hallucinate. arXiv. https://doi.org/10.48550/arXiv.2509.04664

[8] Gautam, A.R. (2025). Impact of High Data Quality on LLM Hallucinations’ International Journal of Computer Applications (0975 – 8887). 187 (4). https://www.ijcaonline.org/archives/volume187/number4/gautam-2025-ijca-924909.pdf

[9] Google AI mode, 16 January 2026.

[10] Eysenbach, G. & Köhler, C. (2003). What is the prevalence of health-related searches on the World Wide Web? Qualitative and quantitative analysis of search engine queries on the Internet. AMIA Annual Symposium Proceedings, 225–229.

[11] Dress, J. (2019). Google receives more than 1 billion health questions every day. Becker’s Hospital Review. https://www.beckershospitalreview.com/healthcare-information-technology/google-receives-more-than-1-billion-health-questions-every-day/ 

[12] Cardillo, A. (2025). How Many Google Searches Are There Per Day? https://explodingtopics.com/blog/google-searches-per-day

[13] OpenAI (2026). Introducing ChatGPT Health. https://openai.com/index/introducing-chatgpt-health/

[14] Id.

[15] Id.

[16] Id.

[17] See, e.g., Eichenberger, A., Thielke, S. & Van Buskirk, A. (2025). A Case of Bromism Influenced by Use of Artificial Intelligence. AIM Clinical Cases, 4:e241260 (epub 5 August 2025). doi:10.7326/aimcc.2024.1260

[18] Article 2(1) of the Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC. OJ EU, L 117, 5 May 2017, pp. 1–175.

[19] OpenAI (n 13).

[20] Medical device is classified as high-risk AI based on the Article 6(1) of the Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828. OJ EU, L 168, 12 July 2024; Case C‑219/11 Brain Products GmbH v BioSemi VOF and Others [2012] ECLI:EU:C:2012:742.

[21] Chapter III of the AI Act.

[22] Abdelwanis, M., Alarafati, H. K., Tammam, M. M. S., & Simsekler, M. C. E. (2024). Exploring the risks of automation bias in healthcare artificial intelligence applications: A Bowtie analysis. Journal of Safety Science and Resilience, 5(4), 460–469. https://doi.org/10.1016/j.jnlssr.2024.06.001

[23]  Liang, Z. & Ploderer, B. (2020). How Does Fitbit Measure Brainwaves: A Qualitative Study into the Credibility of Sleep-tracking Technologies. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 4 (1), 17-29. https://doi.org/10.1145/3380994

[24] OpenAI (2026). Introducing ChatGPT Health. https://openai.com/index/introducing-chatgpt-health/

[25] See also the Annex to the Communication from the Commission, Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act).

[26] Du, D., Paluch, R., Stevens, G., & Müller, C. (2024). Exploring patient trust in clinical advice from AI‑driven LLMs like ChatGPT for self‑diagnosis. arXiv:2402.07920.

January 23, 2026

This entry was posted in

Posts Swedish Health Law


Patient Injury Compensation for Injuries Caused by AI-Based Medical Devices

This is an AI-generated image. The image was created with the AI tool Microsoft Copilot

by Linnea Sitell*

Introductory Remarks

The growing use of artificial intelligence in health care raises several legal questions. One of them is how patient injury compensation should be handled when patient injuries are caused by AI-based medical devices (in Swedish: medicintekniska produkter, MTP), for example when such devices are used to analyse mammography images. Even though AI technology has shown a strong ability to identify signs of breast cancer, errors will inevitably occur at some point.[1] When they do, and a patient receives an incorrect diagnosis, the law needs to know how the injury is to be handled.

The Medical Devices Regulation and the AI Act

AI-based medical devices are covered by the Medical Devices Regulation (MDR), which imposes high requirements on, among other things, safety, performance, and risk management. The MDR was, however, designed with more "classical" medical technology in mind and is not specifically adapted to the particular characteristics of AI-based products and the risks that the use of AI entails.[2]

AI systems are characterised by a low degree of transparency, often referred to as the "black box" problem.[3] In practice, this can make it impossible to understand how a given input (for example, a mammography image) leads to a given result or output (for example, a diagnosis).[4]

The AI Act adds rules that target the risks posed by AI technology. Among the most central requirements imposed on AI-based medical devices in the highest risk class are requirements on transparency and on logging. Requirements on logs, documentation, and transparency are central to evaluating how a system functions and to identifying any errors.
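
To make the logging requirement concrete, the sketch below shows what a minimal append-only decision log for an AI-based diagnostic tool might look like: each prediction is recorded with a reference to the input, the exact model version, the output, and a timestamp, so that an individual decision can later be reconstructed and audited. The field names and values are illustrative assumptions, not terminology drawn from the AI Act or the MDR.

```python
# Minimal sketch of a decision log for an AI-based diagnostic tool.
# Field names are illustrative; the idea is that high-risk systems must
# record events so that individual results can be traced and audited.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    input_ref: str      # reference to the analysed image, not the image itself
    model_version: str  # exact version, so the decision can be reproduced
    output: str         # e.g. "no suspicious finding"
    confidence: float   # model's reported confidence in the output
    timestamp: str      # UTC time of the decision

def log_decision(log: list[str], input_ref: str, model_version: str,
                 output: str, confidence: float) -> None:
    """Append one JSON record per decision to an append-only audit trail."""
    record = DecisionRecord(input_ref, model_version, output, confidence,
                            datetime.now(timezone.utc).isoformat())
    log.append(json.dumps(asdict(record)))

audit_log: list[str] = []
log_decision(audit_log, "mammogram-0042", "model-1.3.0",
             "no suspicious finding", 0.91)
```

The design choice worth noting is that the log stores a reference to the input rather than the sensitive data itself, and records the model version: without it, a later dispute about a missed diagnosis could not establish which system behaviour was actually in use.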

Patient Injury Compensation for Injuries Caused by an AI-Based Medical Device

When a patient injury is caused by a medical device, compensation may be provided through the care provider's mandatory patient insurance in accordance with the Patient Injury Act (patientskadelagen, PskL).

In a scenario where an AI-based medical device is used to analyse mammography images for suspected signs of breast cancer and returns a false negative result, two types of injury under the Patient Injury Act are primarily at issue:

  • Diagnostic injury (diagnosskada), Section 6, point 3. A diagnostic injury exists when an incorrect diagnosis is made even though the signs of breast cancer present at the time of diagnosis should have been detected by an experienced specialist. In addition, the incorrect diagnosis must have caused harm to the patient.
  • Injury caused by a medical device (equipment injury, materialskada), Section 6, point 2. An equipment injury exists if a safety defect in a medical device, for example an AI-based one, has caused harm to a patient.[5]

If the AI-based medical device is classified as a "product" under the Product Liability Act (produktansvarslagen, PAL), the manufacturer may be liable, provided that a safety defect can be demonstrated.

A problem for injured patients, however, is that both the PskL and the PAL were designed for products and situations far more transparent than today's AI tools.[6] Patients may therefore face difficulties in proving a causal link between an injury and the AI-based medical device. Ultimately, the consequence for patients is that the prospects of compensation are reduced.

Challenges in applying the Patient Injury Act to injuries caused by AI-based medical devices

As noted above, the AI Act introduces rules aimed at preventing injuries caused by AI-based medical devices. The Patient Injury Act, in turn, contains rules on how a compensation claim is to be handled once an injury has occurred. The fact that the PskL is not adapted to medical devices with a very low degree of transparency (such as AI-based ones) presents challenges for an injured patient, which are set out in the following.

The central problem for the patient is the burden of proof. To obtain compensation for a diagnostic injury under the PskL, the patient must prove that, for example, signs of breast cancer were detectable by an experienced specialist at the time of the examination. For compensation for an equipment injury to be available, there must have been a defect in the AI tool and that defect must have caused the injury (for example a missed diagnosis).[7]

Here the "black box problem" becomes particularly apparent. AI systems, especially those based on deep learning and neural networks, can produce both correct and incorrect answers without it being possible to verify afterwards which considerations led to a given assessment. For the patient, who bears the burden of proof under the Patient Injury Act, this makes it very difficult to prove that the system actually made an error, and even harder to prove a causal link between the error and, for example, a missed diagnosis.[8] The result is that the prospect of compensation is, in practice, very limited.

The black box problem also creates challenges specifically related to Section 6, point 3 PskL on diagnostic injury. Since the AI product continuously learns and evolves while remaining largely opaque, it must be assumed to be very difficult for an injured patient to prove what an AI-based medical device knew, or should have known, at the time of diagnosis.

The fact that AI products function as black boxes also creates challenges in relation to Section 6, point 2 PskL on equipment injury. The complexity and continuous development of AI products make it difficult for injured parties to prove how the AI-based medical device functioned, and whether a safety defect existed, at the time the injury occurred.

New EU rules on liability for injuries caused by AI products

The challenges discussed above relate mainly to AI products' lack of transparency, which makes it difficult for the injured party to prove, for example, causation.

At EU level, a revised Product Liability Directive (rPLD) has been adopted. The rPLD aims, among other things, to adapt product liability rules to the technical complexity of many of today's products.[9] The directive contains rules on the disclosure of relevant evidence and a presumption of a causal link between a safety defect and an injury.[10] The latter provision is intended to ease the claimant's burden of proof in particularly complex and hard-to-investigate cases, which can be especially significant where an injury has been caused by an AI product.[11]

Within the EU, attempts have also been made to introduce specific rules for injuries caused by high-risk AI products, such as AI-based medical devices. The proposed AI Liability Directive (AILD) was, however, withdrawn in March 2025.[12] The AILD likewise provided for rules on the disclosure of evidence and a presumption of the provider's non-compliance where relevant evidence was not disclosed. The AILD also contained a presumption rule allowing national courts to presume a causal link between the injury, on the one hand, and the AI model's failure to produce correct output, on the other.[13]

Proposals for a Patient Injury Act adapted to the use of AI-based medical devices

The AILD and the rPLD appear to offer solutions to several of the challenges identified above as central where a patient has been injured by an AI-based medical device. As noted, however, the AILD has been withdrawn. The rPLD, by contrast, has entered into force and will be implemented in Swedish law within the next few years. The rPLD covers only injuries caused by products with safety defects, which means that its implementation in Swedish law will matter only where an injury has been caused by a safety defect. For other injuries, such as diagnostic or equipment injuries under the PskL, implementing the rPLD would make no difference compared with today's rules.[14]

Drawing inspiration from the AILD and the rPLD, and building on the challenges I have identified for patients injured by an AI-based medical device, I present two proposals for how the Patient Injury Act could be adapted to strengthen patients' prospects of compensation for injuries caused by AI-based medical devices.

Disclosure of evidence

For the AI Act's requirements on documentation, logging and transparency to have real significance for an injured patient, the patient must be given access to this documentation. There is therefore a need for rules on the disclosure of evidence.

Swedish law already contains certain rules on procedural disclosure of documents (edition) in the Code of Judicial Procedure. In autumn 2024, the Government appointed a special investigator to examine whether implementing the rPLD in Swedish law entails any extension of the duty of disclosure.[15] Since the rPLD applies only to injuries caused by safety defects, however, any such extension would not necessarily cover situations where an injury has been caused by a "defect" in an AI-based medical device under Section 6, point 2 PskL. This is because the concept of safety defect in the Patient Injury Act is broader than the same concept in the PAL.[16] In light of the challenges relating to Section 6, points 2 and 3 PskL, I consider it warranted to introduce similar rules on the disclosure of evidence for diagnostic and equipment injuries as well. This is necessary to give an injured patient a real opportunity to prove their injury.

For the rules on disclosure of evidence (insofar as they extend the duty of disclosure) to matter in proceedings under the PskL, they must allow the court to order persons other than the parties to disclose evidence. This corresponds to the rule in the proposed AILD and means that a provider or deployer of an AI product can be ordered to disclose material.

Presumption of causation

The so-called black box problem means that causation and defects in an AI-based medical device are almost impossible for the patient to prove. The new rPLD and the proposed AILD lay down presumption rules to ease the injured party's burden of proof. Since the presumption is rebuttable, the burden of proof is in practice reversed. From a patient perspective this is positive. It is also reasonable given the technical complexity of AI products, since a technically knowledgeable manufacturer is presumably better placed than a patient to meet the evidentiary standard. I therefore consider that a presumption rule similar to those in the AILD and the rPLD is needed in the Patient Injury Act to adapt the legislation to the use of AI-based medical devices.


* Linnea Sitell holds a law degree (jur. kand.) from Lund University. This post is based on Linnea's master's thesis "Patientskadeersättning i den artificiella intelligensens tidsålder – En rättsdogmatisk undersökning av säkerhetskrav, rättsliga utmaningar och reformbehov vid användningen av AI-baserade medicintekniska produkter" (2025).

[1] Smer (2020) 'Kort om Artificiell intelligens i hälso- och sjukvården', https://smer.se/wp-content/uploads/2020/06/smer-2020-2-kort-om-artificiell-intelligens-i-halso-och-sjukvarden.pdf [retrieved 2025-10-13], p. 5 ff., and Karolinska Institutet (2025) 'AI istället för röntgenläkare i bröstcancervården? – AI inom medicin och hälsa', originally published in Medicinsk vetenskap no. 3/2020, https://ki.se/forskning/popularvetenskap-och-dialog/popularvetenskapliga-teman/tema-ai-inom-medicin-och-halsa/ai-istallet-for-rontgenlakare-i-brostcancervarden, last updated 9 December 2020 [retrieved 2025-10-12].

[2] Ebers, Martin (2024) 'AI Robotics in Healthcare between the EU Medical Device Regulation and the Artificial Intelligence Act', Oslo Law Review, 11(1), p. 5 f.

[3] Integritetsskyddsmyndigheten (2024) 'Svarta lådan och rätten till information', https://www.imy.se/verksamhet/dataskydd/innovationsportalen/vagledning-om-gdpr-och-ai/gdpr-och-ai/svarta-ladan-och-ratten-till-information/#:~:text=Den%20svarta%20l%C3%A5dan%20och%20informationsplikten,-Den%20metod%20som&text=Metoden%20kan%20behandla%20stora%20m%C3%A4ngder,personer%20att%20kartl%C3%A4gga%20och%20f%C3%B6rst%C3%A5. [retrieved 2025-02-18].

[4] Sullivan, Hannah R. and Schweikart, Scott J. (2019) 'Are current tort liability doctrines adequate for addressing injury caused by AI?', AMA Journal of Ethics, vol. 21, no. 2, p. 160.

[5] The relevant point in time for assessing whether a defect exists is the time when the injury occurred.

[6] Mello, Michelle and Guha, Neel (2024) 'Understanding Liability Risk from Using Health Care Artificial Intelligence Tools', N Engl J Med, Jan 18;390(3), p. 271, and Läkemedelsverket (2023) p. 6 f.

[7] Section 6, points 2 and 3 PskL.

[8] Sullivan, Hannah R. and Schweikart, Scott J. (2019) 'Are current tort liability doctrines adequate for addressing injury caused by AI?', AMA Journal of Ethics, vol. 21, no. 2, p. 160, and Integritetsskyddsmyndigheten (2024) 'Svarta lådan och rätten till information', https://www.imy.se/verksamhet/dataskydd/innovationsportalen/vagledning-om-gdpr-och-ai/gdpr-och-ai/svarta-ladan-och-ratten-till-information/#:~:text=Den%20svarta%20l%C3%A5dan%20och%20informationsplikten,-Den%20metod%20som&text=Metoden%20kan%20behandla%20stora%20m%C3%A4ngder,personer%20att%20kartl%C3%A4gga%20och%20f%C3%B6rst%C3%A5. [retrieved 2025-02-18].

[9] See recital 3 of the PLD and Botero Arcila, Beatriz (2024) 'AI liability in Europe: How does it complement risk regulation and deal with the problem of human oversight?', Computer Law & Security Review, vol. 54, p. 9.

[10] Articles 9 and 10 rPLD.

[11] De Bruyne and Schollaert (2025) 'Article 2(9) Definition: duty of care', in Pehlivan, Ceyhun Necati, Forgó, Nikolaus and Valcke, Peggy (eds.) (2025) AI Governance and Liability in Europe, Alphen aan den Rijn: Wolters Kluwer, p. 316.

[12] Annexes to Commission Work Programme 2025: Moving Forward Together – A Bolder, Simpler, Faster Union, COM(2024) 700 final, p. 26.

[13] Article 3 AILD and Danis, Florence, Hendrix, Gert-Jan and Bellon, Jasper (2025) 'Article 3 Disclosure of evidence and rebuttable presumption of non-compliance', in AI Governance and Liability in Europe (2025), Alphen aan den Rijn: Wolters Kluwer, p. 345 f.

[14] Cf. Article 5 rPLD.

[15] Dir. 2024:127 Nya regler om produktansvar.

[16] Cf. Government Bill Prop. 1995/96:187 Patientskadelag m.m., p. 38 f., and Espersson, Carl and Hellbacher, Ulf (2016) Patientskadelagen: en kommentar m.m., Stockholm: Carl Espersson, p. 93.

November 4, 2025

This entry was posted in Posts Swedish Health Law.

When Mental Illness Meets Sweden’s Sickness Benefit System – Working Capacity Assessments

Lena Enqvist*

In Sweden, mental illness has become the leading cause of sick leave, most often involving so-called common mental disorders such as depression, anxiety, and stress-related conditions.[1] Despite being widespread, these disorders pose particular challenges within the sickness insurance system, especially where medical and legal uncertainties intersect. One reason is that eligibility criteria for sickness benefits (sjukpenning) are built on the assumption that medical, social, and other factors affecting work capacity can be separated and clearly defined. In cases of mental illness, however, such distinctions are rarely straightforward, since causal links and boundaries are often blurred.

Medical and Legal Uncertainties

From a medical perspective, psychiatric disorders often lack the clear biomarkers and test results available in many somatic conditions. Diagnostics rest on symptom clusters rather than underlying mechanisms, and the same diagnosis can manifest very differently across individuals, sometimes making prognosis and assessment of work capacity a challenging task.[2]

From a legal perspective, difficulties stem from the fact that a diagnosis alone is insufficient for entitlement to sickness benefits in the Swedish Social Insurance Code. Coverage is broad, but it only applies where the illness leads to a reduction in work capacity of at least 25 percent. Benefits may then be granted in fixed steps of 25, 50, 75, or 100 percent.[3] These criteria assume that it is possible to separate the degree of capacity loss attributable to illness from labour market, social, or personal factors, and that both the extent and causation of the reduction can be determined. In psychiatric cases, however, that assumption is often fragile.

Disease and Reduced Work Capacity as Interlinked Criteria

In the eligibility assessment, the fundamental criteria of “disease” and “reduced work capacity” in the Swedish Social Insurance Code are thus central.[4] The concept of “disease” rests on a medical foundation, with possible extension to states that, in ordinary language, deviate from the normal life process.[5] Another settled principle is that the loss of working capacity is to be assessed individually, and not against a hypothetical healthy or advantaged person.[6] In psychiatric cases this is crucial as one person with coping strategies and social support may remain able to work, while another with the same diagnosis but lacking such resources may not. As will be seen, such variations expose the tension between the requirement of individual assessment and the tendency to subject work capacity assessments to greater standardisation.

Unlike “disease,” the legal notion of “work capacity” in the Social Security Code is not tied to an external scientific taxonomy.[7] Instead, it sets criteria that establish shifting reference points for assessment, which vary with the length of the individual’s sick leave. Without detailing these reference points, the group of work tasks that the individual’s working ability is assessed against broadens over time – from the person’s own job to the abstract labour market.[8] As a result, the required degree of incapacity increases and the assessment becomes more abstract. This is most pronounced at the final stage, as a main rule after 180 days of sick leave, when capacity is tested against “normally occurring” work.[9] This category includes ordinary jobs where the worker’s capacity can be used fully or nearly fully, with normal performance demands and little or no accommodation for functional limitations or medical problems. A person deemed able to perform such work is not entitled to sickness benefit, regardless of job availability.[10] The Supreme Administrative Court has clarified that this assessment requires reality-based, case-by-case evaluations that take account of labour-market conditions and their development.[11] With regard to psychiatric diagnoses, however, a particularly difficult but still unresolved issue is the tolerance level for impairments of mental function – such as stress sensitivity, slower pace, irritability, or lack of initiative – that is compatible with such “normally occurring” work.

High Reliance on Extra-legal Tools for Work Ability Assessments

Although “disease” and “work capacity” are formally distinct criteria, they are closely intertwined – medically, since it is often difficult to disentangle influencing factors or to quantify the degree of reduced capacity, and legally, since the causation requirement ontologically links the disease to the loss of work capacity within the assessment framework. The Swedish Social Insurance Code, however, provides little guidance on how such assessments should be conducted or how causality should be established in individual cases. To compensate, the Agency has introduced extra-legal tools that supply greater detail and standardisation in work capacity assessments. While this may promote consistency, it simultaneously restricts individualised evaluation, creating a tension with the variability of disease trajectories and leaving persistent evidentiary difficulties.

One such tool for streamlining the work capacity assessment, used by the Swedish Social Insurance Agency (Försäkringskassan) since 2009, is the so-called DFA chain.[12] The chain lacks a normative legal basis and is described as a methodological tool for case officers to assess and link the diagnosis, D, to a functional impairment, F, which, in turn, leads to a concrete activity limitation, A. In cases of mental illness, however, the line between impairment and activity limitations is often hard to draw, since the impairment typically becomes visible in everyday life and work – features that clinical tests and assessments rarely capture.[13] After criticism of the Agency’s heavy emphasis on the DFA chain in its decision-making practice, the Agency clarified in its guidance to administrators that a strict logical link between D, F and A is not required by law.[14] While this signalled an easing of some of the rigidity in the Agency’s assessment practice, the DFA chain nevertheless continues to serve as the default framework in eligibility assessments.

Another tool for streamlining work capacity assessments within sickness benefits is the Swedish National Board of Health and Welfare’s (Socialstyrelsen) insurance-medicine guidelines, which build on the DFA chain and advise physicians on typical sick-leave durations by diagnosis.[15] The guidance for psychiatric diagnoses, however, is uneven, as not all conditions are covered by guidelines.

A 2018 Swedish National Audit Office (Riksrevisionen) review, which examined the decision supports specifically for psychiatric conditions, also found that they were applied inconsistently and relied on a diagnosis-specific format poorly adapted to comorbidity, where the primary diagnosis often changes over time.[16] While the decision supports currently in use typically acknowledge comorbidity for psychiatric diagnoses, they rarely offer operational guidance on how interacting conditions should modify recommended sick-leave durations. In Sweden, the guidelines for exhaustion syndrome have been particularly debated, not least because Sweden alone chose in 2005 to include the diagnosis in its nationally modified ICD-10-SE, thereby giving it tailored decision support in sickness certification.[17] This move illustrates how the National Board of Health and Welfare sought to capture stress-related conditions that otherwise risked falling outside established psychiatric categories. The diagnosis as well as the decision support will, however, be removed in 2028 when Sweden adopts the new ICD-11. Individuals currently certified with exhaustion syndrome will then be reclassified and assessed under other supports, and the Board has been tasked with developing new knowledge and decision supports to address the situation created by the removal of the diagnosis.[18] While extra-legal, the guideline alterations that will follow from the reclassification may thus shape entitlement assessments, as differences in the content and recommendations of decision supports can influence both how physicians prepare medical documentation and how the causal link between disease and reduced work capacity is evaluated. How far-reaching these effects will be, however, remains to be seen.

Fluctuating Capacity Loss and Inflexible Benefit Criteria

Mental conditions vary not only in the degree to which they reduce work capacity, but also in how stable that reduction remains over time. Stress, sleep, treatment, and work environment can shift both the level and the persistence of impairment, and because recovery and rest are often integral to treatment, the actual extent of reduced capacity can be difficult to assess. This variability, however, clashes with the legal design of sickness benefits, which are paid per day and presume a constant degree of incapacity throughout the compensation period.[19] This static arrangement has long restricted the scope for flexible work scheduling in cases of part-time sick leave. Since the 1980s, only narrow exceptions were allowed.[20] A 2022 amendment has, however, expanded the scope for uneven distribution of working hours, now permitted if it does not hinder return to work and no longer requires medical prescription.[21] While this new possible exception does not allow full day-to-day flexibility, it adapts the system to conditions with dynamic impact and softens its temporal rigidity.

A further mitigating feature is that both actual incapacity (i.e. the disease directly prevents work) and therapeutic incapacity (i.e. work should be avoided to protect recovery) are compensable.[22] This adds some flexibility where impairment is hard to objectify, yet the evidentiary demand to specify and causally link disease to reduced capacity remains. The core problem of how the individual can substantiate entitlement under conditions of medical uncertainty is therefore unchanged.

Evidentiary Challenges

The tension between fluctuating illness trajectories, individual effects on working capacity, and a rigid benefit design is particularly apparent in questions of proof. The individual claiming sickness benefits must establish both the existence of the disease and the resulting loss of capacity, with the medical certificate serving as the principal, and often only, evidence (while more extensive documentation is typically required as the absence continues).[23] While the burden of proof has long been debated within the sickness-benefit system, the approach developed by the Swedish Social Insurance Agency in psychiatric cases has attracted particular criticism. Here, the Agency had started to combine the heavy reliance on the DFA chain with a practice of requiring so-called “objective” medical findings for psychiatric diagnoses, which made it especially difficult for claimants to substantiate their cases. As criticism intensified and centred on the fact that the Agency had, in effect, introduced a condition not grounded in the Social Insurance Code, the practice was abandoned in 2019.[24]

In 2023, the Swedish Supreme Administrative Court clarified that entitlement in psychiatric cases does not presuppose findings beyond the patient’s own account – thus rejecting any requirement of objective medical findings. Instead, the Court emphasised that the decisive element is the physician’s professional evaluation of that account and any accompanying observations. At the same time, however, it also signalled that evidentiary demands may increase as sick leave continues.[25] Taken together, the judgment can thus be read as drawing a line between the domains of medicine and law. The physician’s role is to provide a professional assessment, while the law determines its evidentiary value. Yet in practice, this division is less clear. The evaluation of medical evidence cannot be fully separated from medical assessment, since a meaningful evidentiary appraisal requires medical knowledge and interpretive capacity. This challenge is particularly acute in psychiatric cases, where the symptoms that define the diagnosis are often the very features that impair work capacity. The result is a persistent difficulty for the legal system, which must adjudicate claims in a space where medical uncertainty and legal evaluation are inseparably intertwined.

Conclusions

Work capacity assessments in psychiatric illness expose a structural weakness in the Swedish sickness-benefit system. While the Code distinguishes between “disease” and “reduced capacity,” in practice these criteria converge. Psychiatry rarely offers clear causal boundaries, while the law presumes that they can be neatly isolated and graded. Extra-legal instruments such as the DFA chain and decision-support tools have not solved this problem, but have been used in ways that risk transforming methodological aids into de facto legal standards, shifting the evidentiary burden onto the insured, and narrowing the scope for individualised assessment. In turn, such practices risk running counter to the principle that the insured is to be assessed in their existing condition, whereby the same diagnosis may affect work capacity differently for different persons.

Reforms such as the 2022 allowance for uneven work distribution illustrate adaptations of the framework that acknowledge variations in how reduced work capacity may manifest, yet much of it still reflects a static model ill-suited to fluctuating psychiatric conditions. The 2023 Supreme Administrative Court judgment also shows that, in the absence of clear biomarkers, the emphasis shifts toward the physician’s professional assessment of the individual’s account rather than a demand for objective findings. The evidentiary dilemma, however, still underscores the difficulty of keeping medical assessment and legal evaluation apart. Psychiatric symptoms are often the very features that impair work, making objective separation elusive. The legal framework applies equally to physical and mental illness, yet the demand for precise differentiation and causation is especially ill-suited where knowledge is incomplete and influences overlap. If internal guidelines are treated as binding norms, however, decisions risk becoming template-driven and the burdens of proof unreasonable in atypical cases.

The task of law is therefore not to resolve medical uncertainty, but to manage it. This requires clarifying which elements are legally relevant, setting proportionate evidentiary thresholds, and safeguarding space for clinical judgment. Only then can the system strike a balance between fairness to the insured and the legitimacy of legal decision-making.


* Lena Enqvist is an Associate Professor of Law at Umeå University, Sweden. This blog post is based on the article ”Att bedöma arbetsförmåga vid psykisk ohälsa – när det komplexa blir ännu mer komplext”, published in Nordisk socialrättslig tidskrift No 43-44 2025 p. 41-80. The article can be found open access on https://www.lawpub.se/artikel/10.53292/2d91fddb.d8d925b0

[1] Swedish Social Insurance Agency. Tema psykisk ohälsa, https://www.forsakringskassan.se/statistik-och-analys/tema-psykisk-ohalsa; Myndigheten för arbetsmiljökunskap. Riktlinjer för psykisk hälsa på arbetsplatsen, Version 2, 2024, p. 11.

[2] Danielsson, O. Psykiatri: När dagens diagnoser inte räcker till, Medicinsk Vetenskap, No 2, 2023; Official Government Report SOU 2021:6, p. 168 ff.

[3] Chapter 27, Sections 2 and 4 Swedish Social Insurance Code (2010:110) (Socialförsäkringsbalk), SFB.

[4] Chapter 27, Section 25 SFB.

[5] Official Government Report SOU 1944:15, p. 162; Government Bill Prop. 1994/95:147, p. 19 f.; Supreme Administrative Court RÅ 2009:102 (I och II) and Supreme Administrative Court HFD 2023 ref. 57; Vahlne Westerhäll, L. Thorpenberg, S. & Jonasson, M. Läkarintyget i sjukförsäkringsprocessen: styrning, legitimitet och bevisning, Santérus, 2009, p. 96 and 99.

[6] Official Government Report SOU 2009:89, p. 117 ff.

[7] Vahlne Westerhäll, L. Medicinska och försäkringsrättsliga sjukdoms- och arbetsförmågebegrepp – faktiska respektive normativa kriterier, in Dahlin et al (eds.) Festskrift till Elisabeth Rynning. Integritet och rättssäkerhet inom och bortom den medicinska rätten, Iustus, 2023, p 399.

[8] Chapter 27, Sections 46-49 SFB.

[9] Chapter 27, Section 47 SFB.

[10] Supreme Administrative Court RÅ 2008 ref. 15.

[11] Supreme Administrative Court HFD 2018 ref. 51.

[12] Official Government Report SOU 2009:89, p. 189 ff.; Swedish National Audit Office. Bedömning av arbetsförmåga vid psykisk ohälsa – en process med stora utmaningar, RiR 2018:11, 2018 (a), p. 22; Swedish Social Insurance Agency. Sjukpenning, rehabilitering och rehabiliteringsersättning, vägledning 2015:1, Version 19, 2024, p. 270.

[13] Swedish National Audit Office. 2018(a), p. 49 ff.

[14] Swedish Social Insurance Agency. Domsnytt 2019:014.

[15] Swedish National Board of Health and Welfare. Försäkringsmedicinskt beslutsstöd, https://forsakringsmedicin.socialstyrelsen.se/beslutsstod-for-diagnoser/, see headline ”Så ska stödet användas”.

[16] Swedish National Audit Office. Försäkringsmedicinskt beslutsstöd – ett stöd för Försäkringskassan vid psykisk ohälsa? RiR 2018:22, p. 55 ff.

[17] Swedish National Board of Health and Welfare. Frågor och svar: Utmattningssyndrom i ICD-11, https://www.socialstyrelsen.se/statistik-och-data/klassifikationer-och-koder/icd-11/.

[18] Ibid.

[19] Chapter 27, Sections 20 and 46(3) SFB.

[20] FÖD 1986:11; Supreme Administrative Court HFD 2011 ref. 30.

[21] Chapter 27 Section 46(3) SFB; Government Bill Prop. 2021/22:1, Utgiftsområde 10, p. 82.

[22] Government Bills Prop. 178/1953, p. 181 f. and 312/1946, p. 221; Official Government Report SOU 1944:15, p. 20.

[23] Eg. Government Bill Prop. 2002/03:89, p. 22 ff.; Official Government Report SOU 2023:48, p. 13 ff.; Vahlne Westerhäll, L, Thorpenberg, S. & Jonasson M. 2009.

[24] Swedish Social Insurance Agency. Domsnytt 2019:014.

[25] Supreme Administrative Court HFD 2022 ref. 47.

October 12, 2025

This entry was posted in Posts Swedish Health Law.

Bridging Law and Policy: Reflections on Working Inside a Government Inquiry

Moa Dahlin*

This summer, the Swedish Committee of Inquiry on Responsibility for Care (Vårdansvarskommittén) presented its final report to the national government. I had the privilege of serving in its Secretariat as legal adviser – an experience that gave me new insights into how law and policy meet in practice. In this blog, I will share my experience of working on such a task. For a Swedish scholar, this role is quite different from the one we have as teachers and researchers at our universities.

I had previously contributed to inquiries as an expert and as part of reference groups, but working inside a secretariat was something else entirely. The daily rhythm was different: conducting background studies, attending stakeholder meetings, drafting texts, and then observing how those texts were adapted in the political process.

The Inquiry’s Task

The Committee’s mandate was ambitious: to assess whether Sweden should shift from regional to State responsibility (huvudmannaskap) for healthcare.

Sweden’s health system is known for high standards, but it also faces persistent challenges, including unequal access, waiting times, and regional disparities. Would shifting responsibility from the 21 regions to the State solve these problems or create new ones?

After extensive analysis, the Committee concluded that there was insufficient evidence to justify a full State takeover. Partial State responsibility was also rejected as a risk for further fragmentation. Instead, the Committee recommended stronger State governance within the existing structure, particularly in areas where regional variation is unjustified, such as pharmaceuticals, vaccinations, screening, workforce planning, forensic psychiatry, and air ambulance services.

Two Lawyers, Two Roles

One of the most rewarding aspects of the Secretariat work was sharing the legal responsibility with another lawyer, a full-time judge.

At first, I was struck by her capacity: the sheer speed with which she could absorb new material, the detail with which she mastered unfamiliar legal areas, and her ability to keep track of the fine print while drafting large sections of the report. Working alongside her made me reflect on my own role – and helped me see my strengths more clearly.

Unlike her, I was employed only part-time. My other duties at the university – teaching, research, and academic service – meant that I could not (and was not expected to) produce the same volume of detailed investigations. Instead, my contribution was to serve as legal adviser in a broader sense: identifying the key principles, helping the Secretariat see patterns across the material, and finding ways to present the legal terrain so that colleagues from other disciplines and the Committee’s politicians could navigate it.

Her strength lay in precision and speed of analysis. Mine was thematic framing and communication. Together, those roles complemented each other. The combination gave the Secretariat both depth and breadth – detail on the one hand, and a principled narrative on the other.

In hindsight, I realize that this duality was essential. It not only made the legal work stronger, but it also reminded me that lawyers bring different kinds of expertise depending on background, employment terms, and professional culture. Recognizing and valuing those differences was one of the most important lessons I took with me from the inquiry.

Law Meets Other Disciplines

Another learning experience was the way lawyers and non-lawyers approached the task differently.

We lawyers tended to map responsibilities carefully: Who is accountable – the State, the regions, the providers, the individual professionals? Which supervisory authorities exist, and what sanctions can they impose? What are the constitutional dimensions of the State–region relationship?

Our colleagues approached the same problems in a much more open manner. They held countless meetings with regional politicians and directors, asking how they understood huvudmannaskap and how the Swedish regions govern healthcare in practice. They studied reforms in the Nordic countries and, overall, they thought more freely about what “governance” could mean.

This combination – their open, reality-based, forward-looking perspective and our law-based approach – produced rich discussions. Out of these exchanges, we developed four concepts of responsibility in health care: system responsibility, financing responsibility, provision responsibility, and operational responsibility. These categories helped us, and the Committee, to analyse the current system and different reform options more broadly than would have been possible had we relied solely on the narrow legal definition of huvudmannaskap.

I am convinced that we would not have arrived at this analytical framework without our mutual respect for each other’s methods and our willingness to listen across professional cultures.

What Legal Scholars Bring

From this experience, I take with me a renewed conviction that legal scholars have much to contribute inside government inquiries – not only from the outside as commentators.

• We bring structure. We notice unclear concepts, gaps in logic, inconsistencies.

• We think in principles. We can step back and ask: What values are at stake? How does this align with constitutional design and patient rights?

• We work with text. We know that how a problem is described shapes how solutions are imagined.

This does not mean that lawyers decide politics – far from it. The political members of the Committee made the final judgments. But our analysis and clarifications informed those decisions.

That is, I think, the proper way to describe the role: lawyers do not dictate outcomes, but we help ensure that when political decisions are made, they are grounded in clear concepts, principled reasoning, and coherent legal frameworks.

Lessons for the Future

My strongest takeaway is that more legal scholars should consider working inside inquiry committees. Many of the challenges in healthcare – equity, efficiency, legitimacy – are not only political but legal. They touch on constitutional design, administrative responsibility, and the protection of patient rights.

Lawyers should not shy away from these debates. By engaging directly, we can help make reforms more principled, more precise, and ultimately more legitimate.

Bridging law and policy is not always easy – the process can be messy and sometimes frustrating – but it is also inspiring. And I believe it is necessary if we want health care reforms to meet both political and legal standards.

This post is part of my ongoing reflections on law, health policy, and the role of legal scholars in public decision-making. Comments and discussion are warmly welcome.


* Moa Dahlin is an Associate Professor in Public Law at Uppsala University.

September 4, 2025


Is the European Health Data Space Regulation the Odd One Out? Ensuring Data Protection through Product Safety Legislation

Photo by davisuko on Unsplash https://unsplash.com/photos/blue-lemon-sliced-into-two-halves-5E5N49RWtbA

Sarah de Heer*

Introduction

The fragmented legal landscape of healthcare data has long been a challenge both to sharing this type of data and to receiving cross-border healthcare.[1] To address this challenge, the European Health Data Space Regulation (EHDS Regulation) was adopted. While the EHDS Regulation entered into force in March 2025,[2] its implementation will be gradual, with full implementation planned for March 2035.[3] The legislative goals show the two interests that lie at the heart of the EHDS Regulation. The first goal is ensuring the right to the protection of electronic personal health data, while the second is the smooth functioning of the internal market for electronic health record systems (EHR systems) to facilitate secondary use of electronic personal health data.[4] Data protection rules aim to achieve the former objective, while product safety legislation seeks to achieve the latter.

The purpose of this contribution is to explore how the EHDS Regulation combines data protection with product safety legislation.

Protecting the Right to Electronic Personal Health Data

The EHDS Regulation aims to protect fundamental rights, and more specifically the right to the protection of personal data. As such, the EHDS Regulation forms part of EU data protection legislation, which also includes, amongst others, the General Data Protection Regulation (GDPR).[5] The EHDS Regulation complements the application of the GDPR in the healthcare sector as regards personal electronic health data,[6] thereby tailoring GDPR rights to fit the healthcare sector. Individual rights outlined in the EHDS Regulation serve as a means to achieve the legislative objective of enhancing the individual’s control over their electronic personal health data.

The EHDS Regulation specifically discusses two types of these altered data subject rights, namely 1) the rights of access, and 2) the right to data portability.

While the rights of access are already well-established under the GDPR, they may not fit the healthcare context. For instance, the right to access under the GDPR allows the data controller to take a month to decide on a data subject’s request to access their personal data.[7] In the context of healthcare, such a delayed response may negatively impact the individual’s health.[8] The right to access under the EHDS Regulation aims to remove this potentially harmful effect by giving individuals immediate access to the personal electronic health data included in their EHR system.[9] To prevent the controller of electronic health data from being overburdened by this obligation, the EHDS Regulation restricts this right to specific types of personal health information,[10] namely the so-called ‘priority categories’, which include patient summaries and electronic prescriptions.[11] Other examples of these access rights are the individual’s right to rectify health data included in their EHR[12] and their right to restrict access to their electronic health data.[13]

The right to data portability is also cemented in the GDPR.[14] However, under the GDPR this right is restricted to personal data that is provided by the data subject themselves and that is processed based on the data subject’s consent or a contract between the data subject and the controller.[15] Under the EHDS Regulation, the right to data portability gives individuals the right to provide access to their data,[16] to exchange their data with healthcare professionals, and to download their data.[17] Natural persons are to exercise their right to data portability free of charge.

In addition to these individual rights, the EHDS Regulation lays down rules on the use of patients’ electronic health data by healthcare professionals[18] and on the secondary use of this type of data.[19] Secondary use entails that certain electronic health data are used by other actors for purposes other than the provision of healthcare services, including the public interest of public health and occupational health, and scientific research in the health or care sector.[20] As such, the EHDS Regulation aims to protect electronic health data in a threefold manner. The first two categories, namely 1) individual rights, and 2) the use of electronic health data by healthcare professionals, can be grouped under the primary use of electronic health data. The third category is the secondary use of electronic health data.

The Internal Market of Electronic Health Record Systems

The EHDS Regulation establishes uniform requirements for EHR systems as regards their interoperability software and logging software.[21] This uniformity is aimed at ensuring that individuals can effectively rely on their rights of access and their right to data portability.[22] Furthermore, uniform EHR systems facilitate the use of electronic health data by healthcare professionals and data sharing for secondary use.[23]

To ensure uniformity in the internal market of products and services, the European Legislator may use product safety legislation under the New Legislative Framework.[24] This type of product safety legislation encompasses two main procedures for verifying the quality of products or services, namely the conformity assessment procedure and market surveillance.

The conformity assessment procedure is partially implemented in the EHDS Regulation. Manufacturers of EHR systems need to draw up the EU Declaration of Conformity and affix the EHR system with the CE marking of conformity before placing it on the internal market of the European Union.[25] However, although the EHDS Regulation requires EHR systems to be in conformity with essential requirements and common specifications,[26] it does not establish a novel conformity assessment procedure.[27] The EHDS Regulation also introduces market surveillance over EHR systems.[28] Once EHR systems are placed on the market, the market surveillance authority is to monitor their continued conformity with the essential requirements and common specifications. Concretely, these essential requirements and common specifications aim to safeguard the protection of personal data.[29] Furthermore, market surveillance authorities are to evaluate EHR systems that pose a risk to the health, safety or rights of individuals or to the protection of personal data.[30] Both the conformity assessment procedure and market surveillance, as components of product safety legislation, thus ensure compliance with the essential requirements and common specifications – requirements which are themselves aimed at safeguarding the protection of electronic health data.

Nothing New – Fundamental Rights Protection Through Product Safety Legislation

While there has previously been a link between product safety legislation and fundamental rights protection, this link has been more subtle in the past. For instance, the Medical Devices Regulation[31] respects fundamental rights as mentioned in the EU Charter, and specifically human dignity, the integrity of the person, the protection of personal data, the freedom of art and science, the freedom to conduct business, the right to property, and the freedom of the press.[32] The Batteries Regulation[33] requires consideration of human rights in due diligence policies[34] and the identification and assessment of risks of negatively impacting human rights,[35] specifically rights surrounding occupational health and safety[36] and discrimination.[37] Nevertheless, it is evident that these two legislative instruments – as product safety legislation – fall under the New Legislative Framework.[38] This approach can also be seen in the Artificial Intelligence Act,[39] which aims to protect fundamental rights through, for instance, the classification of AI systems based on the risk posed to fundamental rights[40] and the fundamental rights impact assessment.[41] Again, the Artificial Intelligence Act is, at its essence, product safety legislation originating from the New Legislative Framework that aims to protect fundamental rights.

This, however, cannot be said about the EHDS Regulation. Although the EHDS Regulation protects electronic health data through the use of product safety legislation under the New Legislative Framework, the EHDS Regulation does not appear to be based on this Framework. As such, the EHDS Regulation is hybrid legislation situated between data protection legislation and product safety legislation.

The Odd One Out – Data Protection Legislation and Elements of Product Safety Legislation

To answer the question in the title – whether the EHDS Regulation is the odd one out – the following should be stated. While the EHDS Regulation is not the first legislative instrument aimed at protecting fundamental rights through the use of product safety legislation, it has brought fundamental rights protection to a higher level, which makes the EHDS Regulation – together with the Artificial Intelligence Act – the odd one out amongst EU product safety legislation. An explanation for this hybrid model could be that a uniform EHR system is a prerequisite for the effective enjoyment of the individual’s rights. Had the European legislator not harmonised EHR systems, different standards and requirements might have persisted across the Member States, thereby hindering patients’ control over their health data.

There seems to be a trend towards using product safety legislation to protect fundamental rights, which means that we may not have seen the last of this hybrid legislation. However, the adoption of this type of legislation may raise doubts as to the effectiveness of ensuring fundamental rights through product safety legislation. While fundamental rights legislation and product safety legislation both aim to mitigate risks – to fundamental rights and to products, respectively – they take different approaches. The method of fundamental rights legislation is to use a proportionality assessment to determine potential violations in a certain context, which makes the outcome of the assessment highly contextual. Product safety legislation, by contrast, uses a binary assessment: either the product fulfils the product safety requirements or it does not.[42] This ‘either-or’ approach may not be able to accommodate the refined method used in fundamental rights legislation. As such, product safety legislation may not be able to account for the complex nature of the protection of fundamental rights.[43]


* Sarah de Heer is a Doctoral Candidate at the Faculty of Law, Lund University.

[1] European Commission, Proposal for a Regulation of the European Parliament and of the Council on the European Health Data Space COM(2022) 197 final, 7.

[2] Article 105, paragraph 1 EHDS Regulation.

[3] Article 105, paragraphs 2-7 EHDS Regulation.

[4] These two legislative objectives are also reflected in the legal basis upon which the EHDS Regulation is based, namely Article 16 Treaty on the Functioning of the European Union (TFEU) that embeds the right to the protection of personal data and Article 114 TFEU that aims to facilitate the smooth functioning of the internal market, see the Preamble EHDS Regulation.

[5] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L 119/1.

[6] Recital 8 EHDS Regulation.

[7] Article 12(3) and (4) GDPR.

[8] Recital 9 EHDS Regulation.

[9] Article 3(1) EHDS Regulation.

[10] Article 2(2)(a) EHDS Regulation.

[11] Articles 3(1) and 14(1) EHDS Regulation.

[12] Article 6 EHDS Regulation.

[13] Article 8 EHDS Regulation.

[14] Article 20 GDPR.

[15] Article 20(1) GDPR and Recital 14 EHDS Regulation.

[16] Article 7(1) EHDS Regulation.

[17] Articles 3(2) and 7(4) EHDS Regulation, see also Recital 14 EHDS Regulation. Additionally, natural persons have the right to request their personal electronic health data to be transmitted to the social security or reimbursement services sector, see Article 7(3) EHDS Regulation.

[18] Articles 11 and 12 EHDS Regulation.

[19] Chapter IV EHDS Regulation.

[20] The EHDS Regulation lists minimum categories of electronic health data that may be used for the purpose of secondary use, see Article 51 EHDS Regulation. See Article 53(1)(a) and (e) EHDS Regulation.

[21] Article 1(2)(b) EHDS Regulation. See also: Article 25(1) EHDS Regulation.

[22] European Commission, Proposal for a Regulation of the European Parliament and of the Council on the European Health Data Space COM(2022) 197 final, 4.

[23] European Commission, Proposal for a Regulation of the European Parliament and of the Council on the European Health Data Space COM(2022) 197 final, 2.

[24] For more information about the New Legislative Framework, please see https://single-market-economy.ec.europa.eu/single-market/goods/new-legislative-framework_en

[25] Article 30(1)(e) and (f) EHDS Regulation.

[26] Article 30(1)(a) EHDS Regulation.

[27] Nevertheless, where the EHR system also falls within the scope of a medical devices/an in vitro diagnostic medical devices or an AI system, the conformity assessment procedure under the respective legislative instruments should also consider the requirements under the EHDS Regulation. Combining these two administrative procedures is aimed at limiting the administrative burden on the manufacturers of EHR systems, see Recital 42 EHDS Regulation.

[28] Article 30(1)(l) and (m) EHDS Regulation and Article 37(2) EHDS Regulation. The market surveillance authorities for EHR systems included in medical devices, in vitro diagnostic medical devices or high-risk AI systems will be those assigned in the respective Regulation, see Article 43(4) EHDS Regulation.

[29] Recital 46 EHDS Regulation.

[30] Article 44 EHDS Regulation.

[31] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC [2017] OJ L 117/1.

[32] Recital 89 Medical Devices Regulation; Article 1(16) Medical Devices Regulation.

[33] Regulation (EU) 2023/1542 of the European Parliament and of the Council of 12 July 2023 concerning batteries and waste batteries, amending Directive 2008/98/EC and Regulation (EU) 2019/1020 and repealing Directive 2006/66/EC [2023] OJ L 191/1.

[34] Recitals 86 and 87 Batteries Regulation.

[35] Article 50(1)(a) Batteries Regulation.

[36] Point (b)(i) Annex X Batteries Regulation.

[37] Point (b)(iv) Annex X Batteries Regulation.

[38] See also: Recital 25 Medical Devices Regulation.

[39] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 [2024] OJ L 1/144.

[40] Recital 48 Artificial Intelligence Act.

[41] Article 27 Artificial Intelligence Act.

[42] For more information, please see Marco Almada and Nicolas Petit, ‘The EU AI Act: Between the Rock of Product Safety and the Hard Place of Fundamental Rights?’ (2025) Common Market Law Review 85, 105-106.

[43] Similarly, Gornet and Maxwell argue that standards, which form an integral part of product safety legislation, were not originally designed to protect fundamental rights, see Mélanie Gornet and Winston Maxwell, ‘The European approach to regulating AI through technical standards’ (2024) 13 Internet Policy Review 1, 9-10.



Their Best Intentions Don’t Mean Much – Sale of Your Genetic Data was Always on the Cards

Andelka M. Phillips*

At the opening of the Senate Committee on the Judiciary hearing, which commenced on 11 June 2025,[1] Senator Chuck Grassley, Chairman of the Committee, stated:

“Genetic data is the blueprint to a person. It is sensitive, it is personal and in the wrong hands, it is dangerous.” He went on to say that “Data is a weapon, and genetic data is a particularly potent weapon.”[2]

Introduction

Our personal data is everywhere online. From online dating to buying our groceries, our lives are increasingly being made public. Our lives are, in a sense, performed publicly, whether we are aware of this or not. As well as sharing data both publicly and privately, we often share our most sensitive data with companies marketing new services. This is particularly common in the HealthTech space. A prime example is direct-to-consumer genetic testing (aka DTC or personal genomics), which has created a market for DNA tests in the consumer space. It has done so while sitting outside the traditional governance frameworks that apply to DNA tests conducted in a clinical setting. This does involve an element of trust. We trust that these businesses will protect our most sensitive data, but is this trust misplaced? My answer to this question is yes.

But just where is all this data going? How is it being used, and what happens when something goes wrong? This final question is a matter of when, rather than if, as no business can guarantee the security of information, and many industries are experiencing large data breaches.

As well as the popular wearable tech used to track many different aspects of health and wellness and the rise of online fitness classes, specific industries have developed centred around particular types of sensitive data. The consumer genomics industry relies on harvesting consumers’ genetic data together with troves of other forms of their personal data. As I have previously written, this industry has remained largely self-regulated, and the main means of industry governance has been the contracts and privacy policies of DTC companies.[3] The contracts can, in fact, be seen as a form of private legislation imposed on consumers.[4] As was mentioned in the Senate Committee on the Judiciary hearing referenced later in this piece, the reality for most consumers is that they fail to read or even notice both contracts and privacy policies in the online world. In 2016, the Norwegian Consumer Council estimated that the average smartphone contained 250,000 words of terms and conditions.[5] In another example, CHOICE in Australia found that it would take 9 hours just to read Kindle’s terms and conditions.[6] More recently, in 2023, Nord Security estimated that it would take “46.6 hours … to read the privacy policies of the 96 websites Americans typically visit monthly.”[7] Unfortunately, even where consumers do choose to read contracts and policies, they are likely to face challenges in understanding the meaning of the terms contained in these documents. Becher and Benoliel’s study of the readability of the 500 most popular US website terms and conditions found that most of these contracts were written at the same level as academic journal articles and would generally require “more than 14 years of education” to understand.[8] And in my work with Becher, we found that it was possible to get through to the payment screen on several DTC websites without ever viewing the contract or privacy policy.[9]

This piece is linked to two previous blog posts in this series: In safe hands? The protection of privacy in consumer genomics; and Hacking your DNA? Some things to consider before buying a DNA test online. In this follow-up, I consider recent developments related to 23andMe’s bankruptcy proceedings and the impending sale of the company. Previously, the winning bidder for the company was pharma giant Regeneron,[10] with the backup bidder[11] being TTAM Research Institute. TTAM is a nonprofit medical research organisation founded by 23andMe’s former CEO and co-founder Anne Wojcicki.[12] More recently, in a second auction, TTAM won with a bid of $305 million US dollars, with the company announcing “that it has entered into a definitive agreement with former CEO Anne Wojcicki’s TTAM Research Institute for the sale of substantially all the company’s assets…”[13] This new agreement includes a provision that TTAM will abide by the company’s existing privacy policies and allow for account deletion. However, the company’s previous history – including its data breach, the subsequent class actions, the Bankruptcy proceedings, and the US Senate Committee on the Judiciary hearing which commenced on 11 June 2025 (discussed further below[14]) – together with the lawsuit brought by 27 state attorneys-general and the District of Columbia,[15] means that many things for 23andMe and its consumers are in a state of flux. The sale to TTAM will still require approval from the Bankruptcy Court, and the hearing, the lawsuit by the attorneys-general, and a newly proposed Bill, the Don’t Sell My DNA Act,[16] could all impact this sale. I also believe that if the purchase goes ahead, consumers should still be concerned.

I wish to highlight why selling the company to pharma or another entity should not come as a surprise and make a renewed call for improved oversight of this industry. This is an industry which from its beginnings has had sharing and reuse of data at its heart, rather than a focus on protecting the security and privacy of consumers’ data. This sharing comes in many forms. It is not only about partnerships and mergers that a company might enter into, it is also about encouraging consumers to connect with unknown relatives and share other forms of data through a company’s platform. There is value in having a large database and it is not just the digital genetic data that has value, but also the physical samples of spit collected from consumers. More attention also needs to be paid to what is happening to physical samples of saliva that have been stored by the company. As proceedings in the US are ongoing, I am planning further work on this.

23andMe is a market-leading DTC genetics company, possessing one of the largest consumer databases, with approximately 15 million consumers’ data. Since its beginnings in 2006,[17] it has been one of the best-known companies in this space. A onetime unicorn, the company was valued at a market cap of more than $1 billion USD in 2015[18] and later at $6 billion USD.[19] It has also had links with Big Tech since its inception. Its co-founder and former CEO Anne Wojcicki was formerly married to Sergey Brin, co-founder of Google, and Google has invested in the company (with investments of $3.9 million USD in 2007 and $2.6 million USD in 2009).[20] Wojcicki resigned from her position as CEO in March 2025.[21] It should also be noted that Wojcicki’s sister is the former CEO of YouTube.[22] 23andMe is also not the only player in the DTC space that Google has had links with. Another example is the collaboration between Google’s subsidiary Calico and AncestryDNA.[23] I plan to explore more of the competition law issues raised by this industry in future work.

The 23andMe data breach

The earlier blogs mentioned the massive data breach experienced by 23andMe in 2023, which impacted almost half of its consumers – some 6.9 million people, including children – and the subsequent class actions that have followed this breach. Unfortunately, since the provisional approval of a $30 million (USD) settlement in December 2024, the situation has deteriorated further.[24] In March 2025, the company filed for Chapter 11 Bankruptcy protection.[25] Then, in May 2025, an agreement was reached to sell the company, including consumers’ data, to Regeneron,[26] a leading pharmaceutical company. More recently, the agreement 23andMe reached with TTAM in the second auction nullifies the earlier deal with Regeneron.[27] In the aftermath of the data breach and the more recent news of the bankruptcy, many consumers have tried to delete their data, but there have been problems with doing so. This has led the US House Committee on Energy and Commerce to launch an investigation into how the bankruptcy will impact consumers’ data.[28] This has in turn led to a US Senate Committee on the Judiciary hearing commencing on 11 June 2025 – more on this below.

Key points to keep in mind:

Before continuing, there are four points that should be emphasized. Firstly, the breach that impacted 23andMe should not be viewed as limited to the 6.9 million people whose data was compromised; it also affects their wider family groups. Because of the shared nature of DNA, many more millions of people could be impacted over the longer term by this breach.

Secondly, while a settlement had been provisionally approved for the US class actions, it is now likely that consumers may not end up receiving any compensation from it. Compensation at an individual level was always going to be limited under the terms of the settlement, but now the settlement itself has been put on hold and is being challenged by 23andMe’s lawyers.[29] This should further highlight the reality that victims of data breaches are often left out in the cold with very limited options for redress, and this is something that needs to change. While most people’s lives have moved increasingly online in the last two decades, the future risks posed by the many services collecting our most sensitive data need to be taken more seriously.

Thirdly, a sale of 23andMe including its entire database will pose risks not only for all its 15 million consumers, but also for the millions of people to whom they are related.

Finally, it is common for the contracts and privacy policies to contain problematic clauses, which could be challengeable as unfair terms and which raise questions about the validity of consent in the context of consumer genomics, as well as in other HealthTech industries. It is particularly common for companies to grant themselves broad powers to change their terms. This is not unique to 23andMe, but common practice in this industry and an area in need of reform.

Recent developments in the USA

Now turning to recent developments, the future of 23andMe’s database and how its consumers’ data will be used is currently hanging in the balance. The company is based in the US and in light of the publicity around the breach and 23andMe’s financial problems, the US Senate and Congress are now taking an interest in 23andMe’s future. The Bankruptcy Court has also appointed a privacy ombudsman to 23andMe, who “will investigate and report to the court on the security program of the buyer, the potential costs and benefits of the sale to customers, and whether the sale is consistent with 23andMe’s privacy policies and applicable laws.”[30]

This was followed by a number of very recent developments. On 9 June 2025, attorneys-general from 27 US States, together with the District of Columbia, sued the company to prevent the sale of their States’ consumers’ data without those consumers’ consent.[31] This was a bipartisan initiative. Then, on 10 June 2025, the US House Committee on Oversight and Government Reform held a hearing entitled ‘Securing Americans’ Genetic Information: Privacy and National Security Concerns Surrounding 23andMe’s Bankruptcy Sale’.[32]

Subsequently, on 11 June 2025, the US Senate Committee on the Judiciary held a hearing entitled ‘23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy’.[33] In these proceedings, concerns were raised from different sides of the political spectrum, with testimony from Professor I Glenn Cohen and Professor Brook Gotberg.

Senator and Ranking Member Richard Durbin expressed his concern to Joseph Selsavage (the Interim Chief Executive Officer and Chief Financial and Accounting Officer of 23andMe) that future uses of consumers’ data might not be in line with the company’s current policies, in these words:

“2 or 3 buyers removed – your best intentions don’t mean much”

and that “unless we had a federal law relative to this issue that applies to future transactions your best intentions don’t mean much.”[34]

Senator Durbin also raised the example of the HeLa cell line, developed from tumour samples taken from Henrietta Lacks, emphasized the lack of consent or compensation provided to her and her family, and made this comment:

“Part of what’s being sold by 23andMe is a collection of biological samples submitted by consumers who wanted their DNA examined. They may have consented to some use of those samples, but I question how informed it actually was. And there’s no guarantee a new owner won’t change how those samples are used…”[35]

In response, Professor Cohen emphasized in his testimony that 23andMe has not explained why it is not recontacting its consumers to seek consent to the transfer of their data, asking “why aren’t they doing it?”[36]

Senator Josh Hawley also raised concerns about 23andMe’s Privacy Policy, referencing specific sections of the policy. It was noted that the Policy indicates that data can be retained by the company even after a consumer has asked for it to be deleted. This is not surprising; it is in line with my own research on the industry’s contractual terms over the last decade. In my review of 71 companies’ contracts, I found it very common for companies to grant themselves broad powers to change their terms, often without notice.[37] Furthermore, 23andMe’s policies have for years allowed data to be shared with affiliates, which could include all of its previous partners.[38]

A point to remember here is that while the concerns raised in the USA are positive steps in widening the debate about the protection of genetic data specifically, and sensitive data more generally, this is not a matter that affects only American consumers. 23andMe has sold its tests internationally, and there are privacy risks for its consumers on a global scale. Even though the majority of 23andMe’s customers appear to be American, this should not encourage complacency, as many Americans have close relatives in other countries. Likewise, if 23andMe’s bankruptcy and data breach raise national security concerns for Americans, they also raise national security concerns for citizens of other nations.

One further development, which could lead to positive regulatory reform in the USA, is a new bipartisan Bill for a Don’t Sell My DNA Act.[39] This legislation, if enacted, would reform the US Bankruptcy Code. It would improve protection for consumers’ privacy in the USA in three main ways:[40]

  • “Modernizing the Bankruptcy Code to include genetic information in the definition of “personally identifiable information”;
  • Requiring written notice and affirmative consumer consent prior to the use, sale or lease of genetic information during bankruptcy proceedings; and
  • Requiring the trustee or debtor in possession of genetic information to permanently delete any data not subject to a sale or lease.”

I have previously suggested the need for industry-specific legislation, and I believe other amendments to existing law are necessary, but this Bill could at least lead to reform in the context of businesses in financial difficulty that face the prospect of being sold on. Given that much of this industry is based in the USA, there is a vital need for real reform there.

Conclusion

Now is the time for reform! As was mentioned in the ‘23 and You’ hearing, genetic information can be used for a wide range of purposes that may be against the interests of consumers. This is of course an area of future risk, but some of this risk has already become a reality for those 23andMe customers who have been victims of identity theft or had their health information compromised. Other potential risks highlighted in the hearing are the ability to track and locate individuals and their relatives, and the potential to use data to train AI models. Such uses are not far-fetched. The use of Generative AI is expanding in all fields, and there is growing interest in Generative Biology projects, together with legitimate concerns about their risks.[41] Consumers need and deserve better protection.

I have previously written about the need for improved regulation of this industry, both independently and jointly with others.[42] I am hopeful that these proceedings and the Bill will lead to some substantive reforms, but there is a real risk that they will prove too little, too late. We need new legislation in the US to regulate the industry, and we need existing regulators to contribute to reform in this area. We also need international collaboration to improve industry standards, and specifically to improve cyber security practices in relation to genetic data and other forms of sensitive data.

Mandatory codes of conduct, as well as user-friendly model privacy policies and contracts for the industry, would also be beneficial. Model privacy policies and contracts could be developed by existing regulators (in both the consumer protection and data protection spheres) which limit the ways data can be used and allow consumers more control over their most sensitive data. In the US, the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA) could contribute to reform, and the scope of legislation such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA) could be expanded. I do believe, though, that specific legislation applicable to all forms of consumer genomics would be beneficial, as at present ancestry testing and other so-called ‘recreational’ testing often sits outside existing legislation.

Particularly problematic clauses that have been deemed unfair in jurisdictions including the European Union, the United Kingdom, New Zealand and Australia should not be included in contracts targeting consumers in those jurisdictions. Such clauses should also be removed from American consumer contracts if we are to improve the protection of consumers’ rights in this context. Enhancing consumers’ rights to their data, for instance through a consumer data right, would also be welcome, but it is vital that we move towards giving consumers the opportunity to understand risks and benefits and the ability to make informed choices. We need companies to be held accountable, so that consumers are not left without recourse when a data breach occurs. I will end with a final point: while medical research has brought us many benefits, it, like technology itself, is not neutral, and not all research ventures will benefit our most vulnerable communities, who have in fact often been exploited without recompense.

As this article goes live, news has also broken that the UK’s Information Commissioner’s Office (ICO) has announced that it is fining 23andMe “£2.31 million for failing to implement appropriate security measures to protect the personal information of UK users” in the attack it experienced in 2023, which led to the data breach.[43] This follows a joint probe by the ICO and the Canadian Office of the Privacy Commissioner (OPC). In the news statement it has released, the ICO states that “23andMe revealed serious security failings at the time of the 2023 data breach.”[44] This lends further support to the need for reform of security infrastructure and practices throughout the industry.

Furthermore, according to the ICO, the breach has impacted “155,592 UK residents, potentially revealing names, birth years, self-reported city or postcode-level location, profile images, race, ethnicity, family trees and health reports.” Again, as previously noted, the number of people impacted is likely to substantially exceed this figure, given that this information can link to a larger number of family members. The ICO highlights that the impacts on consumers could include surveillance, discrimination or financial loss, and that it “received 12 complaints from consumers”.[45] I plan to write further about this together with the US developments, but add this here to keep the article as current as possible.


* Dr Andelka M. Phillips is an Academic Affiliate, Centre for Health, Law and Emerging Technologies (HeLEX), University of Oxford and Affiliate with the Bioethics Institute Ghent (BIG), Ghent University. https://www.andelkamphillips.com, https://www.law.ox.ac.uk/people/andelka-phillips, https://www.bioethics.ugent.be/our-people/andelkamphillips/

[1] US Senate Committee on the Judiciary, ‘23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy’ – full committee hearing recording available here https://www.judiciary.senate.gov/committee-activity/hearings/23-and-you-the-privacy-and-national-security-implications-of-the-23andme-bankruptcy.

[2] US Senate Committee on the Judiciary, ‘Grassley Opens Judiciary Hearing on the Privacy and National Security Implications of 23andMe Bankruptcy’ Prepared Opening Statement by Senator Chuck Grassley of Iowa, Chairman, Senate Judiciary Committee, ‘23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy’ (11 June, 2025) https://www.judiciary.senate.gov/press/rep/releases/grassley-opens-judiciary-hearing-on-the-privacy-and-national-security-implications-of-23andme-bankruptcy.

[3] AM Phillips, Buying Your Self on the Internet: Wrap Contracts and Personal Genomics (Edinburgh University Press 2019); AM Phillips, ‘Reading the Fine Print When Buying Your Genetic Self Online: Direct-to-Consumer Genetic Testing Terms and Conditions’ (2017) New Genetics and Society 36(3) 273-295. http://dx.doi.org/10.1080/14636778.2017.1352468; AM Phillips, ‘Only a Click Away – DTC Genetics for Ancestry, Health, Love… and More: A View of the Business and Regulatory Landscape’ (2016) 8 Applied & Translational Genomics 16-22; and SI Becher and AM Phillips, ‘Data Rights and Consumer Contracts: The Case of Personal Genomic Services’ in D Clifford, KH Lau, JM Paterson (eds), Data Rights and Private Law (Hart Publishing, 14 December 2023). Earlier draft available at SSRN: https://ssrn.com/abstract=4180967; and forthcoming AM Phillips, ‘Owning me, owning you – How private companies acquire rights in our most intimate data’ in for G Reynolds, A Mogyoros, and T Dagne (eds),Intellectual Property Futures – Exploring the Global Landscape of IP Law and Policy (University of Ottawa Press 2025).

[4] AM Phillips, Buying Your Self on the Internet: Wrap Contracts and Personal Genomics (Edinburgh University Press 2019) p28.

[5] Norwegian Consumer Council, ‘250,000 words of app terms and conditions’ (24 May 2016) https://www.forbrukerradet.no/side/250000-words-of-app-terms-and-conditions/; and see the AppFail campaign page https://www.forbrukerradet.no/appfail-en/.

[6] Consumers’ Federation of Australia, ‘Nine Hours of Conditions Apply *’ (16 March 2017) https://consumersfederation.org.au/nine-hours-of-conditions-apply/.

[7] Nord Security, ‘Reading the privacy policies they encounter monthly would take almost 47 hours’ (13 December 2023) https://nordsecurity.com/press-area/research-americans-would-waste-a-whole-workweek-every-month-if-they-were-to-read-privacy-policies – this is referenced in the Senate Committee on the Judiciary hearing.

[8] S Becher, ‘Research shows most online consumer contracts are incomprehensible, but still legally binding’ The Conversation (4 February 2019) https://theconversation.com/research-shows-most-online-consumer-contracts-are-incomprehensible-but-still-legally-binding-110793; and U Benoliel and SI Becher, ‘The Duty to Read the Unreadable’ (January 11, 2019) 60 Boston College Law Review 2255 (2019), Available at SSRN: https://ssrn.com/abstract=3313837  or http://dx.doi.org/10.2139/ssrn.3313837.

[9] SI Becher and AM Phillips, ‘Data Rights and Consumer Contracts: The Case of Personal Genomic Services’ in D Clifford, KH Lau, JM Paterson (eds), Data Rights and Private Law (Hart Publishing, 14 December 2023).

[10] Regeneron https://www.regeneron.com/ ; M Liebergall, ‘Pharma co. buys 23andMe and its DNA vault for $256 million’ Morning Brew (20 May 2025) https://www.morningbrew.com/stories/2025/05/20/pharma-co-buys-23andme-for-256-million ; R Winkler, ‘23andMe’s Fall From $6 Billion to Nearly $0’ The Wall Street Journal (31 January 2024). https://www.wsj.com/health/healthcare/23andme-anne-wojcicki-healthcare-stock-913468f4.

[11] Rylee Kirk, ‘23andMe Customers Did Not Expect Their DNA Data Would Be Sold, Lawsuit Claims’ The New York Times (10 June 2025) https://www.nytimes.com/2025/06/10/business/23andme-data-lawsuit.html#:~:text=The%20genetic%2Dtesting%20company%2C%20which,the%20data%20without%20express%20consent; NAAG Client States et al v. 23andMe Holding Co. et al, Case No. 25-04035, United States Bankruptcy Court for the Eastern District of Missouri, Eastern Division https://www.doj.state.or.us/wp-content/uploads/2025/06/Dkt-1-Complaint.pdf

[12] TTAM Research Institute https://ttamresearchinstitute.org/.

[13] Staff Reporter, ‘Wojcicki, TTAM Research Institute’s $305M Offer Wins Bidding for 23andMe in Second Auction’ GenomeWeb (13 June 2025) https://www.genomeweb.com/business-news/wojcicki-ttam-research-institutes-305m-offer-wins-bidding-23andme-second-auction.

[14] US Senate Committee on the Judiciary, ‘23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy’ – full committee hearing recording available here https://www.judiciary.senate.gov/committee-activity/hearings/23-and-you-the-privacy-and-national-security-implications-of-the-23andme-bankruptcy.

[15] Rylee Kirk, ‘23andMe Customers Did Not Expect Their DNA Data Would Be Sold, Lawsuit Claims’ The New York Times (10 June 2025) https://www.nytimes.com/2025/06/10/business/23andme-data-lawsuit.html#:~:text=The%20genetic%2Dtesting%20company%2C%20which,the%20data%20without%20express%20consent; Case 25-04035 https://www.doj.state.or.us/wp-content/uploads/2025/06/Dkt-1-Complaint.pdf; and NAAG Client States et al v. 23andMe Holding Co. et al (9 June 2025) https://www.pacermonitor.com/public/case/58476865/NAAG_Client_States_et_al_v_23andMe_Holding_Co_et_al

[16] US Senate Committee on the Judiciary, ‘Grassley, Cornyn Introduce Bipartisan Bill to Safeguard Consumers’ Genetic Data After 23andMe Bankruptcy Sparks Privacy Concerns’ (27 May 2025) https://www.judiciary.senate.gov/press/rep/releases/grassley-cornyn-introduce-bipartisan-bill-to-safeguard-consumers-genetic-data-after-23andme-bankruptcy-sparks-privacy-concerns; and see  S.1916 – Don’t Sell My DNA Act, S.1916 — 119th Congress (2025-2026) https://www.congress.gov/bill/119th-congress/senate-bill/1916/text/is.

[17] 23andMe, ‘23andMe at 16’ (28 April 2022) https://blog.23andme.com/articles/23andme-turns-16

[18] AM Phillips, Buying Your Self on the Internet: Wrap Contracts and Personal Genomics (Edinburgh University Press 2019) p11 citing Aaron Krol, ‘What comes next for direct-to-consumer genetics?’ (Bio IT World, 2015) http://www.bio-itworld.com/2015/7/16/what-comes-next-direct-consumer-genetics.html 

[19] Michael Levenson, ‘23andMe to Be Bought by Biotech Company for $256 Million’ The New York Times (19 May 2025) https://www.nytimes.com/2025/05/19/business/regeneron-pharmaceuticals-23andme-data.html

[20] BBC News, ‘Google invests in genetics firm’ (22 May 2007) http://news.bbc.co.uk/2/hi/business/6682451.stm; Larry Dignan, ‘Google goes biotech, invests in 23andMe’ ZDNET (22 May 2007) https://www.zdnet.com/article/google-goes-biotech-invests-in-23andme/ ; FIERCE Biotech, ‘Google hands $2.6M to 23andMe’ FIERCE Biotech (19 June 2009) https://www.fiercebiotech.com/biotech/google-hands-2-6m-to-23andme

[21] See 23andMe Holding Co., et al. Case No. 25-40976-357, United States Bankruptcy Court for the Eastern District of Missouri, Eastern Division  https://www.moeb.uscourts.gov/23andme-holding-co-information and also see https://www.pacermonitor.com/public/case/57373210/23andMe_Holding_Co; Ashley Capoot, ‘23andMe files for bankruptcy, Anne Wojcicki steps down as CEO’ CNBC (24 March 2025) https://www.cnbc.com/2025/03/24/23andme-files-for-bankruptcy-anne-wojcicki-steps-down-as-ceo.html

[22] Shiona McCallum, ‘YouTube CEO Susan Wojcicki steps down after nine years’ BBC (18 February 2023) https://www.bbc.com/news/technology-64675997

[23]AM Phillips, Buying Your Self on the Internet: Wrap Contracts and Personal Genomics (Edinburgh University Press 2019) p124, citing Erin Brodwin, ‘A collaboration between Google’s secretive life-extension spinoff and popular genetics company Ancestry has quietly ended’ Business Insider (1 August 2018) http://uk.businessinsider.com/google-calico-ancestry-dna-genetics-aging-partnershipended-2018-7?r=US&IR=T; GenomeWeb Staff Reporter, ‘AncestryDNA, Calico to Collaborate on Genetics of Human Longevity’ GenomeWeb (21 July 2015) https://www.genomeweb.com/business-news/ancestrydna-calico-collaborategenetics-human-longevity

[24] Alder, ‘23andMe Settles Data Breach Lawsuit for $30 Million’ The HIPAA Journal (16 September 2024) https://www.hipaajournal.com/23andme-class-action-data-breach-settlement/; A Bronstad, ‘Judge Approves 23andMe’s $30M Data Breach Settlement – With Conditions’ The Recorder (6 December 2024) https://www.law.com/therecorder/2024/12/06/judge-approves-23andmes-30m-data-breach-settlement—with-conditions/; and In re 23ANDME, Customer Data Sec. Breach Litig., 24-md-03098-EMC (N.D. Cal. Dec. 4, 2024) https://casetext.com/case/in-re-23andme-customer-data-sec-breach-litig-3/case-details.

[25] W Grantham-Philips, ‘23andMe files for Chapter 11 bankruptcy as co-founder and CEO Wojcicki resigns’ Associated Press (25 March 2025) https://apnews.com/article/23andme-chapter-11-bankruptcy-wojcicki-resigns-9827549d9171a537e76f60cb950d1823; A Zilber, ‘DNA testing pioneer 23andMe files for bankruptcy as concerns mount over data privacy of 15M customers’ The New York Post (24 March 2025) https://nypost.com/2025/03/24/business/dna-firm-23andme-files-for-bankruptcy/; and Attorney General Bonta, ‘Attorney General Bonta Urgently Issues Consumer Alert for 23andMe Customers’ (Press Release 21 March 2025) https://oag.ca.gov/news/press-releases/attorney-general-bonta-urgently-issues-consumer-alert-23andme-customers.

[26] Regeneron, ‘Regeneron Enters into Asset Purchase Agreement to Acquire 23andMe® for $256 Million; Plans to Maintain Consumer Genetics Business and Advance Shared Goals of Improving Human Health and Wellness’ (Press Release, 19 May 2025) https://newsroom.regeneron.com/news-releases/news-release-details/regeneron-enters-asset-purchase-agreement-acquire-23andmer-256.

[27] Staff Reporter, ‘Wojcicki, TTAM Research Institute’s $305M Offer Wins Bidding for 23andMe in Second Auction’ GenomeWeb (13 June 2025) https://www.genomeweb.com/business-news/wojcicki-ttam-research-institutes-305m-offer-wins-bidding-23andme-second-auction.

[28] Anthony Ha, ‘Congress has questions about 23andMe bankruptcy’ TechCrunch (19 April 2025) https://techcrunch.com/2025/04/19/congress-has-questions-about-23andme-bankruptcy/; see the letter from Representatives Brett Guthrie, Gus Bilirakis, and Gary Palmer to 23andMe https://d1dth6e84htgma.cloudfront.net/04_17_2025_E_and_C_Letter_to_23and_Me_5c8d4032a7.pdf.

[29] C Loizos, ‘23andMe customers notified of bankruptcy and potential claims — deadline to file is July 14’ TechCrunch (11 May 2025) https://techcrunch.com/2025/05/11/23andme-customers-notified-of-bankruptcy-and-potential-claims-deadline-to-file-is-july-14/ ; A Raine, ‘Rule 23 And ME: The Problem With Class Action Lawsuits’ NULJ (22 February 2023) https://www.thenulj.com/nuljforum/classaction.

[30] Christi Guerrini and Amy McGuire, ‘The 23andMe Bankruptcy: Privacy Considerations and a Call to Action (Part 2)’ The Petrie Flom Centre Bill of Health (7 May 2025) https://petrieflom.law.harvard.edu/2025/05/07/the-23andme-bankruptcy-privacy-considerations-and-a-call-to-action-part-2/; and Dietrich Knauth, ‘23andMe will have court-appointed overseer for genetic data in bankruptcy’ Reuters (1 May 2025) https://www.reuters.com/sustainability/boards-policy-regulation/23andme-will-have-court-appointed-overseer-genetic-data-bankruptcy-2025-04-29/.

[31] Rylee Kirk, ‘23andMe Customers Did Not Expect Their DNA Data Would Be Sold, Lawsuit Claims’ The New York Times (10 June 2025) https://www.nytimes.com/2025/06/10/business/23andme-data-lawsuit.html#:~:text=The%20genetic%2Dtesting%20company%2C%20which,the%20data%20without%20express%20consent; Case 25-04035 https://www.doj.state.or.us/wp-content/uploads/2025/06/Dkt-1-Complaint.pdf; and NAAG Client States et al v. 23andMe Holding Co. et al (9 June 2025) https://www.pacermonitor.com/public/case/58476865/NAAG_Client_States_et_al_v_23andMe_Holding_Co_et_al.

[32] House Committee on Oversight and Government Reform, ‘Securing Americans’ Genetic Information: Privacy and National Security Concerns Surrounding 23andMe’s Bankruptcy Sale’ (10 June 2025) – full committee hearing available here https://oversight.house.gov/hearing/securing-americans-genetic-information-privacy-and-national-security-concerns-surrounding-23andmes-bankruptcy-sale/; also see House Committee on Oversight and Government Reform, ‘Wrap Up: Congress Taking Action to Ensure the Safety of Americans’ Personal DNA Data’ (Press Release, 10 June 2025) https://oversight.house.gov/release/wrap-up-congress-taking-action-to-ensure-the-safety-of-americans-personal-dna-data/

[33] US Senate Committee on the Judiciary, ‘23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy’ – full committee hearing recording available here https://www.judiciary.senate.gov/committee-activity/hearings/23-and-you-the-privacy-and-national-security-implications-of-the-23andme-bankruptcy; and see Senator Chuck Grassley, ‘Grassley Opens Judiciary Hearing On The Privacy And National Security Implications Of 23andMe Bankruptcy’ (prepared opening statement, 11 June 2025) https://www.grassley.senate.gov/news/remarks/grassley-opens-judiciary-hearing-on-the-privacy-and-national-security-implications-of-23andme-bankruptcy.

[34] US Senate Committee on the Judiciary, ‘23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy’ – this is quoted from the video recording of the full committee hearing available here https://www.judiciary.senate.gov/committee-activity/hearings/23-and-you-the-privacy-and-national-security-implications-of-the-23andme-bankruptcy.

[35] US Senate Committee on the Judiciary, ‘23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy’ – this is quoted from the video recording of the full committee hearing available here https://www.judiciary.senate.gov/committee-activity/hearings/23-and-you-the-privacy-and-national-security-implications-of-the-23andme-bankruptcy.

[36] Ibid.

[37] AM Phillips, Buying Your Self on the Internet: Wrap Contracts and Personal Genomics (Edinburgh University Press 2019) pp182-7.

[38] M Sullivan, ‘23andMe has signed 12 other genetic data partnerships beyond Pfizer and Genentech’ (14 January 2015) VentureBeat https://venturebeat.com/2015/01/14/23andme-has-signed-12-other-genetic-data-partnerships-beyond-pfizer-and-genentech/  ; Christine Lagorio-Chafkin, ‘23andMe Exec: You Ain’t Seen Nothing Yet’ (7 January 2015) Inc http://www.inc.com/christine-lagorio/23andMe-newpartnerships.html.

[39] US Senate Committee on the Judiciary, ‘Grassley, Cornyn Introduce Bipartisan Bill to Safeguard Consumers’ Genetic Data After 23andMe Bankruptcy Sparks Privacy Concerns’ (27 May 2025). https://www.judiciary.senate.gov/press/rep/releases/grassley-cornyn-introduce-bipartisan-bill-to-safeguard-consumers-genetic-data-after-23andme-bankruptcy-sparks-privacy-concerns; and see  S.1916 – Don’t Sell My DNA Act, S.1916 — 119th Congress (2025-2026) https://www.congress.gov/bill/119th-congress/senate-bill/1916/text/is.

[40] US Senate Committee on the Judiciary, ‘Grassley, Cornyn Introduce Bipartisan Bill to Safeguard Consumers’ Genetic Data After 23andMe Bankruptcy Sparks Privacy Concerns’ (27 May 2025) https://www.judiciary.senate.gov/press/rep/releases/grassley-cornyn-introduce-bipartisan-bill-to-safeguard-consumers-genetic-data-after-23andme-bankruptcy-sparks-privacy-concerns .

[41] Katrina Costa, ‘AI and the future of generative biology’ Sanger Science (17 October 2024) https://sangerinstitute.blog/2024/10/17/ai-and-the-future-of-generative-biology/ ; Jim Thomas, ‘Black Box Biotech’ Briefing Paper African Centre for Biodiversity (ACB), together with Third World Network (TWN) and ETC Group (September 2024) https://www.etcgroup.org/content/black-box-biotechnology; M Wang, et al, ‘A call for built-in biosecurity safeguards for generative AI tools’ (2025) Nat Biotechnol https://doi.org/10.1038/s41587-025-02650-8.

[42] AM Phillips, Buying Your Self on the Internet: Wrap Contracts and Personal Genomics (Edinburgh University Press 2019);  SI Becher and AM Phillips, ‘Data Rights and Consumer Contracts: The Case of Personal Genomic Services’ in D Clifford, KH Lau, JM Paterson (eds), Data Rights and Private Law (Hart Publishing, 14 December 2023). Earlier draft available at SSRN: https://ssrn.com/abstract=4180967; I jointly presented at PrivacyCon – AM Phillips and J Charbonneau, ‘Giving away more than your genome sequence?:Privacy in the Direct-to-Consumer Genetic Testing Space’ (https://www.ftc.gov/policy/public-comments/2015/10/09/comment-00057) American Federal Trade Commission’s PrivacyCon (January 2016).

[43] ICO, “23andMe fined £2.31 million for failing to protect UK users’ genetic data” (News, 17 June 2025) https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2025/06/23andme-fined-for-failing-to-protect-uk-users-genetic-data/; see also  the ICO, Penalty Notice – 23andMe, Inc (5 June 2025) https://ico.org.uk/media2/kclbljpo/23andme-penalty-notice.pdf.

[44] ICO, “23andMe fined £2.31 million for failing to protect UK users’ genetic data”.

[45] Ibid; also see Privacy Laws & Business, “ICO fines DNA testing company 23andMe £2.31 Million”. http://xlpkz.mjt.lu/nl3/2sxB3-wx1hD9J_y4S4EAIQ?m=AV8AAHBbhp4AAc523IYAAR7sV1sAAYAyHtEAnRiUAA6KKgBoUY-i9J7fb3fNT-W-MuZPZ_dHEwAOYZc&b=bd2b381c&e=744bebfc&x=IQqvNdhRZblt2qg1LKXuRZ-FIUDAgEu6z6keowWxBJ8.


AI in Healthcare and the Liability Vacuum in EU Law

AI-generated image created using Microsoft Copilot, 2025

Petra Holmberg*

A Technological Crossroads

Artificial intelligence (AI) poses a range of potential threats, and concerns about them have become more visible in recent years.[1] In 2023, the Future of Life Institute issued an open letter urging a pause in AI development.[2] In 2024, the Center for AI Safety published a statement equating AI-related risks with those of pandemics and nuclear war.[3] These calls for caution gained traction largely due to the support of prominent AI researchers and industry leaders.

Amid these warnings, AI has been introduced into critical sectors, healthcare in particular. This raises an essential question: is the deployment of AI in healthcare a ticking time bomb or a gateway to revolutionary medical advancement? The answer lies partly in how effectively legal frameworks can ensure safety, liability, and trust in AI systems.

The EU’s Regulatory Response

Recognising the risks and opportunities posed by AI, the European Commission proposed, and later adopted, the world’s first comprehensive legal framework for AI: the Artificial Intelligence Act (AI Act).[4] The Commission justified this legislative move by stressing the need for “trustworthy AI” that upholds safety, health, fundamental rights, and democratic values.[5] Although existing legislation provided some protection, it was not sufficient to address the specific challenges that AI systems can pose.[6]

Under the AI Act, systems that could significantly impact individuals’ health and safety, such as AI-powered medical devices, are classified as “high-risk.” These high-risk systems must meet the strictest safety and transparency standards, ensuring that their use aligns with the values enshrined in the EU Charter of Fundamental Rights.[7] Although the AI Act introduces strong preventive measures, questions remain about how effectively it addresses liability when AI systems malfunction.

Trust is paramount in healthcare, where errors can have life-altering consequences. Accordingly, the concept of “trustworthy AI” has been framed around safety, particularly patient safety, and legal responsibility for harm caused.[8] Given how sensitive the use of AI to improve patient health is, safety guarantees for patients are essential.

Challenge of Liability Guarantees

The European approach to liability in AI-powered medical devices is complex. It integrates traditional product liability principles based on the Directive on liability for defective products with new considerations brought by AI’s unpredictability and opacity.[9] The established model of European product liability law strives for a balanced allocation of risks between manufacturers and users.[10] However, AI challenges this equilibrium.

Medical AI systems, particularly those based on deep learning, often operate to varying degrees as “black boxes”. Their decision-making processes are not fully transparent, even to their developers. As a result, the traditional concept of “defect,” typically applied to physical flaws in products, becomes difficult to define in an algorithmic context. This lack of transparency complicates efforts to establish a standard for defectiveness or to assign fault in the event of harm.[11] This situation has sparked a debate over the adequacy of existing liability frameworks and the need for a new legal paradigm. The current benchmarks for liability do not reflect AI’s evolving behaviour.[12] A reimagined liability regime is, therefore, essential to closing the gaps that AI technologies have opened.

The AI Act reflects an awareness that different AI systems pose various levels of risk. AI-powered medical devices, which may directly influence diagnoses or treatment decisions, are considered high-risk due to their potential to infringe on patients’ health and safety.[13] Importantly, safety and liability are regulated by distinct legal mechanisms. While the AI Act imposes rigorous safety standards for high-risk AI systems, it cannot completely eliminate the risk of harm. As such, a clear liability framework is needed to complement preventive regulation.[14]

Yet, enforcing liability for AI-caused harm is anything but straightforward. At its core lies the black box problem: many deep machine learning algorithms and other advanced AI algorithms are inherently non-transparent technologies.[15] This opacity makes it difficult, sometimes even impossible, for patients to prove the causal link. Patients bear the burden of proving not only that they were harmed but also that the AI system was defective and directly caused the harm, an often impossible task.[16]

The Withdrawn Directive – A Missed Opportunity

To address this problem, the European Commission proposed the Artificial Intelligence Liability Directive.[17] It aimed to facilitate compensation claims by introducing a presumption of a causal link in specific situations involving high-risk AI systems. Under Article 4 of the proposed directive, a presumption of causality would arise when:

  1. The manufacturer, or a person for whose behaviour the manufacturer is responsible, failed to comply with a duty of care.
  2. It was reasonable to assume that an error in the AI system contributed to the output (or lack thereof).
  3. The patient could demonstrate that the AI system’s output (or failure) caused harm.[18]

Had it been enacted, the directive would have represented a landmark in AI liability for patients harmed by high-risk AI systems.

However, the European Commission surprisingly withdrew the proposal from its 2025 work programme, citing a lack of foreseeable agreement.[19] This explanation was met with scepticism, not least because it was given even before the rapporteur’s report had been published. The European Parliament’s rapporteur, Axel Voss, criticised the decision, stating: “Big Tech firms are terrified of a legal landscape where they could be held accountable… Instead of standing up to them, the Commission has caved.”[20]

The withdrawal of the directive does not mean patients are without protection. If harm arises from medical malpractice, national laws still apply.[21] However, when the AI system itself is defective, patients find themselves in a legal grey zone, as the AI Act and existing product liability rules offer only limited recourse.

The revised Directive on liability for defective products from 2024 includes provisions specific to software-based products, acknowledging the complexities of AI. Notably, Article 9 introduces a presumption of defectiveness when proving causality is difficult and the harm likely stems from a product defect. While promising on paper, this provision lacks detailed requirements: it delegates the final decision to national courts, which must determine whether an AI system is technically or scientifically complex enough for the presumption to apply. This opens the door to inconsistent outcomes across the EU, where one Member State may find an AI system too complex while another does not. Manufacturers may then choose to market their products only in Member States with lower liability exposure, undermining the EU’s goal of a harmonised internal digital market. This also contradicts the promise made by Ursula von der Leyen at the AI Action Summit that the AI Act would provide clearer regulatory requirements for businesses, both for users and, above all, for manufacturers.[22]

Conclusion

The decision to withdraw the AI Liability Directive marks a significant setback in Europe’s efforts to regulate artificial intelligence. It weakens the AI Act’s core ambition – to foster trustworthy AI – and undermines the EU’s pledge to ensure high safety standards and liability.

Patients are left vulnerable without a unified legal mechanism to address liability for harm caused by high-risk AI systems. The question of liability is left to national courts without precise criteria to apply in such cases, which creates an additional administrative burden. This move contradicts the AI Act’s fundamental purpose and weakens the newly adopted legislation. In my opinion, the high-risk classification thereby loses one of the regulation’s fundamental purposes, namely to guarantee enforceable liability for AI manufacturers. If EU leaders are serious about becoming a global leader in ethical AI, they must revisit the question of liability. Trustworthy AI cannot exist without a transparent liability mechanism. Hopefully, future legislative efforts will address this void and restore confidence for both patients and manufacturers.


* Petra Holmberg is a postdoctoral researcher at the Department of Law, Lund University.

[1] Cerf M. and Waytz A. (2023). If you worry about humanity, you should be more scared of humans than of AI. Bulletin of the Atomic Scientists, 79(5), 289–292.

[2] Future of Life. Pause Giant AI Experiments: An Open Letter. (22 March 2023). Retrieved (30 April 2025): Pause Giant AI Experiments: An Open Letter – Future of Life Institute

[3] Center for AI Safety. Statement on AI Risk. (2024). Retrieved (30 April 2025): Statement on AI Risk | CAIS

[4] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828. OJ L, 2024/1689.

[5] Article 1(1) AI Act.

[6] European Commission. Shaping Europe´s digital future – AI Act. Retrieved (30 April 2025): AI Act | Shaping Europe’s digital future

[7] European Union. “Charter of Fundamental Rights of the European Union.” Official Journal of the European Union C83, vol. 53, European Union, 2010, p. 380.

[8] World Health Organization. (2021). Ethics and governance of artificial intelligence for health: WHO guidance Executive summary. ISBN 978-92-4-003740-3.

[9] Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC. OJ L, 2024/2853.

[10] Haftenberger A. and Dierks C. (2023). Legal integration of artificial intelligence into internal medicine: Data protection, regulatory, reimbursement and liability questions. Med (Heidelb), 64(11), 1044–1050.

[11] Schneeberger D., Stöger, K. and Holzinger, A. (2020). The European Legal Framework for Medical AI. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2020. Lecture Notes in Computer Science, vol 12279. Springer.

[12] Duffourc MN. and Gerke S. (2023). The proposed EU Directives for AI liability leave worrying gaps likely to impact medical AI. NPJ Digit Med, 6(1):77.

[13] Article 6 AI Act.

[14] Shavell S. (1984). Liability for harm versus regulation of safety. Journal of Legal Studies, 13(2), 357–374.

[15] Statens Medicinsk-Etiska Råd. Kort om Artificiell intelligens i hälso- och sjukvården. (2022). Retrieved (30 April 2025): smer-2020-2-kort-om-artificiell-intelligens-i-halso-och-sjukvarden.pdf

[16] Article 10 Directive on liability for defective products.

[17] Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). COM/2022/496 final.

[18] Article 4(1) AI Liability Directive.

[19] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Commission work programme 2025 Moving forward together: A Bolder, Simpler, Faster Union. COM/2025/45 final.

[20] IAPP. European Commission withdraws AI Liability Directive from consideration. (12 February 2025). Retrieved (30 April 2025): European Commission withdraws AI Liability Directive from consideration | IAPP

[21] Article 168(7) TFEU.

[22] IAPP. European Commission withdraws AI Liability Directive from consideration. (12 February 2025). Retrieved (30 April 2025): European Commission withdraws AI Liability Directive from consideration | IAPP


What is the current status of drug addicts in South Korea, and what are the policy issues for social rehabilitation?

Song-Hee Lee*

1. The need for social rehabilitation services for drug addicts in South Korea

Recently, the number of drug users has been increasing rapidly in many countries, including South Korea and Sweden, and drugs have become a major social problem, with the South Korean government declaring a ‘war on drugs.’ According to the Ministry of Justice of South Korea, the number of drug offenders reached 9,984 in 2014 and had increased to 27,611 as of December 2023.[1]

Drug-related crimes, especially among the younger generation, are on the rise in South Korea. According to a 2020 survey by the Ministry of Food and Drug Safety on public perceptions of the seriousness of drug-related crime, people in their twenties showed the lowest level of awareness of its seriousness.[2] In addition, the side effects of illegal, excessive, and duplicative prescribing of medical drugs in Korea were also found to be serious, with one in 2.7 Koreans reporting that they had used anesthetics, painkillers, or appetite suppressants.[3] This indicates that recidivism among drug users in Korea has reached a point where criminal punishment alone is no longer sufficient, and it underscores the need to strengthen policies on prevention, treatment, and social rehabilitation for drug addicts.

This study aims to examine the current situation of drug addiction in South Korea, which has worsened in recent years, and to suggest relevant policies and social rehabilitation services for drug addicts in Korea, as well as some implications for future drug addiction prevention and social rehabilitation in Sweden.

2. Current Status of Drug Addicts in Korea

Current Status of Drug Addicts

According to the “2023 White Paper on Narcotics Crime” by the Supreme Prosecutors’ Office of Korea, the number of drug offenders remained at about 10,000 per year from 1999 to 2002, during the IMF crisis. The number then rose to 11,916 in 2015 and has continued to increase since. The main reason for this increase is believed to be the creation of an environment in which not only people with a history of drug use but also members of the general public who have never used drugs can easily purchase drugs via the Internet and social media.

For this reason, the number of young South Korean drug offenders in their 20s and 30s has been increasing, and as of December 2023 they accounted for 54.5% of all drug offenders. Moreover, drug offences continue to rise as the supply of drugs through illegal online platforms becomes more active, with even South Korean teenagers easily purchasing drugs on the darknet.[4]

Drug Offenders’ Recidivism and Causes of Crime

Drug addiction is highly likely to recur after even a single use. The recidivism rate for drug offenders in South Korea was 36% in 2021, 35.0% in 2022, and 32.8% in 2023. This high rate reflects the fact that opportunities for treatment and rehabilitation are not adequately provided.[5] Looking at the causes of drug-related crime in South Korea in 2023 by drug type, cannabis cases were most often attributed to addiction (18.4%) and curiosity (15.6%), excluding unspecified causes, while narcotics cases were most often attributed to curiosity (19.2%) and unknown reasons (11.0%). Future prevention education and programs will therefore need to tailor their content and approach to these causes, depending on the type of drug.

3. Drug-related policies and social rehabilitation services in South Korea

Drug-related laws and policies in South Korea

Until the 1990s, the South Korean government focused mainly on supply control and promoted policies emphasising strong punishment. South Korea’s representative drug-related laws include the Narcotics Control Act, the Special Act on Prevention of Illegal Trade in Narcotics, the Criminal Act, and the Act on Aggravated Punishment, etc. for Specific Crimes. Since the late 1990s, when drug addiction emerged as a serious social issue in South Korea, the government has sought comprehensive measures, including demand suppression policies, for example by forming the “Drug Countermeasures Council” under the Prime Minister (Prime Minister’s Decree No. 739, April 25, 2019).

However, medical institutions for drug addicts in Korea remain inadequate. Although the number of drug treatment and protection institutions had expanded to 24 as of 2023, the number of designated beds stood at only 292.

Case of drug addiction rehabilitation service support[6]

Currently, the representative organization helping drug addicts with social rehabilitation in Korea is the Korea Anti-Drug Movement Headquarters, a foundation established on April 22, 1992, by the Korean Pharmaceutical Association.

The foundation is based on Article 51-2 of the Narcotics Control Act. On this basis, it carries out public awareness and education activities for the prevention of narcotics and drug abuse, research and studies, treatment and rehabilitation, social welfare services for reintegration, international exchanges with international non-governmental organizations and groups, and the establishment and operation of counseling centers for narcotics and drug abuse staffed by experts and volunteers. Its main projects include the healthy development and protection of children and adolescents, education programs for the professionalization of the women’s workforce, and other tasks delegated by the Minister of Food and Drug Safety regarding the management of narcotics. In particular, since 2024 the organization has played various roles in expanding the Korea Anti-Drug Movement Headquarters’ Addiction Rehabilitation Centers nationwide.

As drug abuse has risen in recent years, the Korea Anti-Drug Movement Headquarters had established 17 addiction rehabilitation treatment centers (Together One Step Centers) across the country by 2024. These centers implement a “judicial–treatment–rehabilitation linkage model” that supports the reintegration of drug offenders into society. Each center provides psychological support, such as recovery support programs and recovery experience counseling, for everyone who needs help with drug addiction. It also offers customized social rehabilitation programs for addicts, monitoring services for drug-free management after a program ends, and programs for the families of drug addicts, and it provides services in cooperation with treatment hospitals, residential facilities, and related organizations in the region. The organization also operates a 24-hour drug helpline to help people experiencing drug problems receive support.

4. Recommendations

South Korea’s drug policy is still at an early stage. Until now, it has focused on crackdowns and arrests, and although some rehabilitation policies have been put in place, they remain insufficient. Nevertheless, based on South Korea’s efforts to address its growing drug problem, I would like to reflect on some lessons from the Korean experience.

First, intensive drug prevention education is needed for the younger generation, especially high-risk youth, in various forms both online and offline. For those who have already used drugs, social rehabilitation programs need to be established so that treatment and rehabilitation services can be provided early. As drug abuse rises in South Korea, the Ministry of Health and Welfare and the Ministry of Food and Drug Safety are focusing on training drug addiction treatment specialists, a step towards the better education and training of specialists that is needed all over the world.

Second, multidimensional policies should be established to help drug users with treatment, rehabilitation, and social adaptation. In South Korea, treatment, education, and rehabilitation counseling for drug users are provided at centers across the country, but educational and counseling programs for their families are still lacking. In addition, although the focus is on training narcotics specialists, it has been pointed out that there are as yet no additional measures to support their treatment conditions or working environment.

In Sweden, drug-related information and prevention education are provided by various organizations, including the Public Health Agency and local governments. To prevent recidivism and promote social integration, treatment and social rehabilitation services should reach not only drug users but also their families, experts, and community service providers. Policies should therefore be promoted, and services provided, in cooperation with various authorities, local governments, and private institutions. To this end, support policies such as improved treatment and incentives for multidisciplinary professionals should also be put in place.


* Song-Hee Lee is a Research Fellow, Ph.D., at the Social Welfare Policy Research Center, Seoul Welfare Foundation.

[1] Supreme Prosecutors Office. 2023 Drug Control in Korea; Supreme Prosecutors Office: Seoul, Republic of Korea, 2024.

[2] Lee, S.H., Baik, H. and Kim, J. W. Comparison of Seoul’s Drug Addiction Policy and Social Rehabilitation Service Development with Overseas Cases; Seoul Metropolitan City Seoul Welfare Foundation: Seoul, Republic of Korea, 2023.

[3] Baik, H., Kim, S., Hong, H., Lee, J., Shin, Y. J. Exploring the Influencing Factors of Entry into Social Rehabilitation Services through the Recovery Support Experience of Recovery Counselors Working in the Area of Drug Rehabilitation Services : Using Focus Group Interview. J. Korea Contents Assoc. 2023, 23, 610–620.

[4] Ibid.

[5] Lee, S. H., Baik, H. and Kim, J. W. Comparison of Seoul’s Drug Addiction Policy and Social Rehabilitation Service Development with Overseas Cases; Seoul Metropolitan City Seoul Welfare Foundation: Seoul, Republic of Korea, 2023.

[6] Korean Drug Prevention Headquarters homepage (2025), https://www.drugfree.or.kr/
