Ana Nordberg,[1] Petra Holmberg[2] & Sarah de Heer[3]
Introducing ChatGPT Health
On January 7, 2026, OpenAI announced the upcoming launch of ChatGPT Health, described to consumers as ‘a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health’.[4] The announcement immediately set off alarm bells over its legal and public health implications.[5]
It will likely take some time before the tool becomes widely available to all users and subscribers. The announcement instructs ChatGPT subscribers to sign up for a waitlist to access the tool once it becomes available: an undisclosed ‘small number’ of current subscribers (on the ChatGPT Free, Go, Plus, or Pro plans) will be selected as beta users to ‘refine the experience’ and will likely be asked to provide feedback and report problems prior to a broader public launch.
Beta users are restricted to those residing outside the European Economic Area, Switzerland, and the United Kingdom, likely owing to the need to navigate regulatory constraints imposed by the GDPR, the Medical Devices Regulation (MDR) and the AI Act (and similar national legislation). However, OpenAI promises ‘to expand access and make Health available to all users on web and iOS in the coming weeks’,[6] indicating that we should expect a global launch before the spring.
It is undisputed that, if used responsibly, such tools have the potential to lower the threshold for seeking health information and medical advice, supporting individuals in taking charge of their own health and facilitating disease prevention and self-care. On the other hand, this is a worrying development given the extent of the privacy concerns and regulatory issues involved, including data bias and transparency. This piece briefly identifies and explores the areas that raise legal and public health concerns.
Market-driven Tool
Thematically dedicated Large Language Models (LLMs) are a natural evolution of general-purpose tools like ChatGPT. After all, one of the main criticisms of general-purpose tools is their propensity to produce plausible but false statements (‘hallucinations’) and inaccurate results. Smaller, dedicated models are likely to be less problematic, particularly when combined with retrieval-augmented generation (RAG) and fine-tuning,[7] since they are trained on more carefully curated, higher-quality data.[8]
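To make the technique concrete, the following is a minimal, dependency-free sketch of the RAG pattern: retrieve the passages from a curated corpus that best match the user’s question, then generate an answer conditioned on them. The toy corpus, the bag-of-words scoring, and the generate() stub are illustrative assumptions on our part, not OpenAI’s actual pipeline.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The corpus, the scoring, and the generate() stub are illustrative only.
import math
import re
from collections import Counter

CORPUS = [
    "Ibuprofen is a non-steroidal anti-inflammatory drug (NSAID).",
    "Adults are generally advised to drink water regularly during the day.",
    "Seasonal influenza vaccines are updated every year.",
]

def bow(text: str) -> Counter:
    """Lower-cased bag-of-words representation of a text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(CORPUS, key=lambda doc: cosine(q, bow(doc)), reverse=True)[:k]

def generate(query: str, passages: list[str]) -> str:
    """Stand-in for the LLM call: a real system would condition on the passages."""
    return f"Q: {query}\nGrounded in: {' '.join(passages)}"

print(generate("What kind of drug is ibuprofen?",
               retrieve("What kind of drug is ibuprofen?")))
```

Grounding the generation step in retrieved, curated text is what makes a dedicated health model less prone to hallucination than a model answering from its parameters alone.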
Health and wellness topics have been among the most frequently searched since the dawn of the internet. When asked about the proportion of health-related and wellness searches and keywords since Google’s launch in 1998, Google AI mode responded that in the early years health queries represented a very small fraction of daily searches globally, but that their weight has steadily increased; today, more than 1 in every 20 searches is health-related.[9] A 2003 study estimated that, by 2002, 4.5% of all web searches were health-related.[10] In recent years, this share has increased, stabilising at approximately 5% to 7% of all daily queries.[11]
Assuming these statistics are accurate, anywhere from 70,000 health questions per minute to more than 1 billion per day are searched globally. Added to these impressive numbers is a growing volume of queries on other search engines and open LLM tools and services such as ChatGPT, as well as the use of apps and connected devices with chat or search functionality, all of which are taking an increasingly prominent role in the world’s daily digital life.
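As a back-of-the-envelope check (taking the cited figures at face value and using a 1,440-minute day), the two estimates sit roughly an order of magnitude apart:

```latex
% Converting the cited figures to a common unit (1 day = 1,440 minutes)
70{,}000 \;\text{queries/min} \times 1{,}440 \;\text{min/day} \approx 1.0 \times 10^{8} \;\text{queries/day}
\\
10^{9} \;\text{queries/day} \div 1{,}440 \;\text{min/day} \approx 6.9 \times 10^{5} \;\text{queries/min}
```

The gap between the two underlines how rough these global estimates are, which is why we present them as a range rather than a single figure.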
In 2025, Google held a 79.88% share of the search engine market, a decline of 9.33% compared with historical data over the last 10 years. While Google remains dominant, the data suggest a steady decline in the use of traditional search engines since the public release of ChatGPT in late 2022 and the launch of Bing Chat in early 2023.[12] Since then, Google’s search engine has also integrated AI tools and functionalities, and we predict that the use of LLMs to answer health-related queries is, and will continue to be, a growing global trend. OpenAI, for example, announces that ‘over 230 million people globally ask health and wellness-related questions on ChatGPT every week’, with health-related prompts featuring prominently among the most common ways users deploy ChatGPT.[13]
At first glance, this new tool may seem like a safer place for users to obtain qualified medical advice, especially given OpenAI’s assurance that ‘You can securely connect medical records and wellness apps to capture conversations in your own health information, so that responses are more relevant and useful to you.’[14] However, regardless of this ‘good intentions’ rhetoric, the product features and announced safeguards raise more questions than answers.
Privacy Concerns
In its initial rollout, the product will not be accessible within the European Union. Early observations suggest that geo-blocking is in place, with EU/EEA users receiving notifications that the service is unavailable in their region. Such restrictions, however, are easily circumvented and are likely to be temporary. Although the timeline remains uncertain, the overall trajectory points toward a global launch and eventual integration with existing and emerging digital health products and services.
A central feature of the tool is user-driven personalisation. Individuals will be able to link the system to their medical records and to data streams from wellness applications such as Apple Health, Function, and MyFitnessPal.[15] The company claims that ChatGPT Health will incorporate ‘additional, layered protections designed specifically for health—including purpose-built encryption and isolation to keep health conversations protected and compartmentalised.’ Yet the mechanisms through which these protections will operate remain opaque. It is also unclear whether the system will retain pseudonymised or anonymised data, and whether an EU/EEA-specific version will meet the requirements of the GDPR.
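OpenAI has not disclosed how this encryption and isolation would work. One generic pattern consistent with the description is per-user key isolation, in which each user’s health conversations are encrypted under a key of their own, so that data cannot be read across accounts. The sketch below (using the third-party Python cryptography package) is our illustration of that general pattern, not OpenAI’s implementation; the in-memory key store and the function names are assumptions.

```python
# Illustrative per-user encryption and isolation, NOT OpenAI's mechanism.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

# One symmetric key per user; a real system would use a key-management service.
keys = {"alice": Fernet.generate_key(), "bob": Fernet.generate_key()}

def store(user: str, message: str) -> bytes:
    """Encrypt a health conversation under the user's own key."""
    return Fernet(keys[user]).encrypt(message.encode())

def read(user: str, token: bytes) -> str:
    """Decrypt a stored conversation; only the matching user's key works."""
    return Fernet(keys[user]).decrypt(token).decode()

token = store("alice", "blood pressure reading: 130/85")
print(read("alice", token))  # succeeds: alice holds the right key
try:
    read("bob", token)       # fails: compartmentalisation between users holds
except InvalidToken:
    print("bob cannot decrypt alice's data")
```

Whether OpenAI’s ‘purpose-built encryption and isolation’ follows this or any other recognised pattern is exactly the kind of detail that remains undisclosed.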
At present, the company is adamant that health data uploaded by or accessed from users will not be used to train the general ChatGPT model. Nonetheless, the possibility remains that, after the 30-day window during which users may correct or delete their data, such information could be used to train ChatGPT Health itself.[16] The intention to leverage chat history from the general ChatGPT model to enhance the accuracy and relevance of ChatGPT Health’s outputs raises significant concerns. If the system incorporates wellness and medical data of uncertain origin, accuracy or reliability, users may be exposed to flawed assumptions and inferences, and to potentially harmful guidance.[17] This risk extends beyond data protection to the safety and well-being of users.
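For illustration only, the announced 30-day correction window amounts to a simple retention rule. A minimal sketch, assuming each record carries an upload timestamp (the field names are hypothetical):

```python
# Hypothetical sketch of the announced 30-day correction/deletion window.
from datetime import datetime, timedelta, timezone

WINDOW = timedelta(days=30)

def now() -> datetime:
    return datetime.now(timezone.utc)

records = [
    {"id": 1, "uploaded": now() - timedelta(days=45)},  # window closed
    {"id": 2, "uploaded": now() - timedelta(days=5)},   # still correctable
]

def user_can_still_correct(record: dict) -> bool:
    """True while the user may still correct or delete the record."""
    return now() - record["uploaded"] < WINDOW

for r in records:
    print(r["id"], "correctable" if user_can_still_correct(r) else "window closed")
```

The open question flagged above is what happens to a record once that window closes: deletion, continued storage, or use in training ChatGPT Health.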
Another unresolved issue concerns data storage. It is not yet known whether personal health data will remain stored on users’ devices or be transferred to company servers, the locations of which have not been disclosed.
Finally, users engaging with the general ChatGPT interface may be redirected to the health-specific product when raising medical queries. This strategy may be intended to address ongoing criticism of the handling of sensitive personal data, but it also underscores the need for transparency, regulatory oversight, and robust safeguards before such a system becomes widely available.
Certification, Data Bias and Transparency
A concern even more pressing than the protection of sensitive data is the nature of the health-related information the system will provide to users. The company emphasises that the product has been developed with input from healthcare professionals, and the tool plainly falls within the definition of a medical device set out in Article 2 of the MDR.[18] However, the company simultaneously disclaims any diagnostic function. According to the developer, ChatGPT ‘Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment.’[19] Such disclaimers appear strategically positioned to avoid classification as a medical device and, by extension, designation as a high-risk AI system under the EU AI Act.[20]
If ChatGPT Health were recognised as both a medical device and a high‑risk AI system, the stringent requirements concerning data quality and bias mitigation set out in the AI Act would apply.[21] These safeguards are particularly crucial in the health domain, where biased or incomplete data can lead to erroneous or unsafe outputs.[22] The risk of inaccurate or misleading advice is heightened by the system’s likely reliance on medical records and data from wellness applications. Such apps frequently contain imprecise or user‑generated measurements, raising questions about reliability.[23] Although the developer asserts that only ‘certified’ applications will be integrated, no criteria for such certification have been disclosed.[24]
Even if ChatGPT Health ultimately avoids classification as a high-risk AI system, it is likely to fall under the classification rules for general-purpose AI (GPAI) models and may even be considered a GPAI model with systemic risk. It would thus be subject to the specific obligations of model evaluation, documentation, and risk mitigation under Articles 53 to 55 of the AI Act as they enter into force.[25]
The transparency obligations under Article 50 of the AI Act will also remain fully applicable. Users must be clearly informed that they are interacting with an AI system rather than a healthcare professional. However, the format, tone, and level of detail provided by ChatGPT Health may blur this distinction in practice. There is a genuine risk that users could interpret the system’s responses as professional medical guidance, potentially leading to self‑diagnosis or self‑medication despite the developer’s warnings.[26] This raises a broader concern: could ChatGPT Health inadvertently contribute to poorer health outcomes by offering inaccurate, incomplete, or overconfident advice while simultaneously discouraging users from seeking timely medical care?
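In practice, part of the Article 50 duty could be discharged by a disclosure attached inseparably to every answer. A minimal sketch of that idea (the wording and the function name are our assumptions, not a legally prescribed formula):

```python
# Hypothetical Article 50-style transparency notice prepended to every reply.
DISCLOSURE = ("You are interacting with an AI system, not a healthcare "
              "professional. This is not a medical diagnosis.")

def respond(model_output: str) -> str:
    """Attach the AI disclosure so it cannot be separated from the answer."""
    return f"{DISCLOSURE}\n\n{model_output}"

print(respond("Mild tension headaches can often be managed with rest and hydration."))
```

Whether such notices actually stop users from treating the output as professional medical guidance is, of course, precisely the empirical question raised above.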
Conclusion: Digital Public Health’s Bleary New Future(s)
ChatGPT Health and similar health-dedicated LLMs will have a global impact on public health. We predict that this impact on the digital determinants of health will yield both positive and negative outcomes. Such tools will increase the availability of health information, facilitate healthcare decisions, and empower users to make healthier choices regarding health determinants and to adopt better self-care measures.
On the other hand, more information is not always a proxy for good decisions, and there are significant privacy trade-offs in granting LLMs access to individual health data. Digitalisation has also highlighted well-known equity imbalances in accessibility. The use of LLMs in health also raises the risk of a health infodemic, challenging public health communication and the relationship of trust between patients and health providers. It remains unknown to what extent such LLMs will be able to avoid ‘hallucinations’ and inaccuracies, and whether they will unintentionally contribute to the spread of disinformation, unproven alternative cures, and dangerous trends.
The impact on privacy and other fundamental rights will be significant, and the question remains unanswered: to what extent do current legal and regulatory frameworks provide sufficient protection?
[1] Ana Nordberg is an Associate Professor and senior lecturer at the Department of Law, Lund University.
[2] Petra Holmberg is a postdoctoral researcher at the Department of Law, Lund University.
[3] Sarah de Heer is a doctoral candidate at the Department of Law, Lund University.
[4] https://openai.com/index/introducing-chatgpt-health/
[5] Mahon, L. (2026). OpenAI Launches ChatGPT Health to review your Health Records. BBC. https://www.bbc.com/news/articles/cpqy29d0yjgo;
Probets, J. (2026). The launch of ‘ChatGPT for health’ is both a threat and an opportunity. HSJ. https://www.hsj.co.uk/technology-and-innovation/the-launch-of-chatgpt-for-health-is-both-a-threat-and-an-opportunity/7040792.article;
Moreau, C. (2026). ChatGPT gears up to tap into users’ health information. Euractiv. https://www.euractiv.com/news/chatgpt-gears-up-to-tap-into-users-health-information/
[6] Supra note 4.
[7] Kalai, A. T., Nachum, O., Vempala, S. S., & Zhang, E. (2025). Why language models hallucinate. arXiv. https://doi.org/10.48550/arXiv.2509.04664
[8] Gautam, A.R. (2025). Impact of High Data Quality on LLM Hallucinations. International Journal of Computer Applications (0975 – 8887), 187(4). https://www.ijcaonline.org/archives/volume187/number4/gautam-2025-ijca-924909.pdf
[9] Google AI mode, queried 16 January 2026.
[10] Eysenbach, G. & Köhler, C. (2003). What is the prevalence of health-related searches on the World Wide Web? Qualitative and quantitative analysis of search engine queries on the Internet. AMIA Annual Symposium Proceedings, 225-229.
[11] Dress, J. (2019). Google receives more than 1 billion health questions every day. Becker’s Hospital Review. https://www.beckershospitalreview.com/healthcare-information-technology/google-receives-more-than-1-billion-health-questions-every-day/
[12] Cardillo, A. (2025). How Many Google Searches Are There Per Day? https://explodingtopics.com/blog/google-searches-per-day
[13] https://openai.com/index/introducing-chatgpt-health/
[14] Id.
[15] Id.
[16] Id.
[17] See e.g. Eichenberger, A., Thielke, S., & Van Buskirk, A. (2025). A Case of Bromism Influenced by Use of Artificial Intelligence. AIM Clinical Cases, 4, e241260. [Epub 5 August 2025]. doi:10.7326/aimcc.2024.1260
[18] Article 2(1) of the Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC. OJ EU, L 117, 5 May 2017, pp. 1–175.
[19] Supra note 4.
[20] A medical device is classified as a high-risk AI system under Article 6(1) of the Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828. OJ EU, L 168, 12 July 2024; Case C-219/11 Brain Products GmbH v BioSemi VOF and Others [2012] ECLI:EU:C:2012:742.
[21] Chapter III of the AI Act.
[22] Abdelwanis, M., Alarafati, H. K., Tammam, M. M. S., & Simsekler, M. C. E. (2024). Exploring the risks of automation bias in healthcare artificial intelligence applications: A Bowtie analysis. Journal of Safety Science and Resilience, 5(4), 460–469. https://doi.org/10.1016/j.jnlssr.2024.06.001
[23] Liang, Z. & Ploderer, B. (2020). How Does Fitbit Measure Brainwaves: A Qualitative Study into the Credibility of Sleep-tracking Technologies. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 4 (1), 17-29. https://doi.org/10.1145/3380994
[24] https://openai.com/index/introducing-chatgpt-health/
[25] See also: Annex to the Communication to the Commission – Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act).
[26] Du, D., Paluch, R., Stevens, G., & Müller, C. (2024). Exploring patient trust in clinical advice from AI‑driven LLMs like ChatGPT for self‑diagnosis. arXiv:2402.07920.
