What does the emergence of Large Language Models like ChatGPT mean for healthcare and AI?

Over the past few months, we’ve regularly been asked what the emergence of large language models (LLMs) like ChatGPT means for Ada and healthcare more broadly. 

So how do generative AI platforms compare with the AI solutions already being used in healthcare, like Ada, and how might they fit into the ecosystem in the future?

AI in healthcare today

AI in healthcare isn’t new – we’ve been doing it at Ada for more than 10 years. We believe AI can help solve pressing global issues around staff shortages and burnout, availability of care, costs, and the ever-growing burden of long-term conditions, to name a few.

At Ada, we use AI to help people better understand their symptoms and support better care experiences, both through our enterprise partnerships and our consumer app. Ada’s consumer app is the leading AI symptom checker and health assessment solution. Our technology, which is based on probabilistic reasoning over a curated medical knowledge base, represents the state of the art for automated, accurate, and safe symptom assessment.
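
To give a flavor of what probabilistic reasoning over a knowledge base looks like – as a simplified, hypothetical sketch, not Ada’s actual engine or data – the example below scores a handful of invented conditions against observed symptoms using Bayes’ rule with a naive independence assumption:

```python
# Minimal sketch of probabilistic reasoning over a tiny, invented
# knowledge base. Illustrative only -- not Ada's actual model or data.

# P(symptom | condition), curated by hand in a real system.
KNOWLEDGE_BASE = {
    "common cold": {"cough": 0.6, "fever": 0.2, "headache": 0.3},
    "influenza":   {"cough": 0.8, "fever": 0.9, "headache": 0.6},
    "migraine":    {"cough": 0.05, "fever": 0.05, "headache": 0.95},
}

# Prior prevalence of each condition (also invented).
PRIORS = {"common cold": 0.5, "influenza": 0.3, "migraine": 0.2}

def posterior(observed: dict[str, bool]) -> dict[str, float]:
    """Score each condition with Bayes' rule, assuming symptoms are
    conditionally independent given the condition (naive Bayes)."""
    scores = {}
    for condition, likelihoods in KNOWLEDGE_BASE.items():
        p = PRIORS[condition]
        for symptom, present in observed.items():
            p_s = likelihoods[symptom]
            p *= p_s if present else (1.0 - p_s)
        scores[condition] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

print(posterior({"fever": True, "cough": True, "headache": False}))
# Influenza comes out most likely under these made-up numbers.
```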

Currently, most deep learning-based AI solutions in healthcare are used for image analysis – for example, analyzing MRI scans to identify potential cancers and acting as decision support tools. These AI platforms are widespread and increasingly effective.

What does ChatGPT do?

LLMs such as ChatGPT are a form of generative AI. These models use neural networks trained on huge data sets and produce outputs by repeatedly predicting the most probable next word, forming natural-sounding responses.
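
As a heavily simplified illustration of that next-word mechanic – the vocabulary and scores below are invented, and a real model computes them with a neural network over billions of parameters – the sketch converts model scores (logits) into probabilities and samples one token:

```python
import math
import random

# Toy illustration of how an LLM picks the next word: the model assigns
# a score (logit) to every token in its vocabulary, the softmax turns
# scores into probabilities, and one token is sampled. The vocabulary
# and logits here are invented placeholders.

vocab  = ["fever", "cough", "banana", "headache"]
logits = [2.1, 1.7, -3.0, 0.9]  # pretend network output for some prompt

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax(logits)
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)
```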

In the case of ChatGPT, this training set comprises vast amounts of text and code scraped from the internet before the end of 2021. While this includes verified CDC health guidance, it unfortunately means that trusted information ranks alongside non-expert discussion, hearsay, and potential misinformation. As a result, the model can confidently produce answers that sound plausible but are misleading, because they are grounded in these mixed sources. More worryingly, LLMs are capable of “hallucinating” – making up responses out of thin air because they seem plausible to the machine, with no basis in the actual training data. Future regulation of healthcare AI will need to address not only the bias reflected in training data sets but also the severe risks these hallucinations pose in healthcare settings.

This means that LLMs like ChatGPT are currently poorly suited to healthcare, as there is no way to provide transparency and quality assurance.

ChatGPT’s rapid rise in popularity has been down to its ease of use and wide-ranging applications – virtually anyone can use it to talk about virtually any topic. But this smooth-talking ability conceals a lack of truthfulness and accuracy and a propensity to “hallucinate.” In healthcare, where the stakes can quite literally be life and death, this isn’t appropriate.

How is Ada different?

While ChatGPT provides answers that sound natural and plausible, it is not well suited to logic-based reasoning. It is also a ‘black box’ system: its decision-making process is completely opaque.

As a certified medical device company, Ada operates in a strictly regulated domain. We therefore take a much more cautious and clinically driven approach to AI, focusing on the highest levels of accuracy and safety. Ada is a white-box system, meaning medical professionals can track how and why recommendations are generated – which is critical for trust and assurance. Our probabilistic reasoning engine has been designed to ‘think’ and interact in much the same way as a real-world doctor might, interfacing with users through a guided question flow and eventually suggesting what might be causing their symptoms and navigating them to practical next steps.
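
As a hypothetical sketch of how such a guided question flow might choose what to ask next – this is not Ada’s actual algorithm, and the conditions and probabilities are invented – one common approach is to pick the question whose answer is expected to reduce uncertainty over conditions the most:

```python
import math

# Hypothetical sketch of next-question selection: ask about the symptom
# whose answer is expected to reduce uncertainty (entropy) over the
# candidate conditions the most. All numbers are invented.

KB = {  # P(symptom | condition)
    "influenza": {"fever": 0.9, "cough": 0.8},
    "migraine":  {"fever": 0.05, "cough": 0.05},
}
belief = {"influenza": 0.5, "migraine": 0.5}  # current posterior

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def update(belief, symptom, present):
    """Bayesian update of the belief after a yes/no answer."""
    scores = {
        c: p * (KB[c][symptom] if present else 1 - KB[c][symptom])
        for c, p in belief.items()
    }
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def expected_entropy(belief, symptom):
    # P(answer = yes) under the current belief, then average the
    # entropy of the two possible updated beliefs.
    p_yes = sum(p * KB[c][symptom] for c, p in belief.items())
    h_yes = entropy(update(belief, symptom, True))
    h_no  = entropy(update(belief, symptom, False))
    return p_yes * h_yes + (1 - p_yes) * h_no

best = min(KB["influenza"], key=lambda s: expected_entropy(belief, s))
print("Ask about:", best)  # 'fever' separates these two conditions best
```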

Our comprehensive underlying medical knowledge base was built by a team of more than 50 in-house doctors over several years and more than one million hours of work. High medical quality is ensured through sophisticated disease modeling and extensive testing using real-world clinical case scenarios, competitive comparisons, medical collaborations, external independent review, peer-reviewed research, and user feedback. We continuously review our medical models for accuracy and update them with the latest best practices, region-specific nuances, and protocols.

This commitment to medical rigor, accuracy, and safety is demonstrated by our certification as an EU-MDR Class IIa medical device in Europe. The EU-MDR is one of the most robust health technology regulations in the world, and we are among the first AI platforms to receive such a designation.

| Ada | LLMs |
| --- | --- |
| Guides users through a clearly structured, dynamic assessment, prompting the patient with easy-to-understand questions | Needs patient-driven input of symptoms and is sensitive to small variations in the user prompt |
| Has a well-curated, clinically vetted, continuously updated, and medically sound knowledge base, accurately translated into defined languages | Draws on large parts of the internet without strict quality control or human oversight |
| Capable of delivering accurate assessments and condition suggestions based on medically sound knowledge and reasoning | May offer inaccurate information sourced from the internet and give patients potentially harmful medical advice |
| Shows a clear probabilistic understanding of which causes are most likely | Has problems with calibration, resulting in overconfident or misplaced advice |
| Assessments are deterministic: entering identical information across multiple assessments yields the same results every time | Assessments are non-deterministic: the user may get different results even after entering identical information twice (see the sketch below) |
| White-box algorithm: transparent and explainable | Black-box algorithm: hard to understand and explain how a given result was derived |
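
To make the determinism contrast in the table concrete, here is a minimal, illustrative sketch – not either system’s actual code, and the distribution values are invented – comparing a rule-based engine that always returns the most likely condition with a decoder that samples from a probability distribution, as LLMs typically do at non-zero temperature:

```python
import random

# Toy contrast between deterministic and sampled (non-deterministic)
# output. A rule-based engine maps identical input to identical output;
# a sampling decoder can answer differently on each run.

probs = {"influenza": 0.6, "common cold": 0.3, "migraine": 0.1}

def deterministic(probs):
    # Always return the single most likely condition.
    return max(probs, key=probs.get)

def sampled(probs):
    # Draw from the distribution, as LLM decoders typically do
    # when temperature > 0.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

for _ in range(3):
    print(deterministic(probs), "vs", sampled(probs))
# The left column never changes; the right column can vary run to run.
```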

Future of LLMs in Healthcare

Where LLMs really shine is in understanding what a user means and responding in a way that feels natural. In time, we could see LLMs deployed alongside other applications of AI as the initial contact point with a user in a healthcare scenario – provided the right constraints are in place.

In the more immediate future, LLMs could be used by clinicians to further automate routine written and administrative tasks. Time spent drafting letters to insurers, caregivers, and health administrators could almost be eliminated, giving clinicians more time to focus on value-added work and relieving the burden on administrative staff. Ultimately, we would advocate for requirements that ensure patient data is securely ‘ring-fenced’ to protect sensitive clinical or personally identifiable information and prevent it from being used as training material for AI or exposed in any way. There would also need to be stringent data protection agreements and consent mechanisms in place to keep users in control of their own personal data.
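
As a rough sketch of what such ring-fencing could look like in code – the redaction patterns and the send_to_llm function are hypothetical placeholders, not a complete de-identification pipeline – obvious identifiers are stripped locally before any text reaches an external model:

```python
import re

# Hypothetical sketch of 'ring-fencing' patient data before a draft
# letter request ever reaches an external LLM: identifiers are replaced
# with placeholders locally, and only the redacted text is sent.

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def draft_letter(notes: str) -> str:
    safe_notes = redact(notes)
    # send_to_llm is a placeholder for whatever model API is used;
    # only the redacted text crosses the trust boundary.
    return send_to_llm(f"Draft an insurer letter from: {safe_notes}")

print(redact("Seen 01/02/2024, contact jane.doe@example.com, SSN 123-45-6789"))
# -> "Seen [DATE], contact [EMAIL], SSN [SSN]"
```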

Future of LLMs and Ada’s AI

As a long-time leader in healthcare AI, we have focused on developing fit-for-purpose technology based on our own clinically accurate medical knowledge and data set, curated by our team of physicians and medical engineers and corroborated with an evidence base drawn from the most reliable and trustworthy sources. That’s why we’re the partner of choice for some of the world’s most prestigious healthcare organizations. As we continue to expand this data set through collaborations with our enterprise partners, we’re optimizing our reasoning engine to achieve ever-higher degrees of accuracy – using both real-world client feedback loops and hand-curated knowledge.

Our focus is on maintaining medical safety and accuracy in real-life care settings. Should LLMs become viable for healthcare applications, there could be potential for Ada’s probabilistic reasoning to be combined with generative AI to create a unique and powerful proposition to improve patient outcomes at scale. We’re well positioned to capitalize on this potential thanks to the strong foundation our medical knowledge and data sets provide.
