Your GP could be using ChatGPT to diagnose you, a study finds

With the rise in artificial intelligence tools, GPs are turning to the likes of ChatGPT and Google Bard.

Published: September 17, 2024 at 10:30 pm

Artificial intelligence is seemingly everywhere. With the emergence of hundreds of tools powered by chatbots and large language models, industries across the world are adopting the technology – including your local doctors.

A recent survey published in the journal BMJ Health & Care Informatics found that one in five GPs are using AI, despite a lack of guidance or any clear workplace policies so far.

The researchers argue that both doctors and medical trainees need to be trained in the use of AI – and in its pros and cons – before adopting it. The risks include inaccuracies (known as hallucinations), algorithmic biases and threats to patient privacy.

A random sample of GPs was sent a survey asking whether they had ever used ChatGPT, Bing AI, Google Bard or any other AI service in their work.

In total, 1,006 GPs completed the survey. One in five respondents reported using generative AI tools in their clinical practice. Of these, more than one in four used the tools to generate patient documentation, and a similar number said they used them to suggest diagnoses.

Of the AI tools mentioned in the study, ChatGPT was by far the most used, followed by Bing AI and Google Bard. However, the vast majority of the 1,006 respondents said they didn't use any of these tools at work.

However, given its small sample size, the study isn't directly representative of the UK's GP population. Further research is needed to understand the role of AI in the work of GPs.

“These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases,” the researchers said.

“These tools may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather.”

Currently, there is very little regulation of the use of AI. The UK, like other key players in the artificial intelligence market, is looking into what changes can be made.

“The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarising information but also the risks in terms of hallucinations, algorithmic biases, and the potential to compromise patient privacy.”
