A recent survey has revealed how GPs are using artificial intelligence on the job.

It showed that a majority of general practitioners were open to using large language models (LLMs), a type of AI trained on large amounts of data to predict and interpret human language.

A new age of medicine 

Fifth Quadrant, a leading market research company, surveyed 183 Australian GPs to determine how they incorporate AI into their everyday practice.

The results revealed that nearly one-fifth of GPs are employing LLMs while on the job, and 70 percent would consider using them in the future.

The main reasons for adopting the technology include developing a better understanding of complex medical information. While 51 percent of GPs said they would use an LLM to help diagnose a patient, only 8 percent currently do so.

Similarly, doctors believe LLMs could help educate both patients and staff, with 67 percent of participants revealing they would use AI to educate patients, and 66 percent confirming they would use AI for staff training. 

Other reasons for turning to AI include keeping up with the newest medical research and streamlining the workload in clinics. 

Tricks and tools  

Already, AI is incorporated into the medical field in a number of ways. 

According to Monash University, AI algorithms can analyse large amounts of medical data to support diagnosis, assist in robotic surgery and act as virtual nursing assistants.

However, some believe that AI used for medical purposes lacks proper scientific rigour. The tests run to verify an LLM's accuracy may also be too narrow, resulting in unintended bias within the software.

A study published in the Journal of Medical Internet Research found that LLMs and conversational agents can respond ineffectively to prompts and offer unsuitable medical advice. This suggests that while LLMs are a useful support tool in medical clinics, they should not be relied on as the sole source of information.

Beyond the screen 

The Fifth Quadrant study highlights the need for effective policies to monitor the use of LLMs and ensure the technology is implemented responsibly.

According to the Australian Medical Association (AMA), “the development and implementation of AI technologies must be undertaken with appropriate consultation, transparency, accountability and regular, ongoing reviews to determine its clinical and social impact and ensure it continues to benefit, and not harm, patients, healthcare professionals and the wider community.” 

To address these ethical questions, Fifth Quadrant offers a series of suggestions to protect doctors and patients when using LLMs:

  • Understand that LLMs have limitations and shouldn’t be the sole source of information when making clinical decisions 
  • Use LLMs alongside other resources such as medical databases, textbooks and expert insight
  • Ensure that the information provided by LLMs is verified by another source before proceeding 
  • Check whether LLMs have been subject to medical device regulation by the Therapeutic Goods Administration (TGA)

Medical device regulations are the checks the TGA requires when LLMs are employed for medical purposes. These checks ensure the safety and reliability of LLMs meet the same standards as other medical devices, including databases and workbooks, creating a baseline level of accuracy across all diagnostic tools.

Fifth Quadrant acknowledges the usefulness of LLMs in supporting GPs and other doctors, but warns that ongoing testing and conversation are necessary to ensure Aussies receive the best possible care.

To learn more about the use of technology in treating illness, click here.