Doctors using AI share thoughts on its growing use in medicine
Doctors who use artificial intelligence (AI) see benefits for their own efficiency and for patient care in a resource-stretched NHS and, although they recognise there are risks, they feel confident in managing them, according to a new study published by the General Medical Council (GMC) today.
Researchers commissioned by the GMC sought to find out more about the types of AI doctors are using, how well they understand the risks and what they do if they disagree with the output of an AI system.
Doctors who had used AI in the past 12 months discussed the benefits, risks and their understanding of their professional responsibilities when using such technologies, in a series of in-depth interviews with researchers from Community Research.
Most saw benefits to their efficiency when using AI, seeing it as a way to save time or make better use of it. However, some questioned this, saying they lacked confidence in the accuracy of some diagnostic and decision support systems and so spent more time checking the results they received.
Many doctors felt that NHS IT systems would need to improve to pave the way for a broader roll-out of AI technologies, noting that many are highly specialised and still in development. Doctors who currently use generative AI, such as ChatGPT, often do so out of a personal interest in the technology.
Researchers spoke to a variety of doctors at different career stages and across specialties, from doctors in training, to consultants working in general practice, radiology, emergency medicine and more.
Three types of AI were discussed.
- Generative AI: which produces text or images in response to prompts.
- Diagnostic and decision support systems (DDS): which aid the interpretation of multimedia data, for example by identifying the presence of a tumour on an MRI scan. These also include tools for supporting decisions, outlining risks and predicting patient outcomes.
- Systems efficiency systems (SES): which predict missed appointments, optimise staffing and resource allocation, or optimise care pathways based on the data provided.
Doctors using generative AI often did so of their own accord, whereas those using DDS or SES did so through their workplace’s adoption of these technologies. The research found that many doctors were unsure which technologies in their practice constituted AI.
Those using generative AI had a keen interest in the technology and used it for a variety of reasons, including administrative support, producing clinical scenarios and generating images for teaching. However, doctors were careful to specify that they knew confidential information should not be inputted, instead using AI to generate templates with personal information omitted.
Doctors using DDS in primary care did so to help prioritise which patients to see, find medication conflicts in prescriptions and suggest diagnostic tests. In secondary care, uses included assisting in the assessment of stroke patients and the administration of local anaesthesia.
Further benefits of AI use shared by doctors included the potential to reduce the risk of human error and to reduce bias in judgments made on the basis of patient characteristics. They also pointed to AI’s capacity to draw on a far wider body of research, data and guidance on a particular topic than any individual could.
But the doctors interviewed also understood that these emergent technologies present risks. They saw potential for AI-generated answers to be based on data that could itself be false or biased. They also acknowledged possible confidentiality risks in sharing patient data, and the potential for over-reliance and deskilling.
Many said they feel confident to override decisions made by AI systems if necessary, and that ultimate responsibility for patient care remains with them. Some speculated that this may change in the future as systems become more sophisticated, and looked to regulators such as the GMC for further guidance.
Shaun Gallagher, Director of Strategy and Policy at the GMC, said:
‘It’s clear that AI’s use in healthcare will continue to grow and projects like these give valuable insights into how doctors are using these systems day-to-day.
‘These views are helpful for us as a regulator, but also for wider healthcare organisations, in anticipating how we can best support the safe and efficient adoption of these technologies now, and into the future.’
The research builds on a recent study by the Alan Turing Institute, supported by the GMC and published in October last year, in which nearly 1,000 doctors were surveyed on their experiences with and perceptions of AI in their practice. It revealed that just under a third (29%) had used some form of AI in the past 12 months, and that 52% were optimistic about its use in healthcare. A sample of survey respondents was then approached to take part in this in-depth study.
Read the research
Understanding doctors’ experiences of using artificial intelligence (AI) in the UK
Find out more about how the GMC’s professional standards apply to AI and innovative technologies.