Study: AI Can Double Patient Understanding of Radiology Reports


London, February 17: Artificial intelligence could soon help patients make sense of complex medical scan results, rendering reports far easier to understand without sacrificing clinical accuracy, a major new study by the University of Sheffield suggests.

The research found that when radiology reports for X-rays, CT, and MRI scans were rewritten using advanced AI systems such as ChatGPT, patients found them almost twice as easy to understand compared to the original versions.
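The studies reviewed used general-purpose chatbots rather than bespoke clinical tools. As a rough illustration of the approach (not the protocol used in any of the reviewed studies), a simplification request to a large language model might look like the following sketch, which assumes the OpenAI Python client; the model name and prompt wording are illustrative.

```python
# A minimal sketch of LLM-based report simplification, assuming the
# OpenAI Python client; the prompt and model choice are illustrative,
# not the protocol used in the Sheffield review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_report(report_text: str) -> str:
    """Ask the model to rewrite a radiology report in plain English."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": ("Rewrite the following radiology report in plain "
                         "English at roughly a 12-year-old's reading level. "
                         "Do not add, remove, or change any findings.")},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content
```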

The analysis showed that the reading level dropped from "university level" to one more closely aligned with the comprehension of a school pupil aged 11-13.
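"Reading level" in such analyses is typically measured with standard readability formulas. The article does not specify which metrics the reviewed studies used, but a common choice is the Flesch-Kincaid grade level; the minimal sketch below shows the calculation, using a crude vowel-group heuristic to count syllables.

```python
# A minimal Flesch-Kincaid grade-level calculation; the syllable counter
# is a rough heuristic, and the reviewed studies' exact readability
# metrics are not specified in this article.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels (a common rough heuristic).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

original = "The pulmonary parenchyma demonstrates no focal consolidation."
simplified = "Your lungs look clear. There are no signs of infection."
print(flesch_kincaid_grade(original))    # higher grade: harder to read
print(flesch_kincaid_grade(simplified))  # lower grade: easier to read
```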

The findings suggest that AI-assisted explanations could become a standard companion to medical reports, helping to improve transparency and trust across healthcare systems, including the National Health Service (NHS).

Researchers reviewed 38 studies published between 2022 and 2025, covering more than 12,000 radiology reports that had been simplified using AI. These rewritten reports were evaluated by patients, members of the public, and clinicians to assess both patient understanding and clinical accuracy.

Radiology reports are traditionally written for doctors rather than patients. However, initiatives promoting patient-centered care, such as the NHS App, alongside new policies mandating greater transparency of medical records, mean patient access to these reports has expanded rapidly.

The study's lead author, Dr. Samer Alabed, Senior Clinical Research Fellow at the University of Sheffield and Honorary Consultant Cardiac Radiologist at Sheffield Teaching Hospitals NHS Foundation Trust, said: "The fundamental issue with these reports is that they are not written with patients in mind. They are often filled with technical jargon and abbreviations that can easily be misunderstood, leading to unnecessary anxiety, false reassurance, and confusion.

"Patients with lower health literacy or English as a second language are particularly disadvantaged. Clinicians frequently have to use valuable appointment time explaining report terminology instead of focusing on care and treatment. Even small time savings per patient could add up to significant benefits across the NHS."

Doctors reviewing the AI-simplified reports found that the vast majority were accurate and complete, but around one percent contained errors, such as incorrect diagnoses. This shows that although the approach is highly promising, it still needs careful oversight.

Of the 38 studies reviewed, none were conducted in the UK or in NHS settings, a significant gap which Dr. Alabed says the research team is now seeking to address.

"This research has highlighted several key priorities. The most important is the need for real-world testing in NHS clinical workflows to properly assess safety, efficiency, and patient outcomes," said Dr. Samer.

"This includes human-oversight models, where clinicians review and approve AI-generated explanations before they are shared with patients. Our long-term goal is not to replace clinicians, but to support clearer, kinder, and more equitable communication in healthcare," he added.

The research underscores the University's drive to transform ideas into impact, a true embodiment of independent thinking and shared ambition.
 