ChatGPT shows promise for improving breast cancer education materials

ChatGPT can simplify responses to questions about breast cancer prevention and screening, improving the reading ease and readability of educational materials while maintaining overall clinical appropriateness, researchers have found.

The findings could translate to better support for patients navigating breast cancer screening, wrote a team led by Hana Haver, MD, of the University of Maryland Medical Intelligent Imaging Center in Baltimore. The study findings were published February 2 in Radiology: Imaging Cancer.

"In the context of breast cancer screening, increasing the readability of patient-facing material has been shown to improve patient follow-up for mammography recalls," the group noted.

The average U.S. adult reads at an eighth-grade level, and "adequate readability" for health materials is generally defined as a sixth- to eighth-grade reading level or below. When it comes to breast cancer screening, previous research has shown that women with lower reading skills have less knowledge of mammography and, as a result, lower screening rates.

Haver and colleagues explored whether using ChatGPT to simplify breast cancer prevention materials would improve their readability and thus make them useful to a wider population. In a study conducted in March and August 2023, they used ChatGPT to simplify responses to 25 questions about breast cancer to a sixth-grade reading level and assessed the simplified responses for clinical accuracy. The responses were evaluated for reading ease and readability using the Reading Ease Index, Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Automated Readability Index, and Simple Measure of Gobbledygook (SMOG) Index.
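For illustration, indices in this family can be computed programmatically. The sketch below uses the open-source Python package textstat, which is not the tooling described in the study; the sample text and thresholds are shown only to indicate how such scores are derived.

# Minimal sketch of computing the readability indices named above.
# Uses the open-source `textstat` package (pip install textstat); this is an
# illustrative assumption, not the method reported by the study authors.
import textstat

response_text = (
    "Breast cancer screening uses mammograms to look for cancer before "
    "symptoms appear. Talk with your doctor about when to start screening."
)

scores = {
    "Reading Ease": textstat.flesch_reading_ease(response_text),
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(response_text),
    "Gunning Fog Index": textstat.gunning_fog(response_text),
    "Coleman-Liau Index": textstat.coleman_liau_index(response_text),
    "Automated Readability Index": textstat.automated_readability_index(response_text),
    "SMOG Index": textstat.smog_index(response_text),
}

for name, value in scores.items():
    print(f"{name}: {value:.1f}")

# Thresholds described in the article: "adequate" reading ease is a score of
# 60 or greater, and "adequate" readability is a sixth-grade level or below.
adequate_ease = scores["Reading Ease"] >= 60
adequate_grade = scores["Flesch-Kincaid Grade Level"] <= 6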

The team compared mean reading ease, readability, and word count between the original and simplified responses using paired t-tests, and used the McNemar test to compare the number of responses with adequate reading ease (i.e., a score of 60 or greater) and adequate readability (equivalent to a sixth-grade level). Sample questions included "What are the symptoms of breast cancer?" and "My radiology report says BI-RADS 5. What does that mean?"
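As a rough illustration of that comparison, the sketch below shows how a paired t-test and a McNemar test might be run in Python with scipy and statsmodels. The score arrays are hypothetical placeholders, not data from the study.

# Illustrative sketch of the statistical comparison described above.
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical grade-level scores for five responses, before and after simplification.
original_grade = np.array([11.2, 12.5, 10.8, 13.1, 11.9])
simplified_grade = np.array([6.3, 7.0, 5.9, 6.8, 6.1])

# Paired t-test: do the per-response means differ between versions?
t_stat, p_value = ttest_rel(original_grade, simplified_grade)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# McNemar test: did the proportion of responses meeting the "adequate reading
# ease" threshold (score of 60 or greater) change after simplification?
original_ease = np.array([42.0, 55.3, 61.2, 48.7, 50.1])
simplified_ease = np.array([71.5, 68.9, 74.2, 66.3, 70.8])
orig_ok = original_ease >= 60
simp_ok = simplified_ease >= 60
table = np.array([
    [np.sum(orig_ok & simp_ok), np.sum(orig_ok & ~simp_ok)],
    [np.sum(~orig_ok & simp_ok), np.sum(~orig_ok & ~simp_ok)],
])
result = mcnemar(table, exact=True)
print(f"McNemar test: p = {result.pvalue:.4f}")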

The researchers found the following:

  • ChatGPT improved mean reading ease (p < 0.001) and readability (p < 0.001) and decreased word count (p < 0.001).
  • The team considered 92% of ChatGPT's simplified responses clinically appropriate.
  • All (100%) of the tool's simplified responses met the criteria for adequate reading ease, compared with only two of the 25 original responses (p < 0.001).

"Using a simple prompt, ChatGPT was able to simplify these texts with improvements in readability by a mean of four grade levels while maintaining the overall accuracy of the recommendations," they reported.

As a tool for increasing the accessibility of health materials, ChatGPT shows significant promise, but more research is needed, according to the authors.

"The potential for ChatGPT and other LLMs to improve patient education in radiology represents an exciting step toward improving health literacy and health equity, though future study and validation are needed as these technologies rapidly improve and develop, as addressing the safety of such tools is paramount to meaningful application in a patient care setting," they concluded.


