JMIR Med Inform. 2026 Apr 8;14:e80553. doi: 10.2196/80553.
ABSTRACT
BACKGROUND: Upper and lower extremity lymphedema is a chronic, progressive condition that significantly impairs the quality of life of affected patients. Despite the recently established effectiveness of physical therapy and supermicrosurgical interventions, current guidelines frequently lag behind emerging evidence and commonly do not offer stage-specific treatment algorithms. This gap in evidence-based guidance may prompt clinicians with limited experience to seek support from large language models such as ChatGPT.
OBJECTIVE: Given the potential of artificial intelligence to rapidly integrate emerging research, this study evaluated how clinicians from different professional backgrounds rate the quality and reliability of personalized lymphedema management recommendations generated by ChatGPT.
METHODS: In this exploratory cross-sectional study, ChatGPT generated treatment recommendations for 6 standardized lymphedema case scenarios. An international panel of 67 participants (resident doctors, board-certified specialists, physiotherapists, and advanced practice nurses) from 34 institutions across 11 countries assessed the recommendations using a modified DISCERN questionnaire with a 9-point agreement scale ranging from 1 (completely disagree) to 9 (completely agree). Ratings were summarized as pooled means with variability measures and compared across clinician groups (residents vs board-certified physicians vs physiotherapists or advanced practice nurses) using group comparison testing.
RESULTS: ChatGPT was rated most favorably for diagnostic accuracy and treatment relevance, with higher ratings among residents than board-certified physicians. Residents assigned significantly lower scores for source indication, source currency, and communication of uncertainty. Between-group differences were observed across multiple DISCERN items, consistent with systematically more critical appraisal by experienced specialists. Participants reported moderate to high trust and willingness to consider ChatGPT as a supplementary resource, with more favorable perceptions among younger respondents.
CONCLUSIONS: Clinicians perceived ChatGPT as potentially useful for preliminary orientation and educational support in lymphedema management, especially for less experienced users. Although raters were not blinded, the lower ratings for evidence transparency and uncertainty communication, particularly among experienced specialists, suggest that current artificial intelligence outputs should not be used as stand-alone guidance. Future work should test clinically integrated, citation-grounded workflows in prospective settings and evaluate whether they improve decision quality and efficiency.
PMID:41950349 | PMC:PMC13060743 | DOI:10.2196/80553

