J Hand Microsurg. 2025 Jul 14;17(5):100326. doi: 10.1016/j.jham.2025.100326. eCollection 2025 Sep.
ABSTRACT
BACKGROUND: Large language models (LLMs) such as ChatGPT are artificial intelligence programs designed to interpret and respond to text-based input. These programs can improve their output in response to prompting and tailored prompt engineering. Multiple studies have assessed the performance of various LLMs on medical exams at different levels of training. The newest version of ChatGPT, GPT-4, allows image recognition, which is relevant to many questions on orthopedic surgery exams. The performance of GPT-4, and the potential for LLMs to learn from prior exams, remain unclear. The present study analyzed ChatGPT-4 performance on the 2023 hand surgery Maintenance of Certification (MOC) Self-Assessment Examination (SAE) before and after prompting with 5 previous versions of the test. It was hypothesized that GPT-4 would pass the exam and improve its performance after prompting.
METHODS: GPT-4 was tested with all text- and image-based questions from the 2023 hand surgery SAE. Video-based questions were excluded. GPT-4 was then provided with the questions, answers, and explanations from 5 previous SAEs from 2014 to 2020 and retested on the 2023 SAE text- and image-based questions. Responses from GPT-4 on the prompted and unprompted tests were recorded and compared.
RESULTS: Both prompted and unprompted versions of ChatGPT-4 exceeded the SAE passing requirement of a >50 % correct response rate. GPT-4 answered 67 % of all questions correctly unprompted and 71 % of all questions correctly after prompting (p = 0.51). Sub-analysis demonstrated that GPT-4 answered 66 % of image-based questions correctly after prompting, compared with 56 % before prompting (p = 0.25). GPT-4 answered 75 % of text-only questions correctly before prompting and 74 % correctly after prompting (p = 1.0). Fisher's exact test on total, image-only, and text-only questions showed no statistically significant differences between the prompted and unprompted versions of GPT-4.
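For readers reproducing the statistical comparison, the following is a minimal sketch of a Fisher's exact test on a 2 x 2 contingency table of correct/incorrect counts for the unprompted versus prompted runs. The abstract reports percentages only, so the question total below is a hypothetical placeholder, and the resulting p-value will match the reported values only when the actual counts are used.

    # Sketch of the prompted-vs-unprompted comparison via Fisher's exact test.
    # TOTAL_QUESTIONS is hypothetical; the abstract does not report raw counts.
    from scipy.stats import fisher_exact

    TOTAL_QUESTIONS = 200  # placeholder, not from the study

    unprompted_correct = round(0.67 * TOTAL_QUESTIONS)  # 67 % correct unprompted
    prompted_correct = round(0.71 * TOTAL_QUESTIONS)    # 71 % correct prompted

    # Rows: unprompted, prompted; columns: correct, incorrect
    table = [
        [unprompted_correct, TOTAL_QUESTIONS - unprompted_correct],
        [prompted_correct, TOTAL_QUESTIONS - prompted_correct],
    ]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")

The same table construction applies to the image-only and text-only sub-analyses, substituting the corresponding proportions and question counts.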
CONCLUSION: GPT-4 demonstrated the ability to analyze orthopedic information, answer specialty-specific questions, and exceed the passing threshold of 50 % on the 2023 Hand Surgery Self-Assessment Exam. However, prompting GPT-4 with previous SAEs did not produce a statistically significant improvement in performance. With continued advancements in AI and deep learning, large language models may someday become resources for test simulation and knowledge checks in hand surgery.
PMID:40708759 | PMC:PMC12284659 | DOI:10.1016/j.jham.2025.100326