Artificial intelligence (AI) has enormous potential to provide clinical decision support through tools like ChatGPT. However, legitimate concerns have been raised that machine learning and generative AI can introduce bias and perpetuate health inequities.
ChatGPT (Generative Pre-trained Transformer), created by OpenAI, is a new player in artificial intelligence: a large language model trained on massive amounts of text to generate human-like conversation.
In a recent study, researchers assessed ChatGPT’s ability to engage in clinical reasoning by testing it on questions from the United States Medical Licensing Examination (USMLE). Remarkably, the model performed at or near the passing threshold of roughly 60% accuracy without any specialized training or reinforcement.
However, it is crucial to understand the limitations of this achievement, as pointed out by John Halamka, MD, MS, president of the Mayo Clinic Platform. During an “AMA Update” episode, Dr. Halamka emphasized that generative AI is not equivalent to human thought or sentience.
Inherent Limitations of Generative AI
One immediate area where AI can help is alleviating clerical and administrative burdens. Dr. Halamka shared his experience using ChatGPT to draft a news release. Although the initial draft was eloquent and compelling, it contained factual errors that needed correction. Even so, ChatGPT’s assistance allowed him to complete the task in five minutes instead of the usual hour.
Dr. Halamka predicts that in the short term, AI will primarily generate text that humans can subsequently edit to ensure accuracy. This approach significantly reduces the burden on human resources, which is particularly valuable amidst the ongoing staffing crisis in healthcare.
Interpretability and Accountability
While AI shows promise, caution is essential when using models like ChatGPT to diagnose complex medical conditions. Many generative models are trained on popular materials that may contain misinformation or deliberate disinformation rather than rigorously peer-reviewed scientific literature. Dr. Halamka expressed his hope that future iterations of AI will be trained on extensive, anonymized patient data to provide more reliable insights.
However, he highlighted the potential for AI to transform medical practice for physicians in the future. Physicians can dedicate more time to patient care by leveraging AI to alleviate administrative burdens. Dr. Halamka envisions the next generation of doctors as knowledge navigators rather than mere memorizers.
If AI can cut physicians’ administrative load in half, future physicians will find greater joy in their practice.
Sources:
https://www.ama-assn.org/practice-management/digital/why-generative-ai-chatgpt-cannot-replace-physicians
https://www.ama-assn.org/practice-management/digital/augmented-intelligence-medicine
https://www.ama-assn.org/delivering-care/health-equity/feds-warned-algorithms-can-introduce-bias-clinical-decisions
https://www.ama-assn.org/practice-management/digital/chatgpt-passed-usmle-what-does-it-mean-med-ed
https://www.ama-assn.org/practice-management/digital/qa-telehealth-here-stay-doctors-key-requirements-remain
https://www.ama-assn.org/practice-management/digital/chatgpt-and-ai-integration-health-care-john-d-halamka-md-ms
https://www.ama-assn.org/series/ama-update