Generative AI for family medicine: friend or foe?
Carmen Wong 黃嘉雯
HK Pract 2023;45:33-34
ChatGPT is the talk of the town, so what is all the fuss? If you are
an ‘early adopter’ you are likely already experimenting with prompts, whilst
the ‘laggards’ amongst us are voicing suspicion and caution. Many of us lie
somewhere in the middle.1 Generative artificial intelligence (AI) uses
algorithms to create new content by piecing together relevant data, in
this case text, and ChatGPT is a natural language processing tool driven by
generative AI.2 The result is human-like communication in responding to
queries and the ability to complete tasks with simple prompts. The breadth
of its language abilities is impressive, and it can reconfigure responses to
different styles of expression almost immediately. Writing emails,
explaining concepts and writing essays takes seconds rather than the usual
minutes or hours. Since its launch in November 2022, it has transformed our
thinking about the potential of generative AI, but it also raises serious
questions about how humans can make sense of processing power that far
outperforms us across the vast amount of information out there.
In the educational field, adoption is advancing in leaps and bounds. Teachers
are using ChatGPT responses to inform course outlines, learning outcomes
and assessment rubrics. Initial resistance has yielded to optimistic possibilities
in partnering with the progressively evolving technology, in seeing AI as
students’ study buddy offering guidance and direction in ambiguous spaces,
and, for teachers, as a curriculum co-designer.
Surprisingly, generative AI is in fact not new; the explosion of
interest came from its ability to connect with users directly. As a physician,
you are probably familiar with algorithms that analyse images
from X-rays, MRIs and retinal photographs to aid diagnosis. We have often been
in situations in which we ridiculed erroneous comments on ECGs whilst
being reassuringly confident when diagnoses are aligned. As generative AI
develops in handling images, consider the value of a tool that can scrutinise
the microscopic features of images, coupled with the experience of correct and
missed diagnoses from a thousand experts, or that can summarise the evidence
base (and its disparities) relevant to our queries at our fingertips. This could
potentially change our practice and health care systems. Think further: a tool
that could draft patient advice and information on different concerns, tailored
to different educational levels, would allow personalised care to be delivered
more effectively.
Imagine how a tool that summarises all the desired key
points and observations across previous consultations for
writing medical reports or referral letters could help you
save time. Time is our most valuable asset. Staying up
to date can become easy, as summarising the latest
guidelines and recommendations on a particular topic
takes only seconds. ChatGPT has shown potential
in passing the United States Medical Licensing
Examination (USMLE),3 and Stanford is developing
BioMedLM (previously known as PubMedGPT), an AI
model trained on biomedical abstracts and papers.4
Personally, I am relieved that generative AI is on
the horizon in this age of information. Back in the early
days of my medical studies (1996), the main sources of
information were textbooks and lecturers’ notes. Journal
articles were available in libraries or on request. The
swell of information on the internet after graduation was
somewhat manageable with search engines (e.g. Google)
and medical databases (e.g. DynaMed). However, as I
slowly grew accustomed to the avalanche of information
flooding family medicine, I became increasingly
concerned about how to stay afloat. Thus, to me, AI is a
much-needed friend, or rather a handy personal assistant
working quietly, sifting, sorting and recalling information
just as I need it.
Things are rarely so ideal, and we are right to be
cautious, as there are several issues to attend to. The
primary concern is trust. No system is ever fail-safe:
even when fed with the right input and extensively
trained and tested, errors exist. Since the public
testing of ChatGPT, there have been many incidents
of ‘hallucinations’, in which responses sound plausible
but are factually incorrect or unrelated.3 There will, of
course, be implications for our professional realm in
both legal and ethical issues as generative AI has trouble
deciphering context, nuances and prejudices. Managing
complexity is still what humans do best. Instinctively
we know this, as we tackle complexities in family
medicine daily: explaining diagnoses to those with a poor
understanding of their medical conditions, managing
multimorbidity and navigating hidden agendas.
I remain hopeful that AI may reduce the tedium
of administrative work and salvage precious minutes for
patient interaction, or for chasing the elusive work-life
balance for our own health. My verdict: Generative AI
is here to stay, perhaps too early for a warm embrace but
not too late for a firm handshake.
This editorial has been written without the use of AI
generated text or content.
References
1. Rogers EM, Shoemaker FF. Communication of Innovations. New York: The Free Press; 1971.
2. OpenAI. ChatGPT [Large language model]. 2023. Available at: https://chat.openai.com/chat
3. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198.
4. Bolton E, Hall D, Yasunaga M, et al. BioMedLM. Available at: https://crfm.stanford.edu/2022/12/15/biomedlm.html
Carmen Wong,
BSc (Hons), MBBCh (UK), DRCOG (UK), MRCGP (UK)
Assistant Dean (Education);
Associate Professor in Family Medicine and Medical Education,
The Jockey Club School of Public Health and Primary Care, Faculty of Medicine, The Chinese University of Hong Kong
Correspondence to:
Prof. Carmen Wong, 4/F, School of Public Health and Primary Care, Prince of Wales Hospital,
Shatin, Hong Kong SAR.
E-mail: carmenwong@cuhk.edu.hk