Spring 2023 Issue

A Brave New World (?)

By Judy Masterson
Photo by Michael R. Schmidt

Artificial Intelligence (AI) chatbots like OpenAI’s ChatGPT could help unfetter the future of education, work, and practice. Powered by algorithms, the machine-learning programs are trained to identify patterns, generate outputs, and improve those outputs through data and feedback. Vice President for Interprofessional Education and Simulation James Carlson, PhD ’12, PA-C ’01, CHSE-A, a 2021–22 AMA Health Systems Science Scholar, foresees a future in which generative AI augments and supports learning and practice, reduces burnout, and improves human health and well-being.

Helix: Chatbots like ChatGPT use artificial intelligence to understand questions and deliver answers. How is AI technology driving healthcare delivery?

JC: AI is just one of a number of trends that will revolutionize the way that we take care of patients and the way patients seek and expect care. Another is the internet of medical things — devices like wearable sensors, remote glucose monitors, pulse oximeters — that collect information on specific behaviors or physiologic parameters that we can use to make treatment decisions. Presently, health care is really good at gathering large-scale data that is captured and stored in the cloud. What we’re not as good at right now, and where I think AI will help, is interpreting that information. AI can help harvest that data, interpret it, and send recommendations back to healthcare professionals and their patients. AI will help us to rethink how we do our work. I think it will augment, not replace, our decisions for the better — if we learn how to integrate it into our decision-making.

How might faculty incorporate tools like ChatGPT, which some educators and systems are banning or restricting?

The genie’s out of the bottle. Generative AI tools will cause a shift in how we educate at all levels. We need to teach ourselves and our students how to use information appropriately, and one way to do that is to set the device down and engage in conversation. We still teach at the bedside and still ask the question of the learner, “What do you think is going on with the patient — and, more importantly, why do you think that?” ChatGPT can provide information, but it’s up to us to interpret that information in a way that is meaningful for our patients. AI will provide us with more refined information, but humans are still responsible for critical thinking and applying information in novel circumstances. Humans are still the explainers. As teachers, it means probing further and asking students if AI tools are delivering any new or alternate information of value, and how that information applies to a specific patient or clinical question.

You said you’re teaching students to keep “gee-whiz technologies” in perspective.

Yes, to stay anchored on why they’re using them, how they anticipate using them, and how the technologies will help achieve the Quadruple Aim of Health Care. They need to ask: How does this improve the health of my patient or the general population? How does it reduce the cost of care? Does it actually help us be more efficient and effective at prediction and keeping people well? How does it help the patient experience? Does it translate to better decisions that prevent illness and keep them out of the hospital? Does it build the well-being of the care team by improving workflow or reducing the cognitive load associated with data analysis and complex decision-making?

How might AI help build care team well-being?

Numerous studies document care team well-being issues and show that burnout is real. Health care on the front lines is a rewarding profession, but it’s also a demanding profession. One of the big factors is the electronic health record, where we’re inputting data. A recent study showed that for every hour of patient care, many clinicians are charting for two hours. That contributes to decreased satisfaction and increased burnout. AI is already being harnessed in products that record patient/provider conversations and use that information to populate the medical record. It will take time for experienced healthcare professionals to trust AI, and it will take time for our patient populations to trust AI. We’re not there yet, and AI’s not perfect. But when we learn how to use it and recognize that it’s a tool that has value in certain areas, it will free up our time and allow us to spend more quality time communicating with our patients, which leads to greater satisfaction for them and for us. In the near term, if AI and data tools can help us improve workflow and allow us more time to connect with our patients, burnout will likely decrease.

Can you expand on how you see AI tools sparking positive changes in lifelong learning and practice?

We’re in an interesting place. Historically, in most of our healthcare professions, we’ve spent so much time mastering information, much of which will change throughout our careers due to scientific advances. Patients rely on that foundational training, the pattern recognition we learn over thousands of patient cases, and the accuracy we gain through experience. AI is not replacing that — yet. But we can’t know everything, and we shouldn’t try to. We can use generative AI to fill the gaps, to present the most recent data on the best treatment targeted specifically to that patient — without having to keep up on every single journal article or new medicine that comes out. What does that mean for our roles? I think it’s rather freeing. It’s giving us tools in a less cognitively burdensome way. We don’t have to be the expert in the room on all things. We just need to know how to use AI to inform the decisions we make and customize them for a particular patient based on the most recent available information. AI can also help us collaborate around the knowledge and skills of other members of the healthcare team by accessing a larger network of information and practice patterns across professions, exposing us to new ideas or ways of approaching a problem.

How is AI transforming clinical decision-making and what does that leave for the practitioner?

Generative AI is being trained to offer clinical decision support that could help us in areas where we are still struggling, like missed and delayed diagnosis, which every year affects more than 12 million people and costs us about $1 billion. These are emerging technologies — not new, but finally getting to a place where they are accessible enough and accurate enough to integrate easily into the regular care of patients. AI tools have the potential to help us more accurately diagnose and treat a specific patient, but the healthcare professional needs to sit with that patient and coach them through the choices and decisions that are the best fit. AI could probably ask questions and even diagnose the condition, but it can’t read the emotion that a patient is sharing. It has no awareness of the patient’s specific home or social circumstances — their social determinants of health. AI will probably be good at recommending treatment options, but will it be able to help a patient decide which specific option to choose through a balanced discussion of risk vs. improvement?

We discussed some of these topics in the final session of our Foundations of Interprofessional Practice course last February. We asked our students to think about what these new technologies mean for their professional roles and responsibilities, and for their relationships with their patients and the members of their healthcare teams. The answer is we don’t know for sure — yet. But we do know that AI tools, as impactful as they are, won’t always be right. We still need to verify the information AI places in front of us. And while AI is going to move forward really quickly, our patients will still need and seek that human touch, that human and healing interaction.

Dr. Carlson ran a test of ChatGPT’s abilities by typing in a prompt to “write a 400-word column on how artificial intelligence will impact diagnostic and therapeutic accuracy.” The result, he said, “came back in about two minutes.”

RESPONSE FROM CHATGPT

Artificial intelligence (AI) has the potential to revolutionize healthcare, especially in the field of diagnostics and therapeutics. By analyzing vast amounts of data, AI algorithms can help identify patterns and insights that may be difficult for humans to detect. This has the potential to improve accuracy, reduce errors, and ultimately lead to better patient outcomes.

One area where AI can have a significant impact is in medical diagnostics. Machine learning algorithms can be trained to identify patterns in medical images, such as X-rays and MRIs, that may indicate a specific disease or condition. This can be especially helpful in cases where the diagnosis may be difficult for a human radiologist to make, such as in cases of early-stage cancer or rare conditions.

AI can also help improve the accuracy of laboratory tests. By analyzing data from a patient’s bloodwork, for example, an AI algorithm can help identify patterns that may indicate a particular disease or condition. This can be especially helpful in cases where the diagnosis may be difficult to make based on traditional lab tests alone.

In addition to diagnostics, AI can also have a significant impact on therapeutics. Machine learning algorithms can be used to help identify the most effective treatment for a particular disease or condition based on a patient’s unique characteristics, such as their genetics and medical history. This can help reduce the risk of adverse side effects and improve overall treatment outcomes.

AI can also help improve medication adherence, which is a critical factor in the success of many treatments. By analyzing data from wearable devices or other sources, AI algorithms can help identify patterns that may indicate a patient is not taking their medication as prescribed. This can help healthcare providers intervene and provide additional support to help the patient stay on track with their treatment.

Despite the potential benefits, there are also some challenges to implementing AI in healthcare. One of the biggest challenges is ensuring that the algorithms are accurate and reliable. This requires access to high-quality data and rigorous testing and validation processes.

Another challenge is ensuring that the use of AI in healthcare is ethical and transparent. This includes ensuring that patient privacy is protected and that healthcare providers are able to explain how the algorithms are making their recommendations.

In conclusion, the impact of AI on diagnostic and therapeutic accuracy has the potential to be profound. By analyzing vast amounts of data and identifying patterns that may be difficult for humans to detect, AI algorithms can help improve accuracy, reduce errors, and ultimately lead to better patient outcomes. However, it is important to ensure that the use of AI in healthcare is ethical, transparent, and grounded in the principles of patient-centered care.

From left, Norman Paschal, an RFU simulations operations specialist, with Dr. Carlson at the Center for Advanced Simulation in Healthcare on the RFU campus.

Commentary from Dr. Carlson on the column:

ChatGPT did a sound job of writing a column from known information, drawing on areas where AI has demonstrated efficacy, such as reviewing medical images and lab tests.

The chatbot, appropriately and surprisingly, brought out the potential ethical challenges related to the role of AI in patient care. However, it played it safe and offered no opinion or provocative thoughts on the matter; it simply raised the issue.

Overall, the ChatGPT column presented the facts as to what is known or has been proven on the topic, but it did not offer meaningful or thought-provoking insight that made me (the reader) deeply consider the subject. It lacked the creativity or perspective to hypothesize what might be possible for the technology in new ways. It was a retrospective look, clearly focused on the available data that could be analyzed, but it lacked the vision for how AI could be applied to new or different types of patient encounters or situations. That level of creativity appears to remain uniquely human … for now.
