By JW Amos
The buzz around artificial intelligence at HIMSS Chicago 2023 was palpable! With around 35,000 people in attendance, you could not walk ten feet without hearing something about ChatGPT.
AI was such a hot-button topic at HIMSS23 that many of the educational sessions were overbooked. Luckily, I managed to save my seat early. But if you were among those who couldn’t get into one of the AI-themed sessions—or couldn’t make it to HIMSS23 at all—I have you covered.
Here are my five key takeaways on AI from HIMSS Chicago 2023:
- Explainability is a fundamental concern around AI
- The opportunity in generative AI is enormous
- Identifying health AI standards and best practices is key
- GPT-4 often “hallucinates” false responses to queries
- Auto-GPT may be the next powerful AI tool
Explainability is a fundamental concern around AI
I thoroughly enjoyed watching the opening keynote on “Responsible AI: Prioritizing Patient Safety, Privacy, and Ethical Considerations.”
In the 90-minute discussion, Reid Blackman, author of “Ethical Machines,” and Peter Lee of Microsoft debated the ethics of AI, the sky-high popularity of ChatGPT, and the key considerations for building AI systems that benefit society.
One of the major concerns Reid raised was the opaque way most AI applications deliver answers. The stakes in healthcare are extremely high, he lamented, yet we often cannot see how an AI model arrives at its conclusions. Because the inner workings of black-box AI are so poorly understood, clinicians cannot explain a model’s results. For example, AI like ChatGPT does not explain why it reached a given conclusion on a diagnosis. Not exactly a confidence booster, huh?
The opportunity in generative AI is enormous
Generative AI is a type of artificial intelligence that can be used to create new content, including audio, code, images, text, simulations, videos, and more.
This technology has the potential to improve the efficiency and effectiveness of healthcare by enabling personalized treatments, accelerating drug development, and aiding in medical imaging. By analyzing vast amounts of data on patient demographics, medical history, and genetics, generative AI can help identify the most effective treatments for each patient’s unique condition.
For example, during the “Learn How Generative AI is Reshaping Healthcare and Life Sciences” session, SambaNova Systems demonstrated how generative AI empowers healthcare providers to improve patient outcomes with disease progression modeling. The progression models use data from clinical trials and patient records to identify the most effective treatment strategies for individual patients. In addition, the speakers detailed how generative AI can even detect the “tone” of the conversation between physicians and patients. This would help healthcare providers pick up emotional cues and respond more effectively to their patients’ needs. However, there are concerns around potential cultural biases and privacy.
Identifying health AI standards and best practices is key
Establishing robust and clear standards and best practices for healthcare AI applications is essential for ensuring the safety of patients and their data. AI-powered tools have the potential to transform the healthcare industry, but without proper standards and practices, they can also pose significant risks to patient safety and privacy.
Implementing health AI best practices is like giving our digital doctors a Hippocratic Oath. It helps ensure that they “first, do no harm” and provide accurate diagnoses and treatments.
GPT-4 often “hallucinates” false responses to queries
“Hallucination” is a term used to describe the phenomenon of language models generating responses that are not justified by their training data. It can occur when a model generates text based on patterns or associations in the data, rather than a true understanding of the meaning or context.
Peter Lee of Microsoft warned that these types of errors in healthcare have the potential to cause harm. Another panelist, Kay Firth-Butterfield, went as far as suggesting a six-month pause in the development of AI systems more powerful than OpenAI’s GPT-4 because “hallucinations” can potentially lead to incorrect diagnoses, ineffective treatments, and other medical errors.
Auto-GPT may be the next powerful AI tool
Auto-GPT leverages the flexibility of OpenAI’s latest AI models to interact with software and services online, enabling it to create its own prompts and feed them back to itself in a loop. In contrast, ChatGPT is optimized for conversational AI applications. In theory, this means the AI could perform tasks autonomously!
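To make the “feeds its own prompts back to itself” idea concrete, here is a minimal, hypothetical sketch of that kind of agent loop. This is not Auto-GPT’s actual code; the `fake_model` function is a stub standing in for calls to a real language model API, and the goal and step strings are invented for illustration.

```python
from typing import Callable

def fake_model(prompt: str) -> str:
    """Stub standing in for a real LLM API call.

    Returns a canned 'next step' for each known prompt, or a DONE signal.
    """
    steps = {
        "Goal: summarize patient intake notes": "PLAN: extract key symptoms",
        "PLAN: extract key symptoms": "PLAN: draft summary",
        "PLAN: draft summary": "DONE: summary drafted",
    }
    return steps.get(prompt, "DONE: no further steps")

def autonomous_loop(goal: str, model: Callable[[str], str],
                    max_steps: int = 10) -> list[str]:
    """Feed each model output back in as the next prompt until DONE."""
    history: list[str] = []
    prompt = goal
    for _ in range(max_steps):  # cap iterations so the loop cannot run forever
        response = model(prompt)
        history.append(response)
        if response.startswith("DONE"):
            break
        prompt = response  # the model's output becomes its own next prompt
    return history

history = autonomous_loop("Goal: summarize patient intake notes", fake_model)
print(history)
```

The key design point is the last line of the loop body: the model’s answer becomes the next prompt, so the system chains its own steps toward the goal without a human in between—exactly why a hard cap on iterations (and, in real systems, human oversight) matters.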
It’s too early to say exactly how this technology will be applied to healthcare, but the possibilities seem endless as long as the technology is used with an appropriate degree of caution!
I left the conference energized by the potential for artificial intelligence in healthcare. I am already looking forward to next year! If you would like to learn more about how to tackle the above problems by leveraging our healthcare commercial intelligence, start a free trial today!