November 30, 2023


CHIME23: How Can Healthcare Balance the Reward and Risk of AI?

In a keynote panel discussion at the event, health IT leaders discussed the benefits, risks and potential of artificial intelligence in healthcare.

Generative artificial intelligence has been the biggest buzzword in healthcare this year by far. However, while most buzzwords turn out to be short-lived, unrealized trends (blockchain, for example), the consensus among health IT leaders at the CHIME23 Fall Forum in Phoenix seemed to be that generative AI is different.

Despite unanswered questions related to the risks, ethics, legality, security and governance of generative AI (and the larger realm of AI in general), the potential to improve workflows, patient engagement and patient outcomes has the industry excited about the innovation. Healthcare organizations are diving into the generative AI space more quickly than usual for healthcare. Some are hoping to get ahead of the fervent interest from researchers and clinicians, while others seek to create efficiencies for their organizations.

To address the buzz around generative AI and other AI tools, health IT leaders at CHIME23 discussed its benefits, risks and unprecedented potential in a keynote panel discussion sponsored by CDW.

AI Adoption in Healthcare Requires Responsible Change Management

“With generative AI, you really see AI become part of the team and the healthcare delivery space,” said Cleveland Clinic Interim CIO Sarah Hatchett. “Generative AI enables all of us to work at the top of our licenses. It takes away repetitive tasks, and we become, in a co-pilot sense, editors of drafts.”

Generative AI can be used in healthcare to draft responses to patients and even communications with insurance companies, such as requests for prior authorization.

Hatchett emphasized that, as a CIO in healthcare, it’s important to educate executive teams to understand the language that defines AI and to cut through the hype so they can set a vision and strategy. However, it’s also important to educate clinicians, staff and patients.

“It’s important that we’re not just educating the executive team, but that we also educate and activate the masses. Innovation truly comes from the grassroots efforts,” she said. She added that using generative AI in native workflows to gain productivity advances will come with clinical and operational challenges, making change management critical for identifying and addressing those issues so organizations can continue to provide safe, high-quality care.

Aaron Miri, senior vice president and chief digital and information officer at Baptist Health in Jacksonville, Fla., agreed with Hatchett on the importance of change management as healthcare adopts AI tools. When his hospital deployed robots to deliver items for nurses, clinical staff members feared their jobs were at risk. Miri said the organization assured them that the robot was there to free them to work at the top of their licenses.

“Understanding and respecting that fear factor is critical. Safety and security are very important. Teaching the value proposition in a responsible manner is critical,” he added.

Balancing the Benefits and Risks of AI Use in Healthcare

AI can help the healthcare industry take huge strides to relieve the burden on clinicians and healthcare staff. That should be healthcare’s first priority where AI is concerned, says Dr. Anthony Chang, a pediatric cardiologist who serves as chief intelligence and innovation officer and medical director of the Heart Failure Program at Children’s Hospital of Orange County in California.

He reminded attendees of the 2016 Go matches between Google’s AI-powered program AlphaGo and top Go player Lee Sedol. The AI defeated Lee in four of five matches. According to Chang, in one of the matches, the AI used a move that no human would think to use. Lee copied that move to beat the AI in a later match.

“What I see for the future of AI in healthcare is to push clinicians to a higher level of performance,” he said.

However, Chang emphasized the need for more AI education so that healthcare can reap the benefits of AI.

“The AI healthcare agenda is driven by human-to-human relationships,” he said, adding that clinicians need to learn enough about AI to be conversational. “Ideally, we need tens of thousands of clinicians to be bilingual, to speak healthcare just as easily as they speak AI. It could take five to 10 years before we get to that point.”

Chang said healthcare organizations need to find two to three convincing use cases for clinicians to come on board.

Healthcare has learned lessons from past AI initiatives (not generative AI) used in clinical settings. A well-known use case is the AI model developed to catch sepsis symptoms earlier than traditional methods. Miri said that while those models were flawed, the industry iterated.

Another early AI use case in healthcare that didn’t work was a radiology algorithm that flagged possible tumors, Chang said; the data set the model was trained on wasn’t applicable to the environments in which it was deployed.

“AI models are like politicians. They can’t be effective and popular at the same time,” he joked. “Also, if there are claims that a model is 99 percent accurate, be skeptical. You can’t generalize an AI model from one hospital to another and expect the same performance. Either make the algorithm accurate in your own hospital or expect less performance.”

Cleveland Clinic is working on a digital strategy that lays out hot-topic AI use cases such as ambient listening and generative AI to aid Epic In-Basket replies, said Hatchett. As healthcare organizations prepare for the future of AI, she said, it’s important to create a foundation and a governance framework that includes data connections to build future solutions that don’t currently exist. She believes the industry needs to expedite governance to avoid stifling innovation.

“The longer we take a negative position in this area, the more we lose out on innovative ideas. We need to get working guidance in place, so people feel safe to explore the possibilities while still being in compliance,” said Hatchett.

At Baptist Health, Miri said the organization has an AI governance committee that considers the value proposition, clinical benefits and the ethical and security implications of AI use cases. Once implementation occurs, he said, it’s important to share learnings with the community.

“The reality is that it’s up to each of us to do the process efficiency work first and then worry about the technology,” he said. “We’re getting so caught up in the toys and the shiny versus the real work, which is people, process and then technology.”

As Healthcare Interest in AI Grows, So Do Calls for Regulation

In addition to his clinical work and healthcare leadership, Chang also has a master’s degree in biomedical data science and AI from the Stanford University School of Medicine. When he graduated with his degree in 2015, he said, there were few applications of AI in healthcare.

“People were not aware of AI,” he said. “As a practicing clinician, I’ve had the opportunity to be a translator of that knowledge into healthcare.”

Chang founded AIMed in 2017 to address AI adoption in healthcare. He expected about 5,000 attendees at his first conference, but only 400 people showed up.

“People realized it would be big, though, and now we have an annual meeting,” said Chang.

Clinicians, data scientists, AI vendors, investors and other industry stakeholders attend to learn from each other. However, Chang pointed out that there is still a huge gap in AI education. The American Board of Artificial Intelligence in Medicine offers courses for healthcare executives, clinicians, patients and industry partners at all levels of knowledge.

As interest in healthcare-related AI increases, voices grow louder calling for more thought and action to prevent bias, security vulnerabilities and other potential risks to security, safety and health.

On October 30, President Biden issued an executive order on safe, secure and trustworthy AI. It calls for the Department of Health and Human Services to establish a safety program to receive reports of unsafe or harmful healthcare practices related to AI and to take steps to remedy those practices accordingly.

Miri pointed out that the order focuses on ensuring the federal workforce is formally trained on AI specific to healthcare, that organizations don’t duplicate efforts and that there is transparency.

“The executive order says to stop and see how we can do this responsibly as one unit. What happens is to be determined,” he said.

While it’s impressive how quickly generative AI has been adopted in healthcare, Chang said this is only the beginning.

“It took two decades to get the Model T on the road because no one knew how to drive. I think we have the same issue with AI,” Chang said, because healthcare doesn’t have many AI experts in the industry. “Hopefully it doesn’t take two decades but only takes a few years, because patients are not getting the service they need if we delay this.”

Chang believes now is the best time for healthcare to think about embracing AI technology. He described the present as healthcare’s Apollo 13 moment.

“It’s damaged and looking pretty bad to the point that a lot of senior clinicians have left the industry, but the younger generation is coming on in full force,” he said. “More students have applied to medical school in the past year in the U.S. than ever before. We owe it to the next generation to give them an easier time in practice.”

“This could be healthcare’s finest hour,” said Chang.

Story by Jordan Scott, an editor for HealthTech with experience in journalism and B2B publishing.
