How AI will – and won’t – change health care in 2024
Muhammad Mamdani understands why people are wary of artificial intelligence having a say in their health care – but he’s even more concerned about the patients who are waiting for the potentially life-saving benefits of AI-assisted medicine.
As vice-president, data science and advanced analytics at Unity Health Toronto, Mamdani has overseen the implementation of more than 50 AI-powered solutions into clinical practice – from an early warning system that uses electronic medical records to predict a patient’s risk of death or requiring intensive care, to a brain-bleed detection tool that can help fast-track access to critical treatment.
And he says there’s more to come in 2024.
“I hope to see more AI being used for clinical decision making,” says Mamdani, who is a professor in the department of medicine at the 91Թ’s Temerty Faculty of Medicine and the director of the Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM).
Yet, despite AI’s potential to transform patient care, it isn’t a cure-all for the underlying problems in Canada’s health system, warns Mamdani, who holds cross-appointments in 91Թ’s Leslie Dan Faculty of Pharmacy and the Institute of Health Policy, Management and Evaluation at the Dalla Lana School of Public Health.
Mamdani recently spoke to 91Թ News about how AI will – and won’t – shape health care in 2024.
How do you expect AI will transform health care in 2024?
For the past few years, we’ve been in this era of AI hype in health care. A lot of talk, some people doing a few small things here and there, but not really a big splash – and I’m not sure we’ll see a big splash in 2024. A lot of organizations are actively getting into this space, but I would say we’re still at least a few years away from seeing really, really big changes. Instead, I think we’ll see a more gradual adoption of AI in health care.
In 2024, I hope to see more AI being used for clinical decision making. Right now, we’re seeing it used more for non-clinical or administrative tasks. For example, quite a few primary-care clinics and outpatient clinics are using AI scribes that can “listen” to a conversation between a doctor and a patient, transcribe the visit and provide a really good summary note.
[Doctors] are notorious for not writing everything down, and that’s very unfortunate because medicine is very data- and information-driven. When a doctor is talking with a patient, they’re focused on the patient – as they should be – but when the patient leaves, they might have forgotten many of the things that were discussed or didn’t have time to write things down. Then you have an imperfect data set the next time around.
We’re also starting to see tools that can take these transcriptions to suggest diagnoses or recommend medications and, with the doctor’s OK, send prescriptions to the pharmacy.
This coming year, [at Unity Health], we’re working on creating a multimodal data environment that incorporates not only clinical data, but also medical imaging data and waveform data from monitors and ventilators that we’re able to access in real time. For example, you could go into the ICU and constantly ingest data from ventilators to understand if a patient is going to have trouble breathing in the next 20 minutes.
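To make that real-time ingestion idea concrete, here is a minimal sketch in Python of how such a monitoring loop might look. The waveform feed, the 20-minute risk model and the alert threshold are all hypothetical stand-ins for illustration, not a description of Unity Health’s actual pipeline.

```python
from collections import deque
from statistics import mean, stdev
import random
import time

# Hypothetical: score risk from the most recent 30 samples (about 30 seconds at 1 Hz).
WINDOW_SIZE = 30


def read_ventilator_sample() -> float:
    """Stand-in for a real-time waveform feed from a ventilator or bedside monitor."""
    return random.gauss(20.0, 3.0)  # simulated respiratory-rate-like value


def risk_of_breathing_trouble(window: deque) -> float:
    """Toy stand-in for a trained model that scores the next 20 minutes.

    A real system would apply a model trained on labelled ICU data; here,
    rising level and variability are simply mapped to a 0-1 score.
    """
    level, spread = mean(window), stdev(window)
    return min(1.0, max(0.0, (level - 20.0) / 20.0 + spread / 10.0))


def monitor(alert_threshold: float = 0.7, iterations: int = 200) -> None:
    """Continuously ingest samples and flag patients at risk."""
    window: deque = deque(maxlen=WINDOW_SIZE)
    for _ in range(iterations):  # a real service would run indefinitely
        window.append(read_ventilator_sample())
        if len(window) == window.maxlen:
            score = risk_of_breathing_trouble(window)
            if score >= alert_threshold:
                print(f"ALERT: risk score {score:.2f} -- notify the care team")
        time.sleep(0.01)  # shortened from a roughly one-second cadence for the demo


if __name__ == "__main__":
    monitor()
```

The sliding window is the key design choice: the model only ever sees the most recent physiology, which is what makes a short-horizon prediction like “trouble breathing in the next 20 minutes” meaningful.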
What are some of the ways AI could improve patient care?
The potential is massive for patient care in several areas. One is around chatbot-style solutions where you can ask questions about health-related issues. There are many [clinics] now where you can go on to a website and say, “I have these symptoms. What do you think?”
The other area that I think will probably be more useful is around continuity of care when a patient leaves the hospital or clinic. Oftentimes, patients complain that they don’t have enough information or it wasn’t explained to them what to do next. You’re in this institution undergoing all of these tests and procedures, then when you leave, the doctor tells you all of these things you need to do, and you’re basically on your own – and you may not remember half of what you were told.
Poor post-discharge communication and management is one of the reasons we see a lot of patients being readmitted to hospitals. What if we had an AI chatbot that could stay in contact with the patient, summarize their treatment plan, answer their questions and tell them to call their doctor when necessary?
What are the most significant challenges you foresee in implementing AI technologies in health-care settings?
We should temper our expectations for AI, because when you deal with a health-care system, you have to try to solve the system problem first and use technology to enable appropriate solutions.
Take, for example, the problems we see because of a lack of information sharing between health providers. AI is only as good as the data it’s given, so if a patient goes to hospital X for a problem that was treated at hospital Y a month earlier, and the two hospitals don’t talk to each other, [hospital X’s] AI will be blind to what happened at hospital Y.
As a province, if we got together and enabled these data sources to talk to each other in real time, AI would be way more powerful.
What are some of the ethical considerations that need to be taken into account when deploying AI in health care?
Obviously, you have to have a robust environment to protect privacy and security for patients. But at the same time, you have to have a progressive data governance framework that allows that data to be accessed by the people who need it.
Another concern is making sure your algorithms perform well among various subgroups. For example, do they perform just as well among young versus old, sick versus not sick, males versus females? The problem is we don’t have data on all these subgroups. So how do we know that our algorithms perform just as well on one race versus another, or across all genders, when we don’t have such data readily available?
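As a rough illustration of what that kind of subgroup audit can look like, the sketch below computes a discrimination metric (AUROC) separately for each group. The column names, the groups and the synthetic numbers are assumptions made for the example, not any hospital’s actual data or process.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score


def auroc_by_subgroup(df: pd.DataFrame, group_col: str) -> pd.Series:
    """AUROC computed separately for each subgroup in `group_col`.

    Assumes hypothetical columns `y_true` (observed outcome, 0/1) and
    `y_score` (the model's predicted probability).
    """
    scores = {}
    for name, group in df.groupby(group_col):
        scores[name] = roc_auc_score(group["y_true"], group["y_score"])
    return pd.Series(scores, name="auroc")


if __name__ == "__main__":
    # Tiny synthetic data, purely for illustration.
    df = pd.DataFrame({
        "sex":     ["F", "F", "F", "F", "M", "M", "M", "M"],
        "y_true":  [0, 1, 0, 1, 0, 1, 0, 1],
        "y_score": [0.2, 0.9, 0.3, 0.8, 0.4, 0.6, 0.7, 0.5],
    })
    per_group = auroc_by_subgroup(df, "sex")
    print(per_group)
    # A large gap between groups is a cue to examine the training data.
    print("AUROC gap:", per_group.max() - per_group.min())
```

The same pattern extends to any other metric or grouping variable; the hard part, as noted above, is having enough data in each subgroup for the comparison to mean anything.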
The other challenges are going to be around scaling the solutions from one hospital to another, or to an entire system. Patient care and processes may differ considerably, and AI solutions may need to be tailored to the local context. Further, while these AI solutions are really exciting, they can be very expensive. So who pays for them?
At Unity Health, we’ve deployed more than 50 AI solutions into clinical practice, with more going live soon. Other hospitals should have these kinds of tools, but not all of them have the resources to develop and deploy AI solutions, and patients are suffering as a result.
What would you say to people who are apprehensive about “Dr. AI”?
That kind of apprehension is very much justified. I get it. There are going to be some failures as well as some successes. But I don’t think this is going away. The potential benefits are far too great to ignore. We need to deploy AI in health care thoughtfully and responsibly. AI is here and it will permeate health care – how it permeates is yet to be determined.