By Nathan Gunn, Contributor, Fierce Healthcare | August 28, 2019
The U.K.’s National Health Service announced in July a partnership with Amazon’s Alexa, giving the digital assistant permission to provide medical advice. The goal of this partnership is to reduce stress on providers by helping patients get the medical information they need through Alexa in circumstances that don’t require a physician.
But is turning to Alexa, or its contemporaries Siri and Google Assistant, the solution to relieving an overwhelmed health system? There’s no doubt these highly successful digital assistants offer consumers a wide range of voice-enabled features, tools and capabilities.
Yet none of them were built for medicine.
For that reason, it’s not surprising that, shortly after tech giants decided to add healthcare to their digital assistants’ repertoires (Alexa added HIPAA-compliant healthcare “skills” in April), the assistants aren’t yet fully up to the challenges presented by the complex world of health and medicine. A recent study that compared these digital assistants found that two out of three struggled just to recognize medication names.
RELATED: Healthcare network rolls out voice-enabled digital assistant across 1,500 practices
Doctors are highly trained, highly educated and highly experienced in their practice of medicine. Tacking clinical “skills” onto consumer-facing digital assistants cannot possibly match their expertise. That’s not to say we should write off the potential of these technologies, though. It’s clear from these products, and others like them, that pairing voice technology with machine learning and artificial intelligence is a powerful combination with great potential across a variety of industries.
But to harness the power of these technologies in the medical field, we need to send Alexa to medical school.
Why digital assistants need medical training
Medical training has three important stages that prepare students to become doctors. These same stages can be a guide for designing and building technology solutions like digital assistants for use in medicine.
In the first stage of medical school, students learn the foundational language, concepts and sciences of medicine—biochemistry, anatomy, genetics and physiology. This preclinical work is essential before future doctors begin managing the day-to-day workflow and complex clinical challenges that come with practicing medicine.
RELATED: Doctor Alexa will see you now: Is Amazon primed to come to your rescue?
Similarly, when building technology for the medical field, we first have to teach it the basic concepts and language of medicine. In the case of digital assistants, this means designing voice technology to understand medical vernacular and developing machine learning algorithms to accurately translate that language into digital data.
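To make that first step concrete, here is a minimal sketch in Python of how a transcribed medical term might be normalized into structured data. It is purely illustrative and not drawn from Alexa, Suki or any commercial product; the toy medication list and the normalize_medication helper are assumptions for the example, and a real system would match against a complete drug terminology rather than a hard-coded list.

```python
import difflib

# Toy list of canonical medication names (illustrative only; a real system
# would use a complete drug vocabulary rather than a hard-coded list).
MEDICATIONS = ["metformin", "lisinopril", "atorvastatin", "amoxicillin", "ibuprofen"]

def normalize_medication(spoken_term):
    """Map a possibly misheard spoken term to the closest canonical medication name."""
    matches = difflib.get_close_matches(spoken_term.lower(), MEDICATIONS, n=1, cutoff=0.6)
    return matches[0] if matches else None

# A speech recognizer may emit a slightly garbled transcription;
# fuzzy matching recovers the intended term or admits it does not know.
print(normalize_medication("lysinopril"))   # -> lisinopril
print(normalize_medication("ibuprofin"))    # -> ibuprofen
print(normalize_medication("warfarin"))     # -> None (not in this toy list)
```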
In the second stage of medical school, students begin rotations in hospitals and clinics. This stage contextualizes the basic science and facts students learned in the classroom by building their familiarity with clinical settings and the practical art of medicine. This is the stage when students, armed with that academic knowledge, begin learning to apply it in clinical practice.
When designing technology, this is perhaps the most important stage. It’s one thing to create a digital assistant that can understand medical terminology and instruction; it’s another to use that understanding to effectively complete tasks in clinical settings.
RELATED: Analysis: Why Alexa’s bedside manner is bad for healthcare
For digital assistants, this means training them on the workflow and contextual aspects of their intended setting, whether they are designed for patients or for physicians in a clinic or operating room. Put in the simplest terms, even the brightest student adds no value to a practice if they get lost on their way to the exam room or don’t know how to write a cogent clinical note.
In the case of a digital assistant for physicians, this may require teaching it the diagnostic process, how an electronic health record functions, or how to recognize patterns in patient health data. It’s at this stage that a digital assistant must not only understand terms but also begin to use artificial intelligence to distinguish between two similar terms based on clinical context, such as “peroneal” and “perineal.”
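As a purely illustrative sketch of that idea, and not a description of how any commercial assistant works, the snippet below picks between “peroneal” and “perineal” by scoring each candidate against the context words in a transcript. The hard-coded cue lists are an assumption for the example; a real system would learn these associations from clinical text.

```python
# Hypothetical context cues for two easily confused anatomical terms.
# A production system would learn these associations from clinical text
# rather than hard-coding them.
CONTEXT_CUES = {
    "peroneal": {"ankle", "fibula", "foot", "dorsiflexion", "leg"},
    "perineal": {"pelvic", "childbirth", "laceration", "episiotomy", "groin"},
}

def disambiguate(candidates, transcript):
    """Return the candidate term whose context cues overlap most with the transcript."""
    words = set(transcript.lower().split())
    return max(candidates, key=lambda term: len(CONTEXT_CUES[term] & words))

note = "patient reports numbness in the left foot with weak ankle dorsiflexion"
print(disambiguate(["peroneal", "perineal"], note))  # -> peroneal
```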
Finally, upon completion of medical school, students begin their residency. Here, they apply medical knowledge and clinical lessons learned throughout earlier rotations under the guidance of senior physician-teachers. A smart, hardworking medical student is an invaluable assistant for physicians, but there’s no substitute for the real-world application of clinical concepts in developing a fully trained physician.
RELATED: Health systems launch new HIPAA-compliant Amazon Alexa voice tools
For technology solutions, a residency can be compared to an initial pilot or a first commercial rollout. This is the time to test the solution in the field, implement it in practice, and measure and evaluate the results. Digital assistants use machine learning to get smarter, more personalized and more accurate with repetition. That’s why it’s critical to test these solutions with an increasing volume of users in a variety of roles and settings. A digital assistant must learn the hundreds of different ways someone may describe the symptoms of a cold, for instance, and it can only do so by working through thousands of patient interactions.
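To picture why that volume matters, consider the toy matcher below: it classifies an utterance by comparing it against stored example phrasings, so every additional real interaction added to the example set makes it slightly better at recognizing the next patient’s wording. The intent names and phrases are invented for illustration and are not taken from Alexa, Suki or any deployed system.

```python
import difflib

# Invented example phrasings per intent; in a deployed assistant this set
# would grow with every new patient interaction it works through.
INTENT_EXAMPLES = {
    "cold_symptoms": [
        "i have a runny nose and a sore throat",
        "i have been sneezing and coughing all day",
        "i think i am coming down with a cold",
    ],
    "refill_request": [
        "i need a refill on my prescription",
        "can you renew my medication",
    ],
}

def classify(utterance):
    """Return the intent whose stored example phrasing is most similar to the utterance."""
    best_intent, best_score = None, 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            score = difflib.SequenceMatcher(None, utterance.lower(), example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

print(classify("I keep sneezing and my throat hurts"))  # -> cold_symptoms
```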
Digital assistants are already ordering our groceries, checking off our to-do lists and making millions of lives easier each day. While digital assistants are never going to replace physicians, just like a well-trained medical student, they can be invaluable for doctors and patients alike. They just need to go to school first.
Nathan Gunn, M.D., is the chief operating officer at Suki, a digital assistant designed for doctors by doctors. After receiving his medical degree from the University of California, Davis, Dr. Gunn completed his residency in internal medicine at the University of California, San Francisco.