By Paddy Padmanabhan, CIO | June 5, 2019
Voice is the most obvious next step of user interface that is going to radically change the way we interact with technology in healthcare services. You ain’t heard nothin’ yet.
Amazon made news recently when details of its proposed emotion-sensing wearable device came to light. Interestingly, the device is being developed by the Alexa team, which is using voice-recognition technology to detect the emotional state of the consumer. This is not the first time voice recognition has been used to diagnose a medical condition. Studies have established the use of voice in detecting early stages of Parkinson’s disease, and it is logical for the same technologies to be used to detect and treat a range of conditions.
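To make the idea concrete, here is a minimal sketch of how voice-based screening might work in practice, using the open-source librosa and scikit-learn libraries. The feature set, file names, and model choice are illustrative assumptions, not a description of what the published studies actually used.

```python
# Illustrative sketch: classifying voice recordings with acoustic features.
# Assumptions: labeled WAV files exist locally; the features and model are
# simplified stand-ins, not the methods used in the Parkinson's studies.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def acoustic_features(path):
    """Extract a small acoustic feature vector from one voice recording."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    # Mean and variance of each coefficient summarize the whole recording.
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Hypothetical dataset: (file, label) pairs, label 1 = early Parkinson's.
# A real study would need many recordings per class for cross-validation.
recordings = [("subject_001.wav", 0), ("subject_002.wav", 1)]  # ...
X = np.array([acoustic_features(f) for f, _ in recordings])
y = np.array([label for _, label in recordings])

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())  # rough screening accuracy
```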
Voice is potentially the next wave in the UI and technology platform shift, following the web and mobile waves of the past two decades. Indeed, voice may be the next battleground for big tech firms as consumers increasingly use one of the dominant voice interfaces to access information, much as Google’s text-based search engine dominated consumers’ attention for years. Amazon, with reportedly over 10,000 associates on its Alexa team, is not alone in turning voice-recognition technology toward healthcare problems. Google (Google Assistant), Apple (Siri), and Microsoft (Cortana) are also investing billions in voice-based personal assistants to gain a dominant position in the market (or at least not be left behind).
However, consumer-grade voice-recognition tasks, such as asking for a restaurant recommendation, do not translate directly to deployment of the technology in a healthcare context. Healthcare is bound by HIPAA data privacy rules, which govern what information can be shared, with whom, and how. Amazon recently released a set of Alexa “skills” that transmit and receive protected health information and is running pilots with a handful of health systems. While many of the skills are relatively mundane services, such as status updates for prescription refills, these pilot projects set up a test bed for secure communication for a range of complex care management protocols down the road.
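As a rough illustration of what such a skill looks like at the code level, here is a minimal sketch using the Alexa Skills Kit SDK for Python. The intent name and the pharmacy-lookup function are hypothetical, and a real skill handling protected health information would also need to run in Amazon’s HIPAA-eligible skills program with proper patient authentication.

```python
# Minimal sketch of an Alexa skill handler for a refill-status request.
# "RefillStatusIntent" and get_refill_status() are hypothetical; a real
# PHI-handling skill must be enrolled in Alexa's HIPAA-eligible program.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

def get_refill_status(patient_id):
    # Placeholder for a secure call to the pharmacy system of record.
    return "Your prescription refill is ready for pickup."

class RefillStatusHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("RefillStatusIntent")(handler_input)

    def handle(self, handler_input):
        # In production, the patient would be verified via account linking.
        status = get_refill_status(patient_id="demo-patient")
        return handler_input.response_builder.speak(status).response

sb = SkillBuilder()
sb.add_request_handler(RefillStatusHandler())
handler = sb.lambda_handler()  # entry point when deployed on AWS Lambda
```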
Dwight Raum, CTO of Johns Hopkins Medicine, believes voice is the future of healthcare delivery: “Voice is the most obvious next step of user interface that is going to radically change the way we interact with technology.” Let us look at a few scenarios of how this can play out in healthcare:
Access to healthcare services
We are fast approaching a time when consumers will want their healthcare delivered at a time, place, and manner of their choosing (in some cases, even before they realize they need it, if AI algorithms have their way). North Carolina-based Atrium Health’s Alexa pilot uses voice recognition to help customers identify a nearby urgent care center and get a same-day appointment. This will significantly improve access and drive consumer satisfaction, not to mention the bottom-line impact from increased revenues for the health system.
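The lookup behind a “find the nearest urgent care center” request could be as simple as the sketch below. The facility list and coordinates are invented for illustration; Atrium Health’s actual pilot would draw on the caller’s real location and live scheduling data.

```python
# Illustrative sketch: find the nearest urgent care center to a caller.
# Facility data is invented; a real skill would query live location and
# scheduling services rather than a hard-coded list.
from math import radians, sin, cos, asin, sqrt

FACILITIES = [  # (name, latitude, longitude) - hypothetical examples
    ("Uptown Urgent Care", 35.2271, -80.8431),
    ("Lakeside Urgent Care", 35.1107, -80.7243),
]

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3956 * 2 * asin(sqrt(a))

def nearest_facility(user_lat, user_lon):
    return min(FACILITIES,
               key=lambda f: haversine_miles(user_lat, user_lon, f[1], f[2]))

name, lat, lon = nearest_facility(35.19, -80.79)
print(f"The closest urgent care center is {name}.")
```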
Caregiver productivity
There have been ample studies about how the advent of electronic health record (EHR) systems, and technology in general, has increased the workload for physicians and other caregivers. We are at a point where the next wave of technologies is expected to reduce the burden on caregivers, and voice fits neatly into the picture. John Kravitz, CIO of Geisinger Health System, believes that “it is one of those areas where, in our hospital setting, we want to have the ability and hopefully take some of the stress off our nurses, where a patient can speak to an Alexa or Google Home type voice-enabled device. Nurses should be able to interact with the devices and hopefully be able to serve our customer population more effectively.”
Customer experience
Delivering superior experiences is one of the primary goals of any digital transformation program. As we enter the era of zero-UI technology, where a touchscreen is replaced by a natural language interface such as voice, technology becomes less front and center and dissolves into the background of our everyday experience. The Alexa skills described earlier will achieve that in the near term for a range of simple tasks. Over time, as voice-recognition software becomes more sophisticated with the help of AI and machine learning, it will adjust to accents and a broader range of terminologies. The latter becomes critical in a medical context, where clinical terminology must be progressively built into the lexicon of the interface.
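One concrete way this plays out today is that most commercial speech APIs let developers bias recognition toward domain vocabulary. The sketch below uses Google Cloud Speech-to-Text phrase hints as one example; the audio file and drug names are illustrative assumptions, and other vendors offer comparable custom-vocabulary features.

```python
# Illustrative sketch: biasing a speech recognizer toward clinical terms
# using Google Cloud Speech-to-Text phrase hints. Requires Google Cloud
# credentials; the audio file and phrase list are assumptions.
from google.cloud import speech_v1 as speech

client = speech.SpeechClient()

with open("dictation_sample.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    # Phrase hints raise the likelihood that clinical vocabulary is
    # transcribed correctly instead of a similar-sounding common word.
    speech_contexts=[speech.SpeechContext(
        phrases=["metformin", "tachycardia", "hemoglobin A1c"])],
)

for result in client.recognize(config=config, audio=audio).results:
    print(result.alternatives[0].transcript)
```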
If there is a major roadblock to increased use of voice-enabled services, it is concerns around privacy, especially when data on voice-based interactions is stored in the cloud by one of the big tech firms. Concerns about Amazon’s Alexa “snooping” on consumers are also likely to give pause to efforts to expand use of the technology until sufficient privacy safeguards are established. Over time these concerns will be addressed, much as concerns about storing patient data in cloud infrastructure have largely faded today. In any case, voice is already a part of consumers’ lives for a variety of other services, such as banking and retail.