By O’Ryan Johnson | June 11, 2018
Accenture released a survey of health care executives Monday that shows their eagerness to embrace AI is matched by their fear of having to answer for the decisions it makes.
“If you are using some sort of deep learning neural network, where it is unclear why you have received a particular determination or outcome, that will pose a potential problem in explaining why you took that action,” Brian Kalis, managing director of digital health at Accenture, told CRN. “So there is some work that needs to be done now, in the early stages, to think through how you handle that, how you can come up with solutions that can explain why you took an action that was informed by artificial intelligence. How do you actually think through those decisions?”
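How such an explanation might look in practice depends heavily on the model. For a simple linear classifier the answer can be read straight off the coefficients, and that case is enough to illustrate the idea: the sketch below breaks one hypothetical readmission-risk prediction into per-feature contributions. The features and data are invented for illustration, and a deep learning network of the kind Kalis describes would need dedicated attribution tooling rather than this shortcut.

```python
# Minimal sketch of per-prediction explanation: for a linear model, each
# feature's contribution to the score is simply coefficient * feature value.
# Illustration only -- the features and data below are hypothetical, and deep
# networks would require dedicated attribution tooling instead of this shortcut.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "prior_admissions", "a1c_level", "days_since_discharge"]

# Hypothetical training data: 200 patients, binary "readmitted within 30 days" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(patient):
    """Break one prediction down into per-feature contributions to the log-odds."""
    contributions = model.coef_[0] * patient
    score = contributions.sum() + model.intercept_[0]
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name:>22}: {c:+.3f}")
    print(f"{'total log-odds':>22}: {score:+.3f}")

explain(X[0])
```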
Based on the responses to the survey, 81 percent of health organizations said they are not prepared to face the social or liability issues that would require them to explain their AI-based actions and decisions, Kalis said.
The Accenture Digital Health Technology Vision 2018 report surveyed executives at 100 health care organizations (health payers and health systems worth between $500 million and $6 billion) to see which tech trends they expect to grow over the next three to five years.
“The intent of why we released this, and how it’s used, is as an input to health care executives as part of their overall strategic decision-making process,” Kalis said. “So given that this is three to five years out, you can look at these trends and it becomes a way of thinking through your overall strategic time frame, to think through, across business and technology, the implications to your business and your future direction. This year, the overall theme is the emergence of the intelligent enterprise.”
Another issue executives are talking about is how to use AI to analyze data ethically, without infringing on the rights of patients or the owners of the data.
“Just because you have information and can generate insights from it, it doesn’t necessarily mean you always should,” Kalis said. “So responsible AI is thinking through the data ethics considerations of the things you can do, to determine how and what you should do. A lot of that starts with something like creating a data ethics policy.”
However, AI is only as good as the data it processes, so data provenance becomes a crucial component of an organization’s technology strategy: knowing where data was created, preserving its integrity as it moves through a data supply chain, and maintaining data quality.
“As we move into a world of generating insights from information, whether it’s using traditional techniques or more advanced machine learning and deep learning techniques, the concept of knowing where data came from and ensuring its accuracy and quality is critical to avoiding poor insights from that information,” Kalis said. “So if you’re making back-office, front-office as well as clinical decisions from information, it’s critical to ensure that information is unbiased and coming from a trusted source to avoid skewing the insights and the decisions you make.”
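What carrying provenance through a data supply chain can look like in practice is sketched below: a small lineage record that travels with a dataset, holding its source, a checksum for integrity, and an audit trail of transformations. The record structure, field names and pipeline step are hypothetical illustrations, not a description of any Accenture or vendor tooling.

```python
# Minimal sketch of a data provenance record carried alongside a dataset as it
# moves through a data supply chain. Field names and structure are hypothetical.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    source_system: str                      # where the data was created
    created_at: str                         # creation timestamp (ISO 8601)
    checksum: str                           # integrity check of the current payload
    transformations: list = field(default_factory=list)  # audit trail of processing steps

def checksum(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def record_step(record: ProvenanceRecord, step: str, payload: bytes) -> None:
    """Log a processing step and refresh the checksum so downstream consumers
    can verify that the data they receive matches what the step produced."""
    record.transformations.append({
        "step": step,
        "at": datetime.now(timezone.utc).isoformat(),
        "checksum": checksum(payload),
    })
    record.checksum = checksum(payload)

# Hypothetical usage: a raw claims extract flowing into an analytics pipeline.
raw = json.dumps({"claim_id": 123, "amount": 250.0}).encode()
rec = ProvenanceRecord(
    source_system="claims_warehouse",
    created_at=datetime.now(timezone.utc).isoformat(),
    checksum=checksum(raw),
)
cleaned = json.dumps({"claim_id": 123, "amount_usd": 250.0}).encode()
record_step(rec, "normalize_currency_fields", cleaned)
print(rec)
```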
Among the other tech trends cited by health care executives are extended reality (both augmented reality and virtual reality), frictionless business, and the internet of things. In terms of IoT, hospitals are combining robotics, extended reality and connected devices to create intelligent environments for patients.
“This might include an ICU hospital room that automatically manages patient fluids,” Kalis said in an email to CRN. “For example, Autonomous Healthcare (formerly AreteX Systems) uses machine learning tools housed on medical equipment and devices to monitor patient vital signs and automatically dispense and adjust fluid drips for individuals in critical care.”
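The closed-loop pattern behind that example, monitor a vital sign, apply a decision rule, adjust the infusion, can be sketched in a few lines. The thresholds, device interface and proportional rule below are deliberately simplified placeholders; they are not Autonomous Healthcare’s actual algorithms and not a clinically valid protocol.

```python
# Deliberately simplified sketch of a closed-loop monitoring pattern: read a
# vital sign, apply a decision rule, adjust an infusion rate. The thresholds,
# device interface and rule are hypothetical placeholders, NOT a clinical
# protocol and NOT Autonomous Healthcare's actual method.
import random
import time

TARGET_MAP_MMHG = (65, 90)   # hypothetical acceptable range for mean arterial pressure

def read_mean_arterial_pressure() -> float:
    """Stand-in for a bedside monitor feed (random values for illustration)."""
    return random.uniform(55, 100)

def adjust_fluid_rate(current_rate_ml_hr: float, map_mmhg: float) -> float:
    """Toy rule: nudge the drip rate toward the target pressure range."""
    low, high = TARGET_MAP_MMHG
    if map_mmhg < low:
        return current_rate_ml_hr + 10.0              # pressure low -> increase fluids
    if map_mmhg > high:
        return max(0.0, current_rate_ml_hr - 10.0)    # pressure high -> back off
    return current_rate_ml_hr                         # in range -> hold steady

rate = 50.0  # hypothetical starting rate in ml/hr
for _ in range(5):
    reading = read_mean_arterial_pressure()
    rate = adjust_fluid_rate(rate, reading)
    print(f"MAP {reading:5.1f} mmHg -> rate {rate:5.1f} ml/hr")
    time.sleep(0.1)
```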