By Bill Siwicki, Healthcare IT News | November 27, 2018
The presence of artificial intelligence-powered IT is growing in healthcare. More and more CIOs and caregivers are finding new uses for AI to support the humans delivering care, both behind the scenes and at the point of care.
Dashboards are popular tools for healthcare executives and caregivers to track and measure technologies in action. AI is no different in this respect, though its complexity sets it apart from other technologies. So some forward-looking CIOs are putting together AI dashboards, or beginning to think about what would make such dashboards useful tools.
What AI dashboards track
So what would an AI dashboard track? To some extent, this depends on what a hospital hopes to accomplish with its deployed AI. For example, are the use-cases targeting inpatient or outpatient metrics, operational functions or clinical ones?
“That said, there are a few metrics that are likely to be common to AI dashboards in general,” said Dr. Craig Monsen, chief medical information officer at Atrius Health in Newton, Massachusetts. “Usage – in effect, this is how many times a given AI is consulted. AI that never triggers is not doing much to improve patient care.”
Actions taken – this is going to depend to a great extent on how the AI is deployed, but each AI deployment should drive clear decision-making and action, he explained. Thus, tracking the proportion of uses that lead to presumed action – however that is defined – is an important surveillance indicator, he added.
“Stationarity – AI performance theoretically improves over time, but this is only if the data feeding the models are updated to reflect changing practice patterns and operational realities,” Dr. Monsen said. “In fact, performance can decline if AI is not regularly retrained. A metric that indicates performance over time on a validation sample is a good way to ensure that the AI is not getting stale.”
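To make those three metrics concrete, here is a minimal sketch of how usage, action rate and a stationarity check might be computed from an event log. The data structure, field names and staleness threshold are illustrative assumptions, not anything Atrius Health has described.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIEvent:
    """Hypothetical record of one AI consultation; fields are illustrative."""
    day: date
    triggered: bool      # did the AI fire / get consulted?
    action_taken: bool   # did a user act on the output?

def usage(events: list[AIEvent]) -> int:
    """Usage: how many times the AI was consulted."""
    return sum(e.triggered for e in events)

def action_rate(events: list[AIEvent]) -> float:
    """Actions taken: proportion of uses that led to a presumed action."""
    fired = [e for e in events if e.triggered]
    return sum(e.action_taken for e in fired) / len(fired) if fired else 0.0

def is_stale(validation_auc_by_month: list[float], tolerance: float = 0.05) -> bool:
    """Stationarity: flag the model when performance on a fixed validation
    sample has drifted below its historical best by more than `tolerance`."""
    if not validation_auc_by_month:
        return False
    return max(validation_auc_by_month) - validation_auc_by_month[-1] > tolerance
```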
NewYork-Presbyterian is a leader in the use of artificial intelligence in health system operations and was recognized by Fast Company as a top ten user of AI across all industries. It has a number of artificial intelligence and robotic process automation initiatives underway and uses an innovation dashboard to track its AI work.
For each of its projects, it tracks the project name, the impacted area (clinical operations, finance, revenue cycle, business operations) and the progress metrics. Those metrics include a well-defined baseline state, the targeted goal, and the project-to-date hard and soft outcomes achieved.
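Sketched as a data structure, one row of such an innovation dashboard might look like the following. The class, field names and sample values are assumptions made for illustration, not NewYork-Presbyterian's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIProject:
    """Illustrative only; not NewYork-Presbyterian's actual schema."""
    name: str
    impacted_area: str    # clinical operations, finance, revenue cycle, business operations
    baseline: str         # well-defined baseline state
    target_goal: str      # the targeted goal
    hard_outcomes: list[str] = field(default_factory=list)  # project-to-date, measurable
    soft_outcomes: list[str] = field(default_factory=list)  # project-to-date, qualitative

# A hypothetical row on the dashboard.
claims_bot = AIProject(
    name="Claims-status bot",
    impacted_area="revenue cycle",
    baseline="4 staff-hours per day spent checking claim status manually",
    target_goal="automate 80 percent of routine status checks",
    hard_outcomes=["2.5 staff-hours per day recovered"],
    soft_outcomes=["faster follow-up on denied claims"],
)
```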
At Johns Hopkins Bayview Medical Center, staff currently have AI deployed for early detection of sepsis in their Epic EHR system and are exploring other areas as well.
“What an AI dashboard should track depends on the question that we’re wanting to answer,” said Etter Hoang, business intelligence development manager at Johns Hopkins Bayview Medical Center in Baltimore, Maryland. “In general, in healthcare today, I believe optimization of clinical and operational practices has the greatest potential for AI, where we are looking for patterns and anomalies. That means looking at metrics that surround clinical variation, care pathway effectiveness, risk automation and population mapping.”
AI dashboard must-haves
While there are many things an AI dashboard could track, there are certain things that such a dashboard must track. Experts in the field offer their takes on AI dashboard must-haves.
“A must-have for an AI dashboard would be a clearly defined question to answer,” said Hoang of Johns Hopkins. “With most questions, there is an optimal group – the Goldilocks group – that should be emphasized as AI assists in identifying a prescriptive pathway for answering the defined question.”
Without a clear question, a dashboard would present more information than most people would know what to do with, he said. AI dashboards have to be clear and focused on the prescribed optimization path, he added.
“This means that dashboards need to focus more on the actionable insights rather than reporting KPIs,” he explained. “An AI dashboard can do without data for data’s sake. Data should always be presented in a way that does not confuse the stakeholder who needs to make informed decisions off of the data.”
Most critical to an AI dashboard, as with any dashboard, is a clear statement of the intended user and what decisions the dashboard itself is driving, said Monsen of Atrius Health.
“If the vision is to provide oversight over a set of ‘autonomous’ agents, it may not be so straightforward determining who manages when things appear to be going wrong,” he said. “Clear accountabilities and defined actions such as determining when to ‘pull the cord’ on AI delivering highly suspect outputs would be important non-technical aspects to a successful AI dashboard deployment.”
Atrius Health uses data about its AI deployments to advise stakeholders at a few different phases of the deployment: initial AI performance, workflow incorporation and ongoing maintenance.
“These demand different approaches to data presentation because they seek to answer different questions: ‘Can this AI plausibly inform care processes and drive outcomes?’ ‘Is this AI successfully being incorporated into practice?’ ‘Once incorporated into workflow, is the AI continuing to lead to the outcome improvements we had hoped?’”
Key performance indicators
Performance indicators in an AI dashboard must measure trends and provide correlation in real time, said Hoang of Johns Hopkins.
“It’s not enough that I know our infections in the hospital have grown 10 percent this year,” he explained. “I would need to know that infections have grown 10 percent this year with high correlation to a shift in population presenting with watery stool and a delay in lab culture results being available to the provider, thus a delay in care. The actionable insight would be to deduce with a high level of confidence that I should focus resources on reducing delays in the lab.”
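A rough sketch of the kind of calculation behind that insight, using invented weekly numbers: correlate the infection series against lab turnaround times and surface the pair only when the relationship is strong. The data, threshold and wording are hypothetical, not Johns Hopkins figures.

```python
import numpy as np

# Invented weekly series for illustration; not Johns Hopkins data.
infections_per_week  = np.array([12, 14, 13, 17, 19, 22, 24, 27])
lab_turnaround_hours = np.array([24, 26, 25, 31, 34, 38, 41, 45])

# Pearson correlation between infection counts and lab delays.
r = np.corrcoef(infections_per_week, lab_turnaround_hours)[0, 1]

# A simple surfacing rule a dashboard might apply: present the pair as an
# actionable insight only when the correlation is strong. (Correlation is
# not causation; the 0.8 threshold is an arbitrary illustration.)
if r > 0.8:
    print(f"High correlation (r = {r:.2f}): consider focusing resources on lab delays")
```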
The innovation tracker dashboard at NewYork-Presbyterian identifies current stats, milestones and decision points for each AI project.
“By explicitly identifying the anticipated decision points, it forces thoughtful conversations about the speed and scope of each initiative,” said Daniel Barchi, senior vice president and CIO at NewYork-Presbyterian. “Some of our decision points include how many more bots we want to deploy on a specific business process or how quickly we expand the robotic process automation process to other back-office functions.”
Accountability
With any dashboard, it’s important to have accountability for what the dashboard presents. Add artificial intelligence into the mix, with its ability to reach conclusions of its own, and accountability suddenly becomes even more important.
“We bring IT, finance, legal, innovation and strategy leaders to an hour-long innovation meeting every week where the tool helps frame the conversation and outlines risks for projects that call for additional focus from individual teams,” said Barchi of NewYork-Presbyterian.
This is more of a question regarding a culture of improvement, said Hoang of Johns Hopkins.
“With any culture of improvement strategy, if senior leadership is not engaged, accountability will be difficult to maintain,” he said. “I always believe data is only as powerful as those that react to it. Accountability structures need to include senior leadership, multi-disciplinary teams and project management that ensures identified opportunities are resourced for optimal response.”
As with many technology transformations in the enterprise, an AI dashboard involves shared ownership between the technical experts and the clinical or business experts who own the use-case, said Dr. Monsen of Atrius Health.
“In some respects, a single AI dashboard most probably deserves to be broken down by service area, which may include clinical use-cases, operational use-cases, financial use-cases or similar aligned with where the organizational sponsorship for the deployment lives,” he added.
Tracking process
At NewYork-Presbyterian, Barchi’s innovation tracker dashboard summarizes the state of each AI/robotic process automation initiative and frames the discussion of each in the staff’s weekly innovation meeting.
“In the meeting, we use the tracker to review the status of each initiative and evaluate the value and plan for each,” Barchi said. “Our resources for innovation investment are not unlimited, and the innovation tracker helps us decide how to optimize the outcome of our AI and robotic process automation work by focusing our resources.”
Is the dashboard working?
It’s all well and good to have a robust AI dashboard up and running, but if it’s not working, it’s worthless, and could even lead to bad conclusions. So how does a healthcare CIO know his or her AI dashboard is working?
“Part of the success of any dashboard is how often it is used,” said Hoang of Johns Hopkins. “With each dashboard, there should also be pre-defined measures of success so that we can measure progress in relation to time. If the AI dashboard is being used in real time as a decision tool, the more usage it has the more we can assume users are trusting the insights and data. Ultimately, the true test of whether an AI dashboard is working is if you can see positive improvement in the original question you were trying to answer.”
A healthy AI dashboard will lead to churn in the AI deployments being tracked, said Dr. Monsen of Atrius Health.
“In other words, if more and more AI use-cases are added to the dashboard over time, this may suggest an unbridled excitement for AI without the necessary re-scoping or culling that one might expect as AI deployments reveal themselves as low-value or past their lifecycle,” he explained. “At Atrius Health, where our AI efforts have grown primarily to augment our population health management efforts, the use-cases for AI change only gradually and it is instead the underlying AI that is versioned and improved over time.”
The earliest AI deployment at Atrius, which focuses on identifying patients at risk for hospitalization, has undergone well over a dozen iterations, he added, each one performing better than the last.
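One way such iteration might be monitored, sketched here with invented version labels and AUC scores: evaluate each model version against the same fixed validation sample and check that performance moves in the right direction.

```python
# Invented version labels and AUC scores, for illustration only.
validation_auc = {"v1": 0.71, "v2": 0.74, "v3": 0.78, "v4": 0.81}

def each_version_improves(history: dict[str, float]) -> bool:
    """True if every version outperforms its predecessor on the
    same fixed validation sample."""
    scores = list(history.values())
    return all(later > earlier for earlier, later in zip(scores, scores[1:]))

print(each_version_improves(validation_auc))  # True for the series above
```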
Twitter: @SiwickiHealthIT
Email the writer: [email protected]