By Meg Bryant, Healthcare Dive | February 7, 2019
Dive Brief:
- The Connected Health Initiative’s Health AI Task Force has released a set of policy principles to guide use of artificial intelligence systems in healthcare.
- The task force recommends AI policy frameworks use risk-based approaches to ensure AI use aligns with accepted safety, efficacy and ethical standards. It also calls for AI systems to be designed with real-world workflow and usability principles in mind.
- The report comes as AI and machine learning are already impacting healthcare by improving diagnosis and treatment and reducing administrative burden.
Dive Insight:
As the use of AI in medicine grows, regulations will be needed to address the technology's unique properties and ensure it fulfills its promise to empower patients and expand access to care. Stakeholders will want to avoid regulations that hold back innovation, while making sure they are stringent enough to protect patients and secure data.
Spending on AI is projected to top $34 billion by 2025, dwarfing the $2.1 billion spent last year, according to market intelligence firm Tractica. Major industry players like Optum and Anthem, as well as would-be disruptors like Google, are all exploring partnerships and projects using AI and machine learning technology.
A lot still has to happen for the potential to be realized, however.
“I believe that the full potential [of predictive modeling] is still on the horizon because physicians and insurers (in particular) still need to be convinced of the benefit,” Bonnie Bain, GlobalData’s EVP of healthcare operations and strategy, told Healthcare Dive in a recent interview.
And despite the importance of AI, healthcare executives say they are “confused by the buzz” surrounding it and are struggling to make sense of what role it will play in the industry.
The Connected Health Initiative report identifies four key aims all AI systems should embrace — improving population health, improving health outcomes and patient satisfaction, increasing value through lower costs and enhancing well-being of healthcare providers.
Advancing this “quadruple aim” requires careful thought about a range of issues including research, quality assurance and oversight, design, access and affordability, ethics, privacy and security, interoperability, workplace demands, bias and education.
These issues should all be fleshed out in a national health AI strategy, along with guidance to help developers, providers, payers and patients realize AI’s potential, the task force says. The report points to the UK’s Initial Code of Conduct for Data Driven Care and Technology as an example of a forward-looking strategy to address common concerns.
“Given the significant role of the government in the regulation, delivery, and payment of healthcare, as well as its role as steward of significant amounts of patient data, a federal healthcare AI strategy … will be vital to achieving the promise that AI offers to patients and the healthcare sector,” according to the report.
To encourage private and nonprofit participation in research on AI tools, policymakers should prioritize funding and incentives such as tax credits and streamlined access to data, the report says. Public support should be conditioned on promoting shared knowledge, access and innovation.
For quality assurance and oversight, policy frameworks should ensure algorithms, datasets and decisions can be audited and clinically validated. Developers should adhere to “rigorous procedures” and document their methods and results, and adverse events should be reported to regulatory or other oversight bodies for review and response.
The report also calls on policymakers to take steps to ensure access and affordability, such as providing incentives to invest in the infrastructure AI systems need to operate on, as well as in personnel and training.
“While AI systems should help transition to value-based delivery models by providing essential population health tools and providing enhanced scalability and patient support, in the interim payment policies must incentivize a pathway for voluntary adoption and integration of AI systems into clinical practice as well as other applications under existing payment models,” according to the report.
The report also encourages development of ethical guidelines to address issues unique to health AI and ensure AI is inclusive across different geographies and patient demographics. Policy frameworks should be scalable and ensure patient privacy, while facilitating the movement of health information.
Policy frameworks must also assure data access through collaboration and interoperability, and should help protect against potential bias in data. To that end, stakeholders should be required to identify, disclose and mitigate data bias when it occurs and to ensure bias doesn’t cause patient harm, according to the paper.
Recommended Reading:
- actonline.org: Policy Principles for Artificial Intelligence in Health
- Healthcare Dive: How AI could shape the health tech landscape in 2019