By Fred Donovan, HIT Infrastructure | June 7, 2019
The American Medical Informatics Association (AMIA) is calling on the Food and Drug Administration (FDA) to adopt stronger rules on medical devices that use artificial intelligence algorithms.
In a letter to the FDA about its proposed regulatory framework, AMIA commended the agency for beginning a conversation on medical devices and artificial intelligence.
In the framework, the FDA proposes an approach that would enable evaluation and monitoring of an artificial intelligence-based device from premarket development through post-market performance, providing assurance of safety and effectiveness.
The approach would allow the FDA’s regulatory oversight to accommodate the iterative nature of artificial intelligence products while ensuring that the agency’s standards for safety and effectiveness are maintained.
“Our approach will focus on the continually-evolving nature of these promising technologies. We plan to apply our current authorities in new ways to keep up with the rapid pace of innovation and ensure the safety of these devices,” explained then FDA Commissioner Scott Gottlieb, MD.
However, AMIA argued that the FDA did not go far enough in its approach and proposed the following modifications to the framework:
- Stronger emphasis on, and acknowledgement of, how differently continuously learning algorithms must be treated from “locked” algorithms
- Examination of how new data inputs will impact the algorithm’s outputs
- Warning about how cybersecurity risks, such as hacking or data manipulation, could influence the algorithm’s output
- Requiring manufacturers to use evolving knowledge about algorithm-driven bias to ensure that algorithms used in medical devices do not promote such bias
“A growing body of knowledge indicates that even in the absence of intended discrimination, bias against persons of particular ethnicities, genders, ages, socioeconomic backgrounds, physical and cognitive abilities, and other characteristics may occur. We recommend that FDA develop guidance about how and how often developers of SaMD [software as a medical device]-based products test their products for such biases and adjust algorithms to eliminate identified biases,” the letter noted.