By Kevin Truong, MedCity News | March 29, 2019
Based on the literature review conducted by the researchers, only 14 percent of top ranked behavioral health apps described design or development that was based on real-world evidence.
There’s been an explosion in the number and variety of digital apps purporting to address behavioral health issues, but a recent study published in npj Digital Medicine casts doubt on whether they are backed by legitimate scientific research.
The country’s growing mental health crisis has been compounded by a shortage of behavioral health specialists, and entrepreneurs and app makers have pitched their technology as a way to fill the gap. These consumer-facing technologies, however, are largely unregulated by the FDA.
Researchers searched the Google Play and iTunes stores for applications related to mental health issues such as depression, self-harm, substance use, anxiety, and schizophrenia, and scanned their descriptions to identify claims of effectiveness. They then tested those claims against the scientific literature.
Of the 73 applications (chosen for their high app store rank) examined in the study, 64 percent claimed they could diagnose a mental health condition, improve symptoms, or aid in the management of the user’s condition.
Forty-four percent of these apps used scientific language to support those effectiveness claims. Only 53 percent of those could actually be linked to evidence in the scientific literature, while one-third described techniques not validated by scientific research.
“Scientific language was the most frequently invoked form of support for use of mental health apps; however, high-quality evidence is not commonly described,” the researchers wrote.
“Improved knowledge translation strategies may improve the adoption of other strategies, such as certification or lived experience co-design.”
This holds particular relevance as the FDA looks to shift its approach to regulating health applications through its Pre-Cert Pilot Program, which leans more toward approving developers rather than individual products. A key part of that program is monitoring the real-world performance of digital health products.
A preliminary investigation by the researchers found that for apps targeting depression, 38 percent of app store descriptions included effectiveness claims, while a mere 2.6 percent actually provided evidence backing those claims.
Among the larger group of 73 apps, only two provided direct evidence of scientific research associated with app use. One app description also cited a validation paper for a self-reported questionnaire.
“While these cases represent the best evidence provided by apps in this study, they still fall short of high-quality evidence obtained, for example, from randomised controlled trials,” researchers wrote.