By Paul Cerrito and John Halamka, Healthcare Analytics News | November 26, 2018
It may be hard to imagine a time when prescribing a mobile health (mHealth) app for patients is as common as writing a script for an antibiotic or statin, but if the quality of mHealth apps continues to improve, that future might not be too far away.
But for now, many physicians and nurses have serious doubts about the trustworthiness of mHealth apps. They question whether the apps are scientifically accurate, whether they are more effective than printed patient education materials and whether patients will use mHealth apps once the novelty wears off. On the other hand, clinicians are more willing to use medical apps that directly meet their professional needs, such as digital drug reference guides and clinical decision support systems.
Physicians and patients seem to view mHealth apps quite differently. One survey suggests that only one in three physicians recommends wellness apps to their patients — but 58 percent of consumers who use smartphones have downloaded at least one health app, and 41 percent have downloaded more than five.
Why the disparity? Might it be a function of doctors’ training or exposure to medical literature? A case in point is the Owlet Smart Sock, which new parents can put on a newborn to help monitor a child’s heart rate and oxygen saturation. The device sends readings to an mHealth app that lets parents track the child’s vital signs. Most pediatricians are quick to point out that this type of monitoring is completely unnecessary for a healthy infant. And the American Academy of Pediatrics’ policy statement on tracking this type of data for a healthy infant states, “Do not use home cardiorespiratory monitors as a strategy to reduce the risk of SIDS.”
But the reluctance of health professionals to recommend mHealth apps is also prompted by their skepticism about the effects of mHealth apps on outcomes and their concerns that the apps may be unsafe or inconsistent with evidence-based clinical guidelines. Such skepticism has prompted some medical experts to refer to mHealth apps as “digital snake oil.” That extreme view isn’t justified, at least for many apps that have been tested in controlled clinical studies.
Evaluating Patient-Facing mHealth Apps
A review of mobile apps addressing the needs of patients with clinical depression, for instance, found that 12 of the apps delivered cognitive behavioral therapy (CBT) or behavioral activation (BA), both of which are supported by scientific evidence. Anna Huguet, with the Center for Research in Family Health in Halifax, Nova Scotia, and her colleagues found that two apps, eCBT Mood and Depression CBT Self-Help Guide, scored highest in adhering to CBT or BA principles. A chatbot called WoeBot, which uses a text-based conversational agent to work with students in need of mental health services, has also proven effective in a separate randomized clinical trial. Kathleen Fitzpatrick, Ph.D., and associates at the Stanford School of Medicine divided college students into two groups, one with access to the chatbot and the other serving as a control. After two to three weeks, students using WoeBot reported significant improvement relative to the control group, which had access only to an e-book on depression in college students.
The IQVIA Institute for Human Data Science, a research organization that serves the healthcare IT industry, has also analyzed studies that support mHealth apps. It used an evidence rating system that included multiple meta-analyses, followed by a single meta-analysis, multiple randomized controlled trials, a single RCT and observational studies, in descending order of strength. Among the mHealth apps it considered potentially disappointing were those designed to address exercise, pain management, dermatology, autism, schizophrenia and multiple sclerosis. But IQVIA considered mHealth apps for weight management, asthma, COPD, congestive heart failure, stroke, arthritis, cancer, PTSD, insomnia, smoking cessation, stress management, cardiac rehabilitation and hypertension to be candidates for clinical adoption. The analysis also concluded that there were apps for diabetes, depression and anxiety that would be worth including in clinical guidelines. Finally, IQVIA published a list of top-rated mobile health apps, which included mySugr for diabetes management, SmartBP for hypertension, Headspace for stress management and KWIT for smoking cessation.
Of course, efficacy should not be the only metric used when considering an mHealth app. A recent study in JMIR mHealth and uHealth also stresses the value of usability ratings and user testing: “Typically, usability is measured across dimensions such as user ratings of app flexibility, operability, understandability, learnability, efficiency, satisfaction, attractiveness, consistency and error rates.” Many of these metrics can help developers create a new mHealth app that appeals to patients. So can relevant standards spelled out by the International Organization for Standardization.
When choosing an mHealth app to recommend to patients, clinicians need to be assured that it is not only effective and easy to use but also safe. That is not always the case. For example, if an app is designed to collect blood glucose levels, does it alert the patient if the readings are dangerously low or high? Similarly, if the app is connected to a wireless blood pressure cuff, will it tell users to see their provider when it registers very high or low readings? Karandeep Singh, M.D., of the University of Michigan Medical School, and his associates analyzed 121 mHealth apps that let patients record health-related data and found that only 23 percent warned users when the data they entered deviated widely from normal values. Among the danger signs these apps ignored were indications of a user’s suicidal mood or ideation.
Of equal concern was that in many disease categories, an mHealth app’s reactions to indications of a health danger were inadequate. For instance, researchers found that the percentage of danger alerts considered appropriate fell below 50 percent for apps used to manage cancer, dementia, hypertension, coronary heart disease, congestive heart failure, liver disease, chronic kidney disease and diabetes. Singh and his colleagues wrote, “While the app industry needs to do further work to meet basic safety and privacy standards, a subset of apps already conform to these standards. Policymakers need to consider how to encourage app developers to build apps that respond appropriately to dangerous information entered by users.”
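The kind of range check that Singh and his colleagues found missing can be illustrated with a short sketch. Everything here is hypothetical: the function name and the glucose thresholds are placeholders chosen for illustration, not the researchers’ criteria or clinical guidance.

```python
# A minimal, hypothetical sketch of an out-of-range check for user-entered
# blood glucose readings, the sort of safety response Singh's analysis
# found lacking in most apps. Thresholds are illustrative placeholders.

LOW_MG_DL = 54.0    # assumed "dangerously low" cutoff (placeholder value)
HIGH_MG_DL = 250.0  # assumed "dangerously high" cutoff (placeholder value)

def glucose_alert(mg_dl):
    """Return a warning string for an out-of-range reading, else None."""
    if mg_dl < LOW_MG_DL:
        return "Reading is dangerously low. Seek medical attention."
    if mg_dl > HIGH_MG_DL:
        return "Reading is dangerously high. Contact your provider."
    return None  # within the assumed safe band, so no alert is raised

print(glucose_alert(48))   # a low reading triggers the warning
print(glucose_alert(110))  # a normal reading returns None
```

The point of the sketch is not the specific cutoffs but the behavior: an app that accepts a health-related value should do something clinically sensible when that value signals danger, rather than silently recording it.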
Finally, as clinicians evaluate mHealth apps, they may want to consult respected professional organizations. For instance, the AMA has partnered with the American Heart Association and the Healthcare Information and Management Systems Society (HIMSS) to create an alliance — Xcertia — that sets guidelines to evaluate the quality, safety and effectiveness of mHealth apps.
The American Psychiatric Association has also taken a position on mHealth apps that can help clinicians make more informed choices. Like the AMA and its partners, APA is not rating individual apps but offering providers guidelines to help them make their own decisions. The association spelled out a five-step evaluation process and recommended clinicians collect background information, take a close look at the app’s security safeguards, review the evidence, evaluate ease of use and analyze the interoperability of the product.
Enthusiasm for Doctor-Facing mHealth Apps
The list of mobile apps that physicians and nurses find useful in their professional lives is long and continues to grow. In a recent article in Healthcare Analytics News™, we discussed mobile versions of UpToDate and ClinicalKey, two clinical decision support tools from Wolters Kluwer and Elsevier, respectively. AirStrip Technologies has also made a strong impression on clinicians with its remote patient monitoring application, which gives users near real-time access to bedside ECG readings, fetal monitoring data, multiple electronic health record (EHR) systems and more. Users can tap into these resources from a hospital or community-based care setting, accessing medical devices, secure messaging systems, lab results and radiographic images as well.
Similarly, Isabel Healthcare has gained favor among physicians who realize the need for computer-based diagnostic and treatment aids. It includes a patient-facing symptom checker and several tools to help clinicians interpret diagnostic clues. Isabel Pro DDX Generator uses natural language processing to access a database of disease presentations. The clinician inputs lab readings, vital signs, co-morbidities, age, gender and other parameters in free text or from an EHR system, and the company’s algorithms use the data to suggest possible diagnoses.
Is the FDA “Seal of Approval” Enough?
In recent years, the U.S. Food and Drug Administration (FDA) has taken a stand on mHealth apps, publishing regulations to guide developers as they design them and to help clinicians as they evaluate their worth. But these regulations don’t guarantee effectiveness or safety. The agency has made clear that it will not regulate all mHealth apps. Instead, it is concentrating on “most mobile apps that are intended to treat, diagnose, cure, mitigate or prevent disease or other conditions as medical devices under federal statute.” The FDA refers to these as “mobile medical apps.” On the other hand, apps that make wellness claims or apps that are designed to receive, transmit, store and provide information displays are receiving less attention from the agency. In fact, the latest FDA guidelines state that these apps will not require premarket review.
The FDA provides a list of mobile medical apps that it has cleared, including diagnostic spirometer GoSpiro, continuous glucose monitoring system Dexcom and drug dose calculator MyDose Coach. Apps that the FDA does not consider medical devices and therefore do not need premarket approval include medical dictionaries, first aid encyclopedias and surgical training videos. A third category, for which the agency says it will “exercise enforcement discretion,” includes patient portal apps and those “intended for individuals to log, record, track, evaluate or make decisions or behavioral suggestions related to developing or maintaining general fitness, health or wellness.”
That more lenient approach to mHealth app regulation doesn’t mean these software platforms are free from problems and scrutiny. For example, in 2017, the New York Attorney General accused vendors of three mHealth apps of misleading consumers and not adequately protecting their data. Adidas’ Runtastic, MIT Media Lab spinoff Cardiio and Matis’ My Baby’s Beat claimed to monitor heartbeat using a smartphone camera or microphone and algorithms. But all three faced legal action, ultimately paying a combined $30,000 in penalties and agreeing to change their advertising and update their data privacy policies.
About the Authors
Paul Cerrato has more than 30 years of experience working in healthcare as a clinician, educator, and medical editor. He has written extensively on clinical medicine, electronic health records, protected health information security, practice management, and clinical decision support. He has served as editor of Information Week Healthcare, executive editor of Contemporary OB/GYN, senior editor of RN Magazine, and contributing writer/editor for the Yale University School of Medicine, the American Academy of Pediatrics, Information Week, Medscape, Healthcare Finance News, IMedicalapps.com, and Medpage Today. HIMSS has listed Mr. Cerrato as one of the most influential columnists in healthcare IT.
John Halamka, M.D., M.S., is the international healthcare innovation professor at Harvard Medical School, chief information officer of the Beth Israel Deaconess System, and a practicing emergency physician. He strives to improve healthcare quality, safety, and efficiency for patients, providers, and payers throughout the world using information technology. He has written five books, several hundred articles, and the popular Geekdoctor blog.