By Jennifer Bresnick, Health IT Analytics | April 9, 2019

Deep learning is moving out of the realm of the theoretical and is starting to help physicians treat everyday conditions that affect millions of patients.

No longer the exclusive province of researchers and academics, artificial intelligence is quickly filtering into the everyday clinical setting.

From supporting radiologists to amplifying the patient’s voice in her own care, AI is already having a meaningful impact on the quality and accuracy of care.

Connecting human intelligence and clinical expertise with the unparalleled data processing power of deep learning algorithms and advanced neural networks is opening up a new frontier for precise, personalized diagnostics and treatment – but not just for rare cancers or one-in-a-million genetic conditions.

At the 2019 World Medical Innovation Forum hosted by Partners HealthCare, the focus was very much on the mundane.

In a series of rapid-fire presentations, Harvard faculty and practicing clinicians from across the Partners network showcased how AI can help everyday doctors get better at treating everyday patients with common conditions such as glaucoma, breast cancer, stroke, and neurodegenerative disease.

Using AI to expand the health system’s capacity to conduct effective screenings, reduce pain points in the care process, and augment the clinical decision-making process can help the industry save billions of dollars each year – and more importantly, potentially save an untold number of lives.

Healthcare providers are still too reactive when it comes to diagnosing patients with high-impact conditions, stressed presenter after presenter.

“Our current tools to predict future risks are simply not accurate,” said Constance Lehman, MD, PhD, chief of the breast imaging division at Massachusetts General Hospital (MGH) and a professor of radiology at Harvard Medical School.

“For breast cancer, the emphasis is still on late-stage diagnosis, because we are not screening as comprehensively or as well as we should be.”

Even patients who do get mammograms at recommended intervals are not receiving uniformly high-quality care from radiologists, she said.

“Human interpretation of images is highly subjective.  We don’t have enough people to read these images, and we don’t have enough people who can do it to the highest standards,” she stated.

Forty percent of certified breast imaging radiologists perform outside of recommended ranges for acceptable specificity, her previously published research has shown. Even agreement around one of the most important fundamental predictors for breast cancer, the density of the tissue, can vary wildly.

Some radiologists designate less than 10 percent of breast tissue as dense, said Lehman, while others will label more than 80 percent of mammograms in the same way.

Deep learning can help providers do more with less – and do it more accurately.

“Deep learning can use full-resolution mammogram images to accurately predict the likelihood of a woman developing breast cancer.  Importantly, it is accurate across all races.  Existing models are worse than chance at predicting breast cancer in African American women.  We need something better than that.”

An algorithm trained on more than 70,000 images consistently outperformed a commonly used risk model, even when the tool lacked additional data on the patient and had access only to the image itself.
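
To make the approach concrete, the sketch below shows (in PyTorch) how an image-only risk model along these lines might be set up: a standard convolutional backbone adapted to grayscale mammograms and trained to output a single risk logit. The architecture, hyperparameters, and training loop here are illustrative assumptions, not the MGH team’s actual system.

```python
# A minimal, illustrative sketch - not the MGH team's actual model.
import torch
import torch.nn as nn
from torchvision import models

# Standard CNN backbone adapted for single-channel (grayscale) mammograms.
model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 1)  # one logit: future-cancer risk

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (batch, 1, H, W) mammogram tensors; labels: 0/1 future-diagnosis flags."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the shapes involved.
loss = train_step(torch.randn(4, 1, 224, 224), torch.randint(0, 2, (4,)))
```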

With more than two million new breast cancer diagnoses worldwide each year, improving the health system’s ability to identify individuals at risk and provide early treatment to those with cancer could have a drastic impact on outcomes for hundreds of thousands of women.

Providers treating another common condition, glaucoma, can also benefit from some extra help to get ahead of a challenging situation, said Nazlee Zebardast, MD, an instructor of ophthalmology at Harvard Medical School.

Early detection and treatment are also key for this degenerative vision disease, often called the “silent thief of sight,” she said.

Glaucoma can cause blindness if caught too late, and contributes to nearly $4 billion in direct medical spending every year.

“Because of its slow progression, it often doesn’t come to the attention of clinicians until it’s very advanced,” Zebardast explained.  “Nearly half of glaucoma cases are undiagnosed.  Unfortunately, that’s partly because there is no reliable risk group except for the elderly.  Ideally, we would screen everyone over the age of 40, but that would require a lot of experts and would be very expensive.”

Artificial intelligence can start to fill in those gaps, she asserted.  “It’s clear that we need an efficient and effective screening tool to make testing more accessible.”

Deep learning is already being used in some settings to screen for glaucoma, but it isn’t as reliable or accurate as it could be, she said.  These algorithms rely on subjective labeling of eye images by experts, and just as with the radiologists who read mammograms, opinions and skills among ophthalmologists can vary significantly.

“No algorithm can do better than what it uses to learn,” said Zebardast. “We need to use objective data in addition to clinical opinion to come up with a better reference standard.”

Zebardast is working to train deep learning algorithms that use the data in eye images to identify standardized reference points for early-stage glaucoma and generate risk scores for patients.

“Ultimately, we aim to use disease-predicting image features identified in this study to construct multi-modal models to improve our detection and prediction rates for glaucoma,” she said.
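
A hypothetical sketch of what such a multi-modal model could look like: image-derived features (for instance, embeddings from a fundus-image network) concatenated with clinical variables and fed to a simple risk classifier. All feature names, dimensions, and data below are invented for illustration.

```python
# Toy multi-modal glaucoma risk model: image features + clinical variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
image_features = rng.normal(size=(n, 64))  # e.g., embeddings from a fundus-image CNN
clinical = rng.normal(size=(n, 3))         # e.g., age, intraocular pressure, family history
X = np.hstack([image_features, clinical])  # simple "multi-modal" fusion by concatenation
y = rng.integers(0, 2, size=n)             # 1 = develops glaucoma (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
risk_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk_scores = risk_model.predict_proba(X_test)[:, 1]  # per-patient risk score
```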

Stroke is another area where AI can shine, added Synho Do, PhD, director of the Laboratory of Medical Imaging and Computation at MGH.  With 5.8 million deaths globally each year, including 140,000 in the United States, the urgent need for speedy, accurate diagnostic technology cannot be overstated.

“Stroke care is extremely time-sensitive,” said Do.  “Not everyone lives right next to a good hospital, and not every hospital has a stroke expert on staff.”

One of the most important parts of stroke care is distinguishing between the two types of stroke, ischemic and hemorrhagic, and locating any bleed in the brain.

Artificial intelligence can support radiologists as they identify the extent of the damage.  But an algorithmic companion is only effective if the reasoning behind the suggestion is plain to see, he added.

“It is very important to use explainable AI in diagnostics,” Do said.  “We need to stay away from black box tools.  If a doctor said they could diagnose you, but couldn’t explain why they offered the diagnosis, how could you trust what they were saying?  It is the same with artificial intelligence.”

Do and his team have developed an algorithm that offers visual insights into why the deep learning tool identified a stroke as ischemic or hemorrhagic.

The output summary includes a color-coded “attention map” overlaid on slices of the radiology image, he explained.  These images show where the AI was “looking” when making its determination.
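
One well-known way to produce this kind of attention map is Grad-CAM, which weights a network’s final convolutional feature maps by their gradients with respect to the predicted score. The sketch below is a generic illustration of that idea, not the MGH team’s implementation.

```python
# Generic Grad-CAM-style attention map (illustrative, not the MGH system).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
store = {}

def fwd_hook(module, inputs, output):
    store["feat"] = output                                # last conv-block feature maps
    output.register_hook(lambda g: store.update(grad=g))  # their gradients on backward

model.layer4.register_forward_hook(fwd_hook)

image = torch.randn(1, 3, 224, 224)  # stand-in for one head CT slice
score = model(image).max()           # score of the predicted class
score.backward()

# Weight each feature map by its average gradient, sum, and upsample to image size.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # 0-1 heatmap to overlay
```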

“The explainability of an algorithm is essential not only for understanding the system’s predictions, but also for continued improvement and optimization,” he said.

With the deep learning tool in hand, all radiologists can achieve expert-level performance on stroke diagnosis, the team found.  In places where resources are scarce, democratizing access to clinical decision support could help patients achieve significantly better outcomes.

Imaging analytics can be similarly effective for improving the safety and accuracy of liver biopsies, said Tina Kapur, PhD, executive director of image-guided therapy at Brigham and Women’s Hospital.

Nearly a million patients worldwide undergo an image-guided liver biopsy every year, Kapur said, and the number of biopsies is expected to increase at an annual rate of 4 percent.

Improperly guided needles can cause significant bleeding and increase the risk of infection, yet current ultrasound technology offers incredibly poor visual guidance for clinicians – and more than 90 percent of biopsies are currently conducted freehand.

Deep learning neural networks can do a much better job than humans at identifying man-made objects, such as needle tips, in real-time ultrasound images.  Software based on deep learning algorithms can show clinicians when the tip of the needle enters the plane of the suspect liver structure, giving providers much-needed insight into how to maneuver to capture a tissue sample.
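
As a rough illustration, needle detection of this kind could be framed as per-pixel segmentation: a small convolutional network maps each ultrasound frame to a needle-probability map that can be overlaid on the live image. The architecture and sizes below are assumptions for the sketch, not Kapur’s actual software.

```python
# Toy per-pixel needle segmenter for ultrasound frames (illustrative only).
import torch
import torch.nn as nn

class NeedleSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),             # one logit per pixel
        )

    def forward(self, frame):                # frame: (batch, 1, H, W) grayscale
        return torch.sigmoid(self.net(frame))  # per-pixel needle probability

model = NeedleSegmenter().eval()
frame = torch.rand(1, 1, 256, 256)           # stand-in for one ultrasound frame
with torch.no_grad():
    needle_map = model(frame)                # overlay wherever probability is high
tip_idx = needle_map.flatten().argmax()      # crude tip estimate: brightest pixel
tip_y, tip_x = divmod(tip_idx.item(), 256)
```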

Kapur’s team envisions integrating the deep learning software into the existing workflow of biopsy technicians, keeping the procedure low-cost and avoiding additional complexity.

“On ultrasound machines, clinicians can already press a button to turn on Doppler information,” she explained.  “The overlay stays there for as long as the button is pressed.  We want to add a button that says ‘needle,’ which shows the outline and trajectory of the needle for as long as the physician needs it.  If they don’t want it, they can turn it off.”

“You don’t need extra hardware, and you don’t need to change anything about the workflow. You can imagine how quickly we will create better accuracy and safety while reducing the bleeding and complications that physicians worry about the most when doing biopsies.”

Artificial intelligence can also help clinicians learn more about themselves, said Chris Sidey-Gibbons, PhD, co-director of the PROVE Center at Brigham and Women’s.  By using natural language processing to analyze patient-reported outcomes data, providers can glean more insight into the impact of their care delivery strategies.

“There is a great deal to be achieved from the relatively simple intervention of the questionnaire,” Sidey-Gibbons said.  “But we have been using the same format since the 1960s.  We used to have a piece of paper that we handed to patients.  Now we have an electronic form that just mimics that piece of paper – not much of an advancement.”

“Machine learning can improve the response rate to questionnaires and improve the usefulness of the data through adaptive testing: something that is widely used for academic assessments, but isn’t being used regularly in healthcare.”

Adaptive testing can imitate the best practices of an intelligent doctor, he said, by asking only the questions that are most relevant to the patient’s unique and specific situation.
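
Computerized adaptive testing is typically built on item response theory: after each answer, the system re-estimates the patient’s trait level and selects the remaining question expected to be most informative. The sketch below implements that loop under a two-parameter logistic (2PL) model; the item parameters are invented for illustration.

```python
# Minimal computerized adaptive testing loop under a 2PL IRT model.
import numpy as np

items = {                                   # question: (discrimination a, difficulty b)
    "walk_one_block": (1.5, -1.0),
    "climb_stairs":   (1.2,  0.0),
    "run_short_dist": (1.8,  1.5),
}

def p_yes(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def information(theta, a, b):
    p = p_yes(theta, a, b)
    return a**2 * p * (1 - p)               # Fisher information of one item

def next_item(theta, asked):
    candidates = {q: information(theta, a, b)
                  for q, (a, b) in items.items() if q not in asked}
    return max(candidates, key=candidates.get)

def update_theta(responses):
    # Crude maximum-likelihood update over a grid of trait values.
    grid = np.linspace(-4, 4, 161)
    def loglik(t):
        return sum(np.log(p_yes(t, *items[q]) if r else 1 - p_yes(t, *items[q]))
                   for q, r in responses.items())
    return grid[np.argmax([loglik(t) for t in grid])]

theta, responses = 0.0, {}
for _ in range(2):                           # ask two adaptively chosen questions
    q = next_item(theta, responses)
    responses[q] = True                      # pretend the patient answered "yes"
    theta = update_theta(responses)
```

In practice, the payoff is a comparable score from far fewer questions, which is where the time savings he describes come from.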

Collecting more accurate and relevant data can make patient-reported data more actionable, something that has been a challenge for providers faced with combing through huge volumes of free-text responses or nuggets of information hidden in rambling clinical notes.

Shorter, more tailored questionnaires could save years’ worth of time for patients at scale, he explained, while natural language processing can extract meaningful information from numerous free-text sources.

“Patients can respond to provider questions in a natural, freeform manner, but we can still structure the information afterwards and use it for performance improvement,” he said.
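
At its simplest, structuring a free-text response afterwards can mean mapping the text onto a fixed set of concepts. The toy sketch below does this with keyword patterns; production NLP systems would use trained models, and the vocabulary here is invented for illustration.

```python
# Toy free-text structuring: keyword patterns mapped to structured fields.
import re

concept_patterns = {
    "pain":     r"\b(pain|ache|sore\w*)\b",
    "fatigue":  r"\b(tired|fatigue|exhaust\w*)\b",
    "mobility": r"\b(walk|stairs|mobility)\b",
}

def structure_response(text):
    text = text.lower()
    return {concept: bool(re.search(pattern, text))
            for concept, pattern in concept_patterns.items()}

print(structure_response("Still some soreness, and I get tired climbing stairs."))
# -> {'pain': True, 'fatigue': True, 'mobility': True}
```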

“The limitations of patient-reported data tools can be overcome using computerized adaptive testing and machine learning.”

Ultimately, AI can help equip providers with the tools they need to provide care that is significantly safer, more consistent, more accurate, and more patient-friendly.  And these strategies are moving into the real world of clinical care at a breakneck pace.

Many of the presenters have validated and tested their algorithms extensively, and are only steps away from bringing their ideas to market.  As more and more innovative models start to find their commercial footing, millions of patients will benefit from low-cost, high-quality clinical decision support tools.

These advances are happening right now and are expected to have an unprecedented impact on care, said Vesela Kovacheva, MD, PhD, an anesthesiologist at Brigham and Women’s working on a machine learning tool to improve the delivery of sedatives during cesarean sections.

“Having access to real-time machine learning in the operating room would be an amazing superpower for me,” she said.  “I am fortunate to have all the resources of Partners HealthCare behind me, and it would still be a superpower.”

“Imagine what a difference AI will make in rural hospitals or developing countries, where one anesthesiologist could be covering three operating rooms at the same time.  If AI could be game-changing for me, just think about what it could do for patients receiving care in those places.  We can hardly quantify how much that will change the way we practice medicine.”