March 20, 2020

Artificial Intelligence for Medicine

In deep learning, a subset of the branch of artificial intelligence called machine learning, computer models essentially teach themselves to make predictions from large sets of data. The raw power of the technology has improved dramatically in recent years, and it’s now used in everything from medical diagnostics to online shopping to autonomous vehicles.

But deep learning tools also raise worrying questions because they solve problems in ways that humans can’t always follow. If the connection between the data you feed into the model and the output it delivers is inscrutable — hidden inside a so-called black box — how can it be trusted? Among researchers, there’s a growing call to clarify how deep learning tools make decisions — and a debate over what such interpretability might demand and when it’s truly needed. The stakes are particularly high in medicine, where lives are on the line.

Still, the potential benefits are clear. In Mass General’s mammography program, for instance, the current deep learning model helps detect dense breast tissue, a risk factor for cancer. And Lehman and Regina Barzilay, a computer scientist at the Massachusetts Institute of Technology, have created another deep learning model to predict a woman’s risk of developing breast cancer over five years — a crucial component of planning her care. In a study of mammograms from about 40,000 women, the researchers found the deep learning system substantially outperformed the current gold-standard approach on a test set of about 4,000 of these women. Now undergoing further testing, the new model may enter routine clinical practice at the hospital.
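
To make that comparison concrete, here is a minimal sketch of this kind of head-to-head evaluation: scoring a deep learning model against a baseline on a held-out test set using the area under the ROC curve (AUC), a standard metric for risk prediction. Everything here, including the data, the scores, and the names, is a hypothetical placeholder rather than the study’s actual code or results.

```python
# A minimal sketch of a head-to-head evaluation on a held-out test set.
# The labels and scores below are random placeholders standing in for real
# outcomes and model outputs; only the evaluation pattern is the point.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical test set: 1 = developed breast cancer within five years.
y_true = rng.integers(0, 2, size=4000)

# Placeholder risk scores from the two approaches being compared.
baseline_scores = rng.random(4000)    # e.g., a traditional statistical risk score
deep_model_scores = rng.random(4000)  # e.g., the deep learning model's predictions

print("Baseline AUC:  ", roc_auc_score(y_true, baseline_scores))
print("Deep model AUC:", roc_auc_score(y_true, deep_model_scores))
```

With random placeholders both AUCs hover near 0.5; in a real study, the gap between the two numbers is what matters.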

As for the debate about whether humans can really understand deep learning systems, Barzilay sits firmly in the camp that it’s possible. She calls the black box problem “a myth.”

One part of the myth, she says, is that deep learning systems can’t explain their results. But “there are lots of methods in machine learning that allow you to interpret the results,” she says. Another part of the myth, in her opinion, is that doctors have to understand how the system makes its decision in order to use it. But medicine is crammed with advanced technologies that work in ways that clinicians really don’t understand — for instance, the magnetic resonance imaging (MRI) scanners that gather so much diagnostic data to begin with.

Understanding how the models work matters in some applications more than others. Worries about whether Amazon is offering perfect suggestions for your aunt’s birthday gift aren’t the same, for example, as worries about the trustworthiness of the tools your doctor is using to detect tumors or oncoming heart attacks.

AI technology in medicine

Computer scientists are trying many approaches to make deep learning less opaque, at least to their peers. A model of breast cancer risk, for example, can use a heat map approach, letting radiologists zoom into areas of the mammography image that the model pays attention to when it makes a prediction. The model can then extract and highlight snippets of text that describe what it sees.
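
One common way to produce such heat maps is gradient-based saliency: compute how sensitive the model’s output is to each pixel, then render those sensitivities as an overlay. The sketch below, written with PyTorch, uses a tiny stand-in network and a random image; the architecture, input, and shapes are illustrative assumptions, not the Mass General model.

```python
# A gradient-based saliency sketch: the heat map marks pixels whose small
# changes most affect the predicted score. The model and image are toy
# placeholders; any differentiable image model fits the same pattern.
import torch
import torch.nn as nn

model = nn.Sequential(                      # hypothetical stand-in network
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1)
)
model.eval()

image = torch.rand(1, 1, 224, 224, requires_grad=True)  # placeholder mammogram
model(image).sum().backward()               # gradient of the score w.r.t. pixels

saliency = image.grad.abs().squeeze()       # (224, 224) map to render as overlay
print(saliency.shape, float(saliency.max()))
```

Radiologists can then zoom into the highest-intensity regions of the resulting map, as described above.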

Deep learning models can also present images of other regions that are similar to these targeted areas, and human experts can then assess the machine’s choices. Another popular technique applies simpler, more readily understandable math to subsets of the data to approximate how the deep learning model handles the full dataset.
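
That second technique is the idea behind local surrogate models such as LIME: perturb the data around one case, ask the opaque model to score the perturbations, and fit a simple linear model to those scores that a human can read. The sketch below assumes a hypothetical black_box function standing in for the deep model.

```python
# A local-surrogate sketch in the spirit of LIME: fit interpretable math
# (a linear model) to the black-box model's behavior near one input.
import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Hypothetical stand-in for an opaque deep learning model's scores.
    return np.tanh(X[:, 0] * X[:, 1]) + 0.1 * X[:, 2]

x0 = np.array([0.5, -1.0, 2.0])                     # the case to be explained
rng = np.random.default_rng(0)
X_local = x0 + 0.1 * rng.standard_normal((500, 3))  # samples near x0

surrogate = Ridge(alpha=1.0).fit(X_local, black_box(X_local))
print("Local feature weights:", surrogate.coef_)    # human-readable approximation
```

The weights describe the model’s behavior only near that one case, which is exactly the trade-off: a faithful local explanation rather than a global one.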

“We will learn more about what explanations are convincing to humans when these models are integrated into care, and we can see how the human mind can help to control and validate their predictions,” Barzilay says.

In London, a team from Moorfields Eye Hospital and DeepMind, a subsidiary of Google parent company Alphabet, also seeks to deliver explanations in depth. They have used deep learning to triage scans of patients’ eyes. The system takes in three-dimensional eye scans, analyzes them, and picks out cases that need an urgent referral — and it works as well as or better than human experts. The model gives and rates several possible explanations for each diagnosis and shows how it has labeled the parts of the patient’s eye.
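
The output pattern described here (an urgency decision plus several rated candidate explanations) can be sketched as a model with two heads, one scoring referral urgency and one scoring candidate diagnoses. The class lists, architecture, and features below are hypothetical placeholders, not DeepMind’s published system.

```python
# A sketch of a two-headed triage output: one head ranks referral urgency,
# the other rates several candidate diagnoses so clinicians can inspect the
# model's reasoning. Labels, sizes, and features are illustrative only.
import torch
import torch.nn as nn

URGENCY = ["urgent referral", "semi-urgent", "routine", "observation only"]
DIAGNOSES = ["choroidal neovascularization", "macular edema", "drusen", "normal"]

class TriageHead(nn.Module):
    def __init__(self, features: int = 64):
        super().__init__()
        self.urgency = nn.Linear(features, len(URGENCY))
        self.diagnosis = nn.Linear(features, len(DIAGNOSES))

    def forward(self, x):
        # Urgency classes compete (softmax); diagnoses are rated independently.
        return self.urgency(x).softmax(-1), self.diagnosis(x).sigmoid()

head = TriageHead()
feats = torch.rand(1, 64)  # placeholder features from a 3-D scan encoder
urgency_probs, diagnosis_scores = head(feats)
for name, p in zip(URGENCY, urgency_probs[0]):
    print(f"{name}: {p.item():.2f}")
```

Presenting both outputs side by side is what lets clinicians check the machine’s labeling of the eye against its referral decision.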