Explainable AI for Training with Weakly Annotated Data with Philips


Evan Schwab, Research Scientist, Philips

Abstract:
AI has become a promising predictive engine, approaching near-human accuracy on important medical applications such as the automatic detection of critical findings in medical images. These capabilities can assist radiologists in clinical tasks such as triaging time-sensitive cases and screening for incidental findings, and can help reduce burnout.

Deep learning technologies, however, commonly suffer from a lack of explainability, which is an important requirement for the acceptance of AI in the highly regulated, high-stakes healthcare industry. For example, in addition to accurately classifying an image as containing a critical finding such as pneumothorax, it is important to also localize the pneumothorax in the image, in order to explain to the radiologist the reason for the algorithm’s prediction.

To this end, state-of-the-art supervised deep learning algorithms can accurately localize objects in images by training on large amounts of locally annotated data with pixel-level labels of the object locations. However, unlike natural images, where local annotations of everyday objects can be more easily crowd-sourced, acquiring reliably labeled data at scale in the medical domain is an expensive undertaking: it requires detailed pixel-level annotations for a multitude of findings, agreed upon by multiple trained medical experts. This becomes a nearly impossible requirement and a major barrier to training competitive deep learning algorithms that can scale to the enormous number of critical findings that can be present in medical images.

In this talk, we address these shortcomings with an interpretable AI algorithm that can classify and localize critical findings in medical images without the need for expensive pixel-level annotations, providing a general solution for training with weakly annotated data that has the potential to be adapted to a host of applications in the healthcare domain.
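The talk does not specify the algorithm in this abstract, but the following is a minimal sketch of one common weakly supervised approach in the same spirit: a CNN classifier trained only on image-level labels (finding present or absent), from which a localization heatmap is recovered via class activation maps (CAM). The architecture, class indices, and image size below are illustrative assumptions, not the method presented by the speaker.

```python
# Minimal sketch of weakly supervised localization via class activation maps.
# Assumption: training uses only image-level labels; localization is derived
# by projecting classifier weights onto the final convolutional feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeaklySupervisedLocalizer(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Small convolutional backbone; in practice a pretrained network
        # (e.g. a ResNet truncated before its pooling layer) would be used.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Global average pooling + linear classifier: only image-level
        # labels are needed to train these weights.
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fmaps = self.features(x)                # (B, 128, H', W')
        pooled = fmaps.mean(dim=(2, 3))         # global average pooling
        return self.classifier(pooled)          # image-level logits

    @torch.no_grad()
    def class_activation_map(self, x: torch.Tensor, cls: int) -> torch.Tensor:
        """Heatmap showing where the model 'sees' evidence for class `cls`."""
        fmaps = self.features(x)                # (B, 128, H', W')
        weights = self.classifier.weight[cls]   # (128,)
        cam = torch.einsum("c,bchw->bhw", weights, fmaps)
        cam = F.relu(cam)
        # Upsample to input resolution and normalize to [0, 1] for overlay.
        cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                            mode="bilinear", align_corners=False).squeeze(1)
        cam = (cam - cam.amin()) / (cam.amax() - cam.amin() + 1e-8)
        return cam


if __name__ == "__main__":
    model = WeaklySupervisedLocalizer(num_classes=2)
    chest_xray = torch.randn(1, 1, 224, 224)    # placeholder image tensor
    logits = model(chest_xray)                  # trained with cross-entropy
    heatmap = model.class_activation_map(chest_xray, cls=1)
    print(logits.shape, heatmap.shape)          # (1, 2), (1, 224, 224)
```

In a deployment like the one motivated above, the heatmap would be overlaid on the chest X-ray so the radiologist can verify that the model's prediction is grounded in the suspected pneumothorax region rather than in spurious image features.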