
Quantori blog

April 30, 2024

Explainable AI for Medical Image Analysis, a Case Study

Alexander Proutski
Director, Data Science
Quantori
Despite recent progress in the development of Machine Learning models, their application in clinical practice remains limited. Several barriers to entry exist, with the lack of transparency in a model's decision-making process being critical. This article reviews a recent study where explainability was a core component of Machine Learning methodology applied to medical image analysis.

Pulmonary Edema: Why Severity Assessment is Critical

Pulmonary edema, an excessive accumulation of fluid in the lungs, is a leading cause of hospitalization in patients with congestive heart failure (CHF). The optimal course of treatment depends heavily on the severity of the condition. Furthermore, the potential symptom overlap with other disorders, such as pneumonia, requires a thorough and timely diagnosis to prevent further worsening of co-existing conditions.

Challenges within Radiographic Imaging

Radiographic imaging plays a crucial role in this assessment, where radiologists use categorical grading scales to rate the degree of severity. However, the accurate quantification of radiographic images is both time-intensive and highly variable among radiological experts, meaning optimal patient care is often delayed until a consensus on the treatment plan can be reached.


Machine Learning Methods to Overcome Such Challenges

The application of computerized methods to streamline the assessment of radiographic imaging has been a topic of intense research for quite some time, and recent advances in ML methods provide a novel way of tackling this assessment problem. In particular, isolating and highlighting the regions of an image that an ML model considers important in its decision-making could significantly accelerate the process of reaching a consensus amongst experts.

Limitations of Machine Learning Methods

However, the quest to develop AI-powered medical assistants is still in its early stages. Most of the research into the utilization of ML for radiographic image assessment has focused on developing models with superior performance metrics (e.g., accuracy), while little attention has been paid to the models’ decision-making process. This lack of clarity, coupled with the outdated technology often utilized within clinical practice, has discouraged clinicians from adopting ML in real-life settings, limiting the applicability of ML primarily to research.

Despite this, several attempts have been made to develop explainable ML models for radiographic image assessment, with the development of explanation techniques for ML models also gaining traction.

Towards an Explainable AI Solution

In our recent pilot study, Quantori introduces a two-stage workflow that detects radiographic features associated with pulmonary edema, namely cephalization, Kerley lines, pleural effusion, bat wings, and infiltrates.

The initial stage isolates the lung area within a radiographic image, ensuring that all subsequent decision-making focuses solely on the regions of interest. The second stage detects the edema-related features themselves. However, each pulmonary edema feature is highly distinct from the rest, so a single global solution would require a complex and bloated model exceeding realistic limits for use in real-life clinical settings.
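The two-stage idea can be illustrated with a minimal sketch. The function names and the thresholding "segmentation" below are stand-ins of our own invention, not the study's actual models; in practice each stage would be a trained network, but the structural point is the same: the second stage only ever scores pixels inside the lung mask produced by the first.

```python
import numpy as np

def segment_lungs(image: np.ndarray) -> np.ndarray:
    """Stage 1 (placeholder): boolean mask of the lung fields.
    A real pipeline would use a trained segmentation model;
    a simple intensity threshold stands in for it here."""
    return image > image.mean()

def detect_feature(image: np.ndarray, lung_mask: np.ndarray) -> np.ndarray:
    """Stage 2 (placeholder): score an edema-related feature,
    but only inside the lung mask, so the decision cannot
    depend on regions outside the area of interest."""
    scores = np.zeros_like(image, dtype=float)
    scores[lung_mask] = image[lung_mask] / image.max()
    return scores

rng = np.random.default_rng(0)
chest_xray = rng.random((8, 8))   # toy stand-in for a radiograph
mask = segment_lungs(chest_xray)
scores = detect_feature(chest_xray, mask)

# Everything outside the segmented lung area receives a score of exactly zero.
assert np.all(scores[~mask] == 0.0)
```

Restricting the second stage to the segmented region is also what makes the output explainable: any highlighted pixel is, by construction, inside anatomy a radiologist would examine.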

Radiological features of pulmonary edema: cephalization (cyan polylines), Kerley lines (green lines), pleural effusions (purple masks), infiltrates (blue masks), and bat wings (yellow masks)

To overcome such limitations, Quantori developed a modular approach in which each edema feature is detected by a distinct model. Multiple standard ML models were tested for each feature, with special attention given to model size (measured in terms of the number of parameters), and each architecture's pitfalls and performance were assessed in depth. This ensures that trade-offs between model performance and feasibility of deployment in a real-life setting can be weighed explicitly.
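One payoff of the modular design is that model selection becomes a per-feature decision under a deployment budget. The sketch below is purely illustrative, not the study's methodology: the candidate names, parameter counts, and scores are hypothetical, and it simply shows how one might pick, for a single feature, the best-scoring model that fits a parameter budget.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    name: str       # architecture tried for this edema feature
    n_params: int   # model size in parameters
    score: float    # validation metric (e.g. Dice or F1)

def pick_model(candidates: List[Candidate], max_params: int) -> Optional[Candidate]:
    """Keep the best-scoring candidate that fits the parameter budget."""
    feasible = [c for c in candidates if c.n_params <= max_params]
    return max(feasible, key=lambda c: c.score) if feasible else None

# Hypothetical candidates for one feature; all numbers are illustrative.
candidates = [
    Candidate("unet-large", 31_000_000, 0.87),
    Candidate("unet-small", 7_800_000, 0.84),
    Candidate("mobile-seg", 2_100_000, 0.79),
]
best = pick_model(candidates, max_params=10_000_000)
```

With a 10M-parameter budget, the hypothetical "unet-small" wins despite "unet-large" scoring higher; tightening or relaxing the budget per clinical site is exactly the kind of trade-off the modular approach makes tractable.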

Our study represents a first step in the complex endeavor of developing a smart medical assistant. The modular nature of the work, as well as an in-depth assessment of different standard ML model performances and subsequent comparison against model size, ensures that an optimal solution can be sought, dependent on the restrictions faced within a real-life clinical setting.

Artificial Intelligence
Image Analysis
Quantori Solution
