Explainable AI: Enhancing Trust and Transparency in Healthcare

July 20, 2024

Introduction

One of the main challenges of AI and automation in healthcare is ensuring that experts understand and trust the system's decisions. Providing insights into model decisions is crucial for fostering transparency, accountability, and trust. Researchers and R&D departments work with diverse and variable data types depending on their projects, necessitating different Explainable AI (XAI) methods. For instance, image data might require rule-extraction or GradCAM visualizations, while tabular data might be best explained using SHAP, which offers solid theoretical foundations and both local and global explanations. This variety increases the complexity of the task, as multiple explainability methods must be tested and applied to find the most suitable ones.

The Need for Explainable AI

Automated creation of explainability pipelines is essential. Integrating XAI effectively in digital health data is crucial for precision medicine, helping to optimize patient healthcare and improve predictive modeling. Automated pipelines allow researchers to leverage XAI in their respective fields, driving impactful research while lowering the entry barrier to this powerful technology. With automated pipelines, researchers can focus on their core research areas without needing to stay updated on advancements in AI or the niche field of XAI, ensuring that AI systems remain transparent and trustworthy. Additionally, clinical validation and quality assessment of XAI methods are vital for ensuring the reliability and acceptance of AI models in healthcare.

Key Features of Explainable AI in Healthcare

Increase Trust and Transparency
  • Rich Evidence Packages: Provide comprehensive evidence for AI insights with tools for streamlined investigation, increasing trust in AI-driven research outcomes.
  • Integrate Explanations: Incorporate AI explanations into workflows using leading interpretability methods like LIME, SHAP, and ELI5.
Model Lineage
  • End-to-End Transparency: Ensure full transparency with end-to-end lineage tracing each AI insight back to raw data. Automatically track the history of critical model assets, including features, runtimes, hyperparameter tuning, models, and outputs.
  • Auditability: Allow compliance teams to audit any model or prediction at any point in time.
Bias and Fairness
  • Mitigate Bias: Use comprehensive toolkits and techniques to ensure equitable and fair outcomes. Integrate with libraries like AI Fairness 360, Alibi, and Aequitas.
Model Risk Management
  • Strict Access Controls: Implement customizable access controls and metadata records to meet rigorous security and governance requirements.
  • Review Model History: Review the complete model history, including user actions, model signing, verification, data lineage, and versioning. Configure access for all model artifacts, notebooks, input data, features, models, and outputs.
Human-AI Collaboration
  • Optimal Decision Making: Balance AI automation with human intervention for better decision-making in research. Configure automatic model retraining and promotion based on performance metrics, set up expert approval processes, and enable human-in-the-loop AI supervision across the research enterprise.

By incorporating Explainable AI, AICU ensures that researchers understand the "why" and "how" behind AI-driven insights, fostering trust and facilitating informed, transparent, and equitable medical research.

Captum: Enhancing Explainability in PyTorch

Captum, introduced by Narine Kokhlikyan et al., is a unified and generic model interpretability library for PyTorch that significantly enhances the explainability of AI models. Captum includes implementations of various gradient- and perturbation-based attribution algorithms, such as Integrated Gradients, DeepLift, GradientSHAP, and GradCAM, making it a versatile tool for explaining both classification and non-classification models.

Key features of Captum include:

  • Multimodality: Supports different input modalities such as image, text, audio, or video.
  • Extensibility: Allows adding new algorithms and features, making it highly adaptable.
  • Ease of Use: Designed for easy understanding and application, facilitating its use in both research and production environments.
  • Captum Insights: An interactive visualization tool built on top of Captum, enabling sample-based model debugging and visualization using feature importance metrics.

By incorporating Captum, researchers can leverage a robust and scalable explainability tool that supports various attribution methods and provides comprehensive evaluation metrics like infidelity and maximum sensitivity, ensuring the reliability of model explanations.
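As a brief illustration, the sketch below (our own minimal example, with an invented toy model, input sizes, and target class) shows how Integrated Gradients attributions can be computed with Captum:

```python
import torch
from captum.attr import IntegratedGradients

# Hypothetical setup: a small feed-forward classifier and a batch of inputs.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
model.eval()
inputs = torch.randn(4, 10)           # 4 samples, 10 features each
baselines = torch.zeros_like(inputs)  # reference point for the attribution path

# Integrated Gradients attributes the model's output to each input feature.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs,
    baselines=baselines,
    target=1,                        # class index being explained
    return_convergence_delta=True,   # sanity check on the approximation
)
print(attributions.shape)  # same shape as inputs: one score per feature
print(delta)               # small deltas indicate a faithful approximation
```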

SHAP: Enhancing Interpretability with Unified Attribution

SHAP (SHapley Additive exPlanations), introduced by Scott M. Lundberg and Su-In Lee in their paper "A Unified Approach to Interpreting Model Predictions," is a powerful tool for interpretable machine learning based on game theory (Lundberg, 2017). SHAP provides a unified approach to explaining the output of any machine learning model by connecting optimal credit allocation with local explanations using Shapley values from game theory and their related extensions.

Key features of SHAP include:

  • Unified Framework: Combines various interpretability methods into a single cohesive approach, providing a consistent way to interpret different models.
  • Theoretical Foundation: Based on solid theoretical foundations from game theory, ensuring that the attributions are fair and consistent.
  • Model Agnostic: Works with any machine learning model, from simple linear models to complex deep learning architectures.
  • Local and Global Interpretability: Provides both individual prediction explanations and global model insights, making it versatile for different use cases.

SHAP approximates Shapley values using methods like Kernel SHAP, which uses a weighting kernel for the approximation, and DeepSHAP, which leverages DeepLift for approximation in deep learning models. By incorporating SHAP, researchers and practitioners can gain a deeper understanding of their models, ensuring more transparent and accountable AI systems.
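As a minimal sketch (our own example; the dataset, model, and background size are illustrative, not taken from the SHAP paper), Kernel SHAP can be applied to a scikit-learn classifier roughly as follows:

```python
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

# Hypothetical example: a random forest trained on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Kernel SHAP is model-agnostic: it approximates Shapley values by querying
# the model on perturbed inputs, weighted by the Shapley kernel. A small
# background set (here, k-means summaries) represents "missing" features.
background = shap.kmeans(X, 10)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Local explanations: one additive attribution per feature for each of the
# first five samples; averaging their absolute values gives a global view.
shap_values = explainer.shap_values(X[:5])
```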

LIME: Local Interpretable Model-Agnostic Explanations

LIME (Local Interpretable Model-agnostic Explanations), introduced by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin in their paper "'Why Should I Trust You?': Explaining the Predictions of Any Classifier," is an innovative technique for explaining individual predictions of black box machine learning models (Ribeiro, 2016). LIME approximates the local behavior of a model by fitting a simple, interpretable model around each prediction, providing insights into which features are most influential in the decision-making process.

Key features of LIME include:

  • Local Approximation: Focuses on explaining individual predictions by approximating the model locally with an interpretable model, such as linear regression or decision trees.
  • Model Agnostic: Applicable to any machine learning model, regardless of its complexity or structure.
  • Flexibility: Can be used with various data types, including tabular data, text, and images.
  • Human-Readable Explanations: Generates explanations that are easy to understand, facilitating trust and transparency.

LIME modifies a single data sample by tweaking feature values and observes the resulting impact on the output. It performs the role of an "explainer" by providing a set of explanations representing the contribution of each feature to a prediction for a single sample, offering a form of local interpretability.
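For a concrete, hypothetical example, the sketch below trains a scikit-learn classifier on the iris dataset and asks LIME to explain a single prediction; the model and parameter choices are illustrative only:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Hypothetical example: a random forest trained on the iris dataset.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the chosen sample, queries the model on the perturbations,
# and fits a local interpretable surrogate whose weights form the explanation.
explanation = explainer.explain_instance(
    data.data[0],
    model.predict_proba,
    num_features=4,
)
print(explanation.as_list())  # (feature condition, weight) pairs for this sample
```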

Grad-CAM: Visual Explanations for Deep Learning Models

Grad-CAM (Gradient-weighted Class Activation Mapping), introduced by Ramprasaath R. Selvaraju and colleagues in their ICCV 2017 paper "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization," is a technique designed to provide visual explanations for decisions made by convolutional neural networks (CNNs) (Selvaraju, 2017). Grad-CAM uses the gradients of target outputs flowing into the final convolutional layer to produce a heatmap that highlights important regions in the input image, revealing the model’s focus areas.

Key features of Grad-CAM include:

  • Visual Explanations: Produces heatmaps that highlight the regions of the input image that are most relevant to the model’s predictions.
  • Model Compatibility: Applicable to a wide variety of CNN-based models, including those with fully-connected layers, structured outputs, and multimodal inputs.
  • Easy Integration: Can be easily integrated into existing deep learning workflows with minimal modifications.
  • Interpretability: Helps users understand which parts of an image contribute to the classification decision, enhancing the interpretability of deep learning models.
  • Enhanced Visualization: Combines with fine-grained visualizations to create high-resolution, class-discriminative visual explanations.

Grad-CAM is particularly useful for applications in computer vision, where visual explanations can provide valuable insights into the decision-making process of deep learning models. It also lends insight into failure modes, is robust to adversarial images, and helps identify dataset biases. An alternative to Grad-CAM is the Efficient Saliency Maps approach for Explainable AI, introduced by T. Nathan Mundhenk, Barry Y. Chen, and Gerald Friedland in their 2019 paper.
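The following is a minimal, self-contained sketch of the Grad-CAM computation (our own illustration using an untrained torchvision ResNet-18 and a random stand-in image, not the authors' reference implementation):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Hypothetical setup: an (untrained) ResNet-18 and a stand-in preprocessed image.
model = resnet18(weights=None).eval()
image = torch.randn(1, 3, 224, 224)

# Capture the activations of the last convolutional block and their gradients.
feats = {}

def forward_hook(module, inputs, output):
    feats["activation"] = output
    output.register_hook(lambda grad: feats.update(gradient=grad))

model.layer4[-1].register_forward_hook(forward_hook)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
scores[0, scores.argmax(dim=1).item()].backward()

# Grad-CAM: weight each channel by its average gradient, sum, and apply ReLU.
weights = feats["gradient"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["activation"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap over the input image
```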

ProtoPNet: Interpretable Image Recognition

ProtoPNet (Prototypical Part Network), introduced by Chaofan Chen et al. in their paper "This Looks Like That: Deep Learning for Interpretable Image Recognition," is a novel deep network architecture designed to provide interpretable image classification (Chen, 2019). ProtoPNet reasons in a way that is qualitatively similar to how experts like ornithologists or physicians explain challenging image classification tasks by dissecting the image and pointing out prototypical aspects of one class or another.

Key features of ProtoPNet include:

  • Prototypical Reasoning: Dissects images by finding prototypical parts and combines evidence from these prototypes to make a final classification.
  • Interpretability: Offers a level of interpretability that is absent in other interpretable deep models, making it easier for users to understand and trust model decisions.
  • No Part Annotations Needed: Uses only image-level labels for training without requiring annotations for parts of images.
  • Comparable Accuracy: Achieves accuracy comparable to non-interpretable counterparts and can be combined with other networks for enhanced performance.

ProtoPNet has been demonstrated on datasets like CUB-200-2011 and Stanford Cars, showing that it can achieve high accuracy while providing clear, interpretable insights into model decisions. This makes ProtoPNet a valuable tool for domains where understanding the rationale behind image classifications is critical, such as healthcare and wildlife conservation. By dissecting images and using prototypical parts, ProtoPNet mimics human reasoning, enhancing trust and transparency in AI-driven image recognition tasks.
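To make the mechanism concrete, here is a simplified toy re-implementation of the prototype-scoring idea (our own sketch, not the authors' code; all layer sizes and prototype counts are invented):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoPNetHead(nn.Module):
    """Toy sketch of ProtoPNet-style scoring: compare convolutional feature
    patches to learned prototypes and turn the similarities into class logits."""

    def __init__(self, in_channels=512, num_prototypes=20, num_classes=5):
        super().__init__()
        # Each prototype is a 1x1 patch in the latent feature space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, in_channels, 1, 1))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, features):  # features: (B, C, H, W) from a CNN backbone
        # Squared L2 distance between every spatial patch and every prototype,
        # via the expansion ||x - p||^2 = ||x||^2 - 2 x.p + ||p||^2.
        x_sq = (features ** 2).sum(dim=1, keepdim=True)                     # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)  # (1, P, 1, 1)
        xp = F.conv2d(features, self.prototypes)                            # (B, P, H, W)
        distances = F.relu(x_sq - 2 * xp + p_sq)
        # The closest patch to each prototype supplies that prototype's evidence.
        min_dist = -F.max_pool2d(-distances, kernel_size=(distances.shape[2], distances.shape[3]))
        min_dist = min_dist.flatten(1)
        similarities = torch.log((min_dist + 1) / (min_dist + 1e-4))
        return self.classifier(similarities)  # class logits from prototype evidence

head = ProtoPNetHead()
logits = head(torch.randn(2, 512, 7, 7))  # fake backbone features
print(logits.shape)                       # torch.Size([2, 5])
```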

OmniXAI: A Library for Explainable AI

OmniXAI (short for Omni eXplainable AI) is an open-source Python library designed to provide comprehensive explainable AI (XAI) capabilities. Developed by Wenzhuo Yang, Hung Le, Tanmay Laud, Silvio Savarese, and Steven C. H. Hoi, OmniXAI aims to be a one-stop solution for data scientists, ML researchers, and practitioners who need to understand and interpret the decisions made by machine learning models (Yang, 2022).

Key features of OmniXAI include:

  • Multiple Data Types: Supports tabular, image, text, and time-series data.
  • Broad Model Support: Works with traditional ML models in Scikit-learn as well as deep learning models in PyTorch and TensorFlow.
  • Wide Range of Methods: Integrates model-specific and model-agnostic explanation techniques, including feature-attribution, counterfactual, and gradient-based explanations.
  • Unified Interface: Offers a user-friendly interface for generating explanations with minimal coding, plus a GUI dashboard for visualizing different explanations and gaining deeper insights into model decisions.

This makes OmniXAI an invaluable tool for enhancing transparency and trust in AI systems across various stages of the ML process, from data exploration and feature engineering to model development and decision-making.

Stop Explaining Black Box AI Models: Use Interpretable Models Instead

There is another approach to explainability in AI: why use black box models at all? Numerous inherently interpretable models are available out of the box and can be used instead. In her influential paper, Cynthia Rudin argues against the reliance on black box machine learning models for high-stakes decision-making in fields like healthcare and criminal justice (Rudin, 2018). She emphasizes that the current trend of developing methods to explain these opaque models is inadequate and potentially harmful.

Rudin advocates for a paradigm shift towards designing inherently interpretable models from the outset. This approach mitigates the risks associated with black box models, which include perpetuating bad practices and causing significant societal harm. The manuscript discusses the fundamental differences between explaining black boxes and using interpretable models, highlighting the critical need to avoid explainable black boxes in high-stakes scenarios.

Rudin identifies the challenges of creating interpretable machine learning models and presents case studies demonstrating how interpretable models can be effectively utilized in domains such as criminal justice, healthcare, and computer vision. These examples offer a safer and more transparent alternative to black box models, ensuring decisions are understandable and justifiable.
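To make the contrast concrete, here is a small illustration of our own (not taken from Rudin's paper): a shallow decision tree whose entire decision logic can be printed and audited as explicit rules:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical illustration: a shallow tree is interpretable by construction,
# because its complete decision logic fits in a handful of readable rules.
dataset = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(dataset.data, dataset.target)
print(export_text(tree, feature_names=list(dataset.feature_names)))
```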

By using interpretable models, we can foster greater trust and accountability in AI systems, especially in critical areas where decisions have far-reaching consequences. This shift not only improves the reliability of AI but also aligns with ethical standards, ensuring that technology serves society in a responsible and transparent manner.

Conclusion

Explainable AI (XAI) is transforming the healthcare industry by making AI models more transparent, accountable, and trustworthy. Automated explainability pipelines are crucial for integrating XAI effectively, allowing researchers to focus on their core areas while leveraging advanced AI insights. Features like rich evidence packages, model lineage, bias mitigation, model risk management, and human-AI collaboration are essential components of an effective XAI framework. By incorporating these features, AICU is paving the way for a more transparent, equitable, and efficient healthcare research environment.

References

Allen, 2024
Allen, B. (2024). The promise of explainable AI in digital health for precision medicine: A systematic review. Journal of Personalized Medicine, 14(3), 277. https://doi.org/10.3390/jpm14030277

Di Martino & Delmastro, 2022
Di Martino, F., & Delmastro, F. (2022). Explainable AI for clinical and remote health applications: A survey on tabular and time series data. Artificial Intelligence Review. https://doi.org/10.1007/s10462-022-10304-3

Pesecan & Stoicu-Tivadar, 2023
Pesecan, C. M., & Stoicu-Tivadar, L. (2023). Increasing trust in AI using explainable artificial intelligence for histopathology - An overview. Studies in Health Technology and Informatics, 305, 14-17. https://doi.org/10.3233/SHTI230411

Kokhlikyan, 2020
Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Alsallakh, B., Reynolds, J., Melnikov, A., Kliushkina, N., Araya, C., Yan, S., & Reblitz-Richardson, O. (2020). Captum: A unified and generic model interpretability library for PyTorch. arXiv preprint arXiv:2009.07896. https://arxiv.org/abs/2009.07896

Yang, 2022
Yang, W., Le, H., Laud, T., Savarese, S., & Hoi, S. C. H. (2022). OmniXAI: A Library for Explainable AI. arXiv preprint arXiv:2206.02239. https://arxiv.org/abs/2206.02239

Rudin, 2018
Rudin, C. (2018). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. arXiv preprint arXiv:1811.10154. https://arxiv.org/abs/1811.10154

Lundberg, 2017
Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. arXiv preprint arXiv:1705.07874. https://arxiv.org/abs/1705.07874

Ribeiro, 2016
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv preprint arXiv:1602.04938. https://arxiv.org/abs/1602.04938

Selvaraju, 2017
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017. https://arxiv.org/abs/1610.02391

Chen, 2019
Chen, C., Li, O., Tao, C., Barnett, A. J., Su, J., & Rudin, C. (2019). This Looks Like That: Deep Learning for Interpretable Image Recognition. NeurIPS 2019. https://arxiv.org/abs/1806.10574
