PathAI Publications

Interpretability analysis on a pathology foundation model reveals biologically relevant embeddings across modalities

Jun 21, 2024

Study Background

Pathology is the study of disease through the microscopic inspection of tissue, and machine learning (ML) has been applied to pathology images for a wide range of tasks. Interpretability is crucial for ML in medical imaging: it builds decision-makers' trust, helps debug silent failure modes and shortcut learning, and reduces the risk of catastrophic model failures in real-world deployments. Prior work on interpretability in pathology has focused on assigning spatial credit to whole-slide image (WSI)-level predictions, computing human-interpretable features from model output heatmaps, and visualizing multi-head self-attention values on image patches. Mechanistic interpretability has been explored in detail for large language models (LLMs) [1] but remains underexplored for vision models, especially in pathology.
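
To make the attention-visualization idea mentioned above concrete, here is a minimal, self-contained sketch of how per-head self-attention weights from a ViT-style layer can be mapped back onto the 2-D grid of image patches. This is illustrative only, not the study's actual method: the `nn.MultiheadAttention` layer, the synthetic tokens, and the assumed 14x14 patch grid are all stand-in assumptions for demonstration.

```python
# Illustrative sketch (not PathAI's implementation): reshape a
# ViT-style [CLS]-token attention map back onto the patch grid.
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 4
grid = 14                      # assumed 14x14 grid of image patches
num_patches = grid * grid

# tokens: one [CLS] token plus one embedding per patch (batch of 1)
tokens = torch.randn(1, 1 + num_patches, embed_dim)

attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
# average_attn_weights=False keeps a separate attention map per head
_, weights = attn(tokens, tokens, tokens,
                  need_weights=True, average_attn_weights=False)
# weights shape: (batch, heads, query_len, key_len)

# attention from the [CLS] token to every patch token, per head
cls_to_patches = weights[0, :, 0, 1:]            # (heads, num_patches)
heatmaps = cls_to_patches.reshape(num_heads, grid, grid)
print(heatmaps.shape)  # torch.Size([4, 14, 14])
```

In practice, each per-head heatmap would be upsampled to the input resolution and overlaid on the image to inspect what that head attends to.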

Conference

ICML 2024

Partner

Gilead Sciences
