List DILS invites you to their event

Séminaire Caterina Urban: Interpretability-Aware Verification of Machine Learning Software

About this event


Machine learning (ML) software is increasingly employed in high-stakes or sensitive applications, which poses important safety, privacy, and non-discrimination challenges. As a consequence, research in ML verification has rapidly gained popularity, and the demand for interpretable ML models is increasingly pronounced. Interpretability and verification are typically seen as orthogonal concerns: research in ML interpretability is mainly carried out in the ML community, while research in ML verification is mostly carried out in the formal methods community, with little communication and exchange between these research areas. In this talk, we advocate closing this gap by presenting our recent and ongoing work on interpretability-aware verification of ML software. On the one hand, we show how verification can aid interpretability by providing a new feature importance measure for support vector machines. On the other hand, we discuss how interpretability can guide verification by proposing a new saliency-guided robustness verification problem for deep neural networks. We conclude with plans and perspectives on future work.
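To make the idea of feature importance for support vector machines concrete, the sketch below trains a toy linear SVM by subgradient descent on the hinge loss and uses the classical weight-magnitude heuristic as an importance proxy. This is only an illustrative baseline, not the verification-based measure presented in the talk; the data and all names in the snippet are hypothetical.

```python
import numpy as np

# Toy data: feature 0 determines the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

# Train a linear SVM by subgradient descent on the regularized hinge loss.
w, b = np.zeros(2), 0.0
lr, lam = 0.1, 0.01
for _ in range(500):
    margins = y * (X @ w + b)
    mask = margins < 1  # points violating the margin
    if mask.any():
        grad_w = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
        grad_b = -y[mask].mean()
    else:
        grad_w, grad_b = lam * w, 0.0
    w -= lr * grad_w
    b -= lr * grad_b

# Naive importance proxy: absolute weight per feature.
importance = np.abs(w)
print(importance)  # the weight of feature 0 should dominate
```

The weight-magnitude heuristic only applies to linear models and ignores feature interactions, which is one reason more principled, verification-backed importance measures are of interest.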

Hosted by

  • Julien Girard-Satabin (CEA)

List DILS

The CEA is a research organization working on defense and security, nuclear and renewable energies, technological research for industry, and fundamental research in the physical and life sciences.