Explainable Machine Learning

GianAntonio Susto
Department of Information Engineering

In recent years, advances in Machine Learning (ML) have enabled automatic decision making in several important fields, such as medical diagnosis, insurance, security, and financial trading. The principles underlying some ML approaches are difficult to understand, given the complexity of black-box models. This lack of interpretability raises several questions: can we trust the outcome of an ML module? Is the outcome of the ML module fair? What really matters in an ML model? Such questions are fundamental for the two main classes of researchers using ML:
- ‘ML Users’, for example in the field of psychology, for whom ML models are used as tools and where interpretability can be exploited for validating hypotheses and for designing new experiments;
- ‘ML Developers’, for example in the field of computer science, for whom the interpretability of a model can be exploited for understanding complex models.

The aim of this course is to motivate the importance of interpretability in the context of Machine Learning and to present some ML algorithms and procedures that have a certain level of interpretability.

- Importance of Interpretability in Machine Learning;
- Taxonomy of Interpretability;
- Interpretable models;
- Model-agnostic Methods for Interpretability.
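As a taste of the last topic, one widely used model-agnostic method (also covered in Molnar's book below) is permutation feature importance: shuffle one feature at a time and measure how much a fitted model's performance degrades. The sketch below is purely illustrative and is not part of the course material; it uses scikit-learn and synthetic data, and all names in it are chosen for the example.

```python
# Illustrative sketch of a model-agnostic interpretability method:
# permutation feature importance on a synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: only 3 of the 5 features carry information
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any black-box model works here; the method never looks inside it
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy:
# a large drop means the model relies heavily on that feature
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

Because the method only queries the model through predictions, the same code applies unchanged to any estimator, which is exactly what "model-agnostic" means.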

Introductory reading
- C. Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, https://christophm.github.io/interpretable-ml-book/

Course requirements

Examination modality

Course material, enrollment and last minute notifications
Made available by the teacher at this Moodle address

15 Mar 2021, 10:30-12:30
16 Mar 2021, 10:30-12:30
17 Mar 2021, 10:30-12:30

Room 1BC50, Dept of Mathematics; a Zoom link for quarantined students will be available on the Moodle page of the course.
