Explainable Machine Learning

Teacher
GianAntonio Susto
Department of Information Engineering
gianantonio.susto[at]dei.unipd.it
ING-INF/04

Aim
In recent years, automatic decision making has been enabled by advances in Machine Learning (ML) in several important fields such as medical diagnosis, insurance, security, and financial trading. The principles underlying some ML approaches are difficult to understand, given the complexity of black-box models. This lack of interpretability raises several questions: can we trust the outcome of an ML module? Is the outcome of the ML module fair? What really matters in an ML model? Such questions are fundamental for the two main classes of researchers using ML:
- ‘ML Users’, for example in the field of psychology, who use ML models as tools and for whom interpretability can be exploited to validate hypotheses and to design new experiments;
- ‘ML Developers’, for example in the field of computer science, for whom interpretability can be exploited to understand complex models.

The aim of this course is to motivate the importance of interpretability in the context of Machine Learning and to present some ML algorithms and procedures that have a certain level of interpretability.

Syllabus
- Importance of Interpretability in Machine Learning;
- Taxonomy of Interpretability;
- Interpretable models;
- Model-agnostic Methods for Interpretability (see the sketch below).
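As a taste of the model-agnostic methods covered in the course, below is a minimal sketch of permutation feature importance, one of the techniques described in Molnar's book listed under Introductory reading. The dataset (scikit-learn's breast-cancer data) and the random-forest model are illustrative assumptions, not course material.

# Minimal sketch: permutation feature importance, a model-agnostic
# interpretability method. Dataset and model choices are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black-box" model whose predictions we want to explain.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Shuffle one feature at a time: the drop in accuracy estimates how
# much the model relies on that feature, regardless of the model type.
rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y_test))

# Report the five most important features.
for j in np.argsort(importances)[::-1][:5]:
    print(f"feature {j}: importance {importances[j]:.4f}")

Because the method only queries the fitted model through its predictions, the same loop applies unchanged to any classifier, which is what makes it model-agnostic.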

Introductory reading
- C. Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable, https://christophm.github.io/interpretable-ml-book/

Course requirements
None

Examination modality
None

Course material, enrollment and last minute notifications
Made available by the teacher at this Moodle address

Schedule
18 Mar 2020, 9:30-12:30 (new date)
19 Mar 2020, 9:30-12:30 (new date)

Location
Via Zoom; the URL of the Zoom meeting is posted in the Moodle space of the course
