Mattia Carletti

Curriculum
Computer Science and Innovation for Societal Challenges, XXXV series

Grant sponsor
UNIPD

Supervisor
GianAntonio Susto

Co-supervisor
Anna Spagnolli

Contact
mattia.carletti@unipd.it

Project description

In the era of the Fourth Industrial Revolution, smart factories rely heavily on Big Data analytics to extract business value from industrial data. This has driven the adoption of Machine Learning techniques, which provide automatic, intelligent tools to support decision-makers. Typical applications are Anomaly Detection, the automatic identification of anomalies in the collected data, and Predictive Maintenance, the optimal scheduling of maintenance interventions based on the predicted health status of the equipment. Thanks to recent advances in processing hardware, complex models such as Deep Neural Networks can achieve state-of-the-art performance, but at the price of reduced insight into the model's inner workings. This is an obstacle to the widespread adoption of intelligent algorithms, since the lack of comprehension can undermine human trust in Machine Learning systems. The need for interpretable models is therefore pressing, especially in the context of Industry 4.0, where close human-machine interaction is a key factor.
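To make the Anomaly Detection setting concrete, the sketch below fits an Isolation Forest (one of the black box models mentioned in the research directions that follow) on synthetic data using scikit-learn. The dataset and parameter values are illustrative assumptions, not data or results from the project.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.RandomState(42)

    # Illustrative synthetic data: a dense cluster of normal points
    # plus a handful of scattered outliers (assumed, not project data).
    X_normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
    X_outliers = rng.uniform(low=-6.0, high=6.0, size=(10, 2))
    X = np.vstack([X_normal, X_outliers])

    # Fit the (black box) anomaly detector.
    model = IsolationForest(n_estimators=100, contamination=0.05,
                            random_state=42).fit(X)

    # predict() flags anomalies with -1; score_samples() returns a
    # continuous anomaly score (the lower, the more anomalous).
    labels = model.predict(X)
    scores = model.score_samples(X)
    print(f"Flagged {(labels == -1).sum()} of {len(X)} points as anomalous")

A detector like this performs well, but it gives no direct account of why a given point was flagged, which is exactly the gap the project addresses.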
The main goal of my research project is to enhance the interpretability of Machine Learning models, with a major focus on smart manufacturing applications. This objective will be pursued along two main research directions:
- definition of methods to improve the interpretability of existing black box models commonly used for Anomaly Detection and Predictive Maintenance tasks (e.g., Isolation Forest and Deep Neural Networks); a generic post-hoc sketch follows this list;
- design of intrinsically interpretable models according to the transparent box design principle (illustrated by the second sketch below).
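As a sketch of the first direction, one generic post-hoc strategy is to measure how much a detector's anomaly scores change when each feature is shuffled. The function below (the name permutation_score_importance is hypothetical) applies this permutation-style importance to an Isolation Forest; it is a simple illustrative baseline, not the method developed in the project.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    def permutation_score_importance(model, X, n_repeats=10, seed=0):
        # Rank features by how strongly shuffling each one perturbs
        # the model's anomaly scores (a generic post-hoc proxy).
        rng = np.random.RandomState(seed)
        baseline = model.score_samples(X)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            deltas = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])  # break feature j's link to the scores
                deltas.append(np.mean(np.abs(model.score_samples(X_perm) - baseline)))
            importances[j] = np.mean(deltas)
        return importances

    # Toy check: anomalies are injected along feature 0 only, so its
    # importance should dominate.
    rng = np.random.RandomState(1)
    X = rng.normal(size=(300, 3))
    X[:15, 0] += 6.0
    model = IsolationForest(random_state=1).fit(X)
    print(permutation_score_importance(model, X))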
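For the second direction, a shallow decision tree is a standard example of a transparent box model: its entire prediction logic can be printed and read as explicit rules. The snippet below, using scikit-learn's bundled Iris dataset, is only meant to illustrate the design principle, not a model from the project.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # A depth-2 tree is interpretable by construction: its decision
    # process is a handful of human-readable if/else rules.
    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(data.data, data.target)
    print(export_text(tree, feature_names=list(data.feature_names)))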