Mattia Furlan

Curriculum
Neuroscience, Technology and Society, XXXIV series
Grant sponsor
UNIPD
Supervisor
Anna Spagnolli
Co-supervisor
GianAntonio Susto

Project: Non-verbal cues of engagement during video interviews: Third-party assessment, construction and validation of a training and automatic detection
Full text of the dissertation book can be downloaded from: https://www.research.unipd.it/handle/11577/3463842

Abstract: The use of videoconferencing platforms, through which groups of people can communicate at a distance, has grown in recent years and accelerated after the Covid-19 pandemic forced interactions to move online. These remote interactions include work-related exchanges and, notably, job interviews. Video-mediated recruitment had already begun before the outbreak of the pandemic and is expected to become increasingly common in the world of work. It is therefore essential to identify the components that matter when conducting such an interview, and to understand how to adapt to the new interview tools. This project was built to provide a useful tool for both candidates and recruiters. Its aim is to make explicit the non-verbal cues that characterise engagement in an interaction, so that candidates can recognise and use them to improve, while recruiters can adjust them in their own behaviour and consider them as part of the evaluation they express.

The project consisted of the following studies. A preliminary study was designed to identify recurrent movement patterns of the parties involved in an interaction, from the point of view of Physical Mutual Engagement (PME). In this study, participants rated the engagement between parties involved in work-related interactions. A content analysis of the participants' answers to the open-ended questions was conducted to identify the behavioural cues of PME: 57 engagement cues were found, divided into 9 Behaviours and associated with 8 Meanings.

A second study was carried out to validate the non-verbal engagement cues identified in the first study. To this end, a training was constructed and administered to 20 participants involved in a job interview. To check the effectiveness of this training, the participants' behaviour was annotated and analysed. The behaviours of Gaze, Nodding and Smiling were identified. The training proved effective in increasing Looking into the camera and decreasing Looking away; it was also effective in increasing Nodding, but not Smiling.

A third study was then performed to verify whether the parameters found in the first study could effectively improve PME. The videos collected during the second study were evaluated by an independent commission to determine whether the training had an effect on the participants' engagement, and the evaluations of the pre- and post-interval videos were compared. The training constructed during this project was indeed effective in increasing perceived engagement.

Subsequently, a fourth study checked whether the behavioural cues annotated during the second study correlated with the engagement ratings assigned by the external evaluators in the third study. To this end, the average engagement scores from the third study were compared with the four behavioural cues annotated in the second study. A correlation was found between Gaze behaviour and engagement scores: higher engagement scores corresponded to higher frequencies of Looks into the camera and lower frequencies of Looks away. No correlation was found between Nods and engagement scores, while a slight correlation was found between Smiles and engagement scores.
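For illustration, an analysis of this kind could be run as in the following minimal sketch. The file name, column names and the choice of Spearman's rank correlation are assumptions, since the abstract does not specify the data layout or the correlation coefficient used.

```python
# Minimal sketch of the study-4 correlation analysis; the file name,
# column names and choice of Spearman's rho are assumptions, as the
# abstract does not specify them.
import pandas as pd
from scipy.stats import spearmanr

# One row per participant: mean engagement score (study 3) and
# annotated cue frequencies (study 2).
data = pd.read_csv("cues_and_engagement.csv")  # hypothetical file

for cue in ["looks_into_camera", "looks_away", "nods", "smiles"]:
    rho, p = spearmanr(data["engagement_score"], data[cue])
    print(f"{cue}: rho = {rho:.2f}, p = {p:.3f}")
```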
Finally, a fifth study was conducted to build a model capable of extracting and predicting the non-verbal cues found and tested in the previous studies, using state-of-the-art machine learning algorithms. The annotated frames of the videos were used to train and test the model, based on a network for facial recognition. Considerations on the most suitable approach for predicting the considered behaviours are also reported.
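As an illustration of the kind of model the fifth study describes, the sketch below fine-tunes a generic ImageNet-pretrained CNN to classify annotated video frames into the four behaviours. The abstract does not name the facial-recognition network or the training setup, so the architecture, hyperparameters and label set here are all assumptions.

```python
# Sketch of a frame-level behaviour classifier. The backbone
# (ResNet-18), optimiser and label set are assumptions; the
# dissertation abstract does not name the exact network used.
import torch
import torch.nn as nn
from torchvision import models

NUM_BEHAVIOURS = 4  # e.g. look-into-camera, look-away, nod, smile

# Load an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_BEHAVIOURS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of annotated video frames.

    frames: (batch, 3, 224, 224) cropped face images
    labels: (batch,) integer behaviour labels from the annotations
    """
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```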