EMPATI: EMotion Perception Across mulTimodal Interaction
My broad research interest lies in the remarkable human capacity to convey a wide range of meanings with only a limited set of available phonemes and with very small changes in the speech signal. I find it fascinating how such minimal changes – which a third-party listener often barely notices – are perceived, and immediately and unconsciously reacted to, by the listener they are directed at.
The question that drives my research is: what happens when speech is conveyed by a machine, for example a computer or a robot? In my EDGE project at the Sigmedia lab, I will study how mismatched audio-visual emotional expression in Human-Machine Interaction affects the human interaction partner. What happens if an avatar's face is smiling, but its voice is not?
Before joining the EDGE project, I obtained a PhD in Psychology from Plymouth University, where I investigated the effect of different voice characteristics – accent, prosody, naturalness – on trust towards virtual agents and robots. I also hold a Master's in Phonetics and Phonology from the University of York and a Bachelor's in Languages and Linguistic Sciences from the Catholic University of Milan.