Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots. / Tsiourti, Christiana; Weiss, Astrid; Wac, Katarzyna; Vincze, Markus.

In: International Journal of Social Robotics, Vol. 11, No. 4, 04.02.2019, pp. 555–573.


Harvard

Tsiourti, C, Weiss, A, Wac, K & Vincze, M 2019, 'Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots', International Journal of Social Robotics, vol. 11, no. 4, pp. 555–573. https://doi.org/10.1007/s12369-019-00524-z

APA

Tsiourti, C., Weiss, A., Wac, K., & Vincze, M. (2019). Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots. International Journal of Social Robotics, 11(4), 555–573. https://doi.org/10.1007/s12369-019-00524-z

Vancouver

Tsiourti C, Weiss A, Wac K, Vincze M. Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots. International Journal of Social Robotics. 2019 Feb 4;11(4):555–573. https://doi.org/10.1007/s12369-019-00524-z

Author

Tsiourti, Christiana ; Weiss, Astrid ; Wac, Katarzyna ; Vincze, Markus. / Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots. In: International Journal of Social Robotics. 2019 ; Vol. 11, No. 4. pp. 555–573.

Bibtex

@article{3ea0b418a7a042d98dc400d425b2063b,
title = "Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots",
abstract = "Humanoid social robots have an increasingly prominent place in today's world. Their acceptance in social and emotional human--robot interaction (HRI) scenarios depends on their ability to convey well recognized and believable emotional expressions to their human users. In this article, we incorporate recent findings from psychology, neuroscience, human--computer interaction, and HRI, to examine how people recognize and respond to emotions displayed by the body and voice of humanoid robots, with a particular emphasis on the effects of incongruence. In a social HRI laboratory experiment, we investigated contextual incongruence (i.e., the conflict situation where a robot's reaction is incongruous with the socio-emotional context of the interaction) and cross-modal incongruence (i.e., the conflict situation where an observer receives incongruous emotional information across the auditory (vocal prosody) and visual (whole-body expressions) modalities). Results showed that both contextual incongruence and cross-modal incongruence confused observers and decreased the likelihood that they accurately recognized the emotional expressions of the robot. This, in turn, gives the impression that the robot is unintelligent or unable to express ``empathic'' behaviour and leads to profoundly harmful effects on likability and believability. Our findings reinforce the need for proper design of emotional expressions for robots that use several channels to communicate their emotional states in a clear and effective way. We offer recommendations regarding design choices and discuss future research areas in the direction of multimodal HRI.",
author = "Christiana Tsiourti and Astrid Weiss and Katarzyna Wac and Markus Vincze",
year = "2019",
month = feb,
day = "4",
doi = "10.1007/s12369-019-00524-z",
language = "English",
volume = "11",
pages = "555–573",
journal = "International Journal of Social Robotics",
issn = "1875-4791",
publisher = "Springer",
number = "4",
}

RIS

TY - JOUR

T1 - Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots

AU - Tsiourti, Christiana

AU - Weiss, Astrid

AU - Wac, Katarzyna

AU - Vincze, Markus

PY - 2019/2/4

Y1 - 2019/2/4

N2 - Humanoid social robots have an increasingly prominent place in today's world. Their acceptance in social and emotional human–robot interaction (HRI) scenarios depends on their ability to convey well recognized and believable emotional expressions to their human users. In this article, we incorporate recent findings from psychology, neuroscience, human–computer interaction, and HRI, to examine how people recognize and respond to emotions displayed by the body and voice of humanoid robots, with a particular emphasis on the effects of incongruence. In a social HRI laboratory experiment, we investigated contextual incongruence (i.e., the conflict situation where a robot's reaction is incongruous with the socio-emotional context of the interaction) and cross-modal incongruence (i.e., the conflict situation where an observer receives incongruous emotional information across the auditory (vocal prosody) and visual (whole-body expressions) modalities). Results showed that both contextual incongruence and cross-modal incongruence confused observers and decreased the likelihood that they accurately recognized the emotional expressions of the robot. This, in turn, gives the impression that the robot is unintelligent or unable to express "empathic" behaviour and leads to profoundly harmful effects on likability and believability. Our findings reinforce the need for proper design of emotional expressions for robots that use several channels to communicate their emotional states in a clear and effective way. We offer recommendations regarding design choices and discuss future research areas in the direction of multimodal HRI.

AB - Humanoid social robots have an increasingly prominent place in today's world. Their acceptance in social and emotional human–robot interaction (HRI) scenarios depends on their ability to convey well recognized and believable emotional expressions to their human users. In this article, we incorporate recent findings from psychology, neuroscience, human–computer interaction, and HRI, to examine how people recognize and respond to emotions displayed by the body and voice of humanoid robots, with a particular emphasis on the effects of incongruence. In a social HRI laboratory experiment, we investigated contextual incongruence (i.e., the conflict situation where a robot's reaction is incongruous with the socio-emotional context of the interaction) and cross-modal incongruence (i.e., the conflict situation where an observer receives incongruous emotional information across the auditory (vocal prosody) and visual (whole-body expressions) modalities). Results showed that both contextual incongruence and cross-modal incongruence confused observers and decreased the likelihood that they accurately recognized the emotional expressions of the robot. This, in turn, gives the impression that the robot is unintelligent or unable to express "empathic" behaviour and leads to profoundly harmful effects on likability and believability. Our findings reinforce the need for proper design of emotional expressions for robots that use several channels to communicate their emotional states in a clear and effective way. We offer recommendations regarding design choices and discuss future research areas in the direction of multimodal HRI.

UR - http://www.mendeley.com/research/multimodal-integration-emotional-signals-voice-body-context-effects-incongruence-emotion-recognition

U2 - 10.1007/s12369-019-00524-z

DO - 10.1007/s12369-019-00524-z

M3 - Journal article

VL - 11

SP - 555

EP - 573

JO - International Journal of Social Robotics

JF - International Journal of Social Robotics

SN - 1875-4791

IS - 4

ER -
