This item is available under a Creative Commons License for non-commercial use only.
It has been shown that humans are sensitive to the portrayal of emotions by virtual characters. However, previous work in this area has often examined this sensitivity using extreme examples of facial or body animation. Less is known about how attuned people are to emotions as they are expressed during conversational communication. In order to determine whether body or facial motion is a better indicator of emotional expression for game characters, we conduct a perceptual experiment using synchronized full-body and facial motion-capture data. We find that people can recognize emotions from either modality alone, but that combining facial and body motion is preferable in order to create more expressive characters.
Ennis, C., Hoyet, L., Egges, A. & McDonnell, R. (2013). Emotion capture: emotionally expressive characters for games. Proceedings of MIG '13: Motion in Games, pp. 53-60. doi:10.1145/2522628.2522633