Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
Computer Sciences, Human–machine relations, Specific languages, Linguistics
This research explores and evaluates the contribution that facial expressions might make towards improved comprehension and acceptability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), the responsiveness of the Deaf community to sign language avatars is examined (the uppercase "D" in "Deaf" denotes Deaf as a culture, as opposed to "deaf" as a medical condition). The hypothesis is as follows: augmenting an existing avatar with the seven widely accepted universal emotions identified by Ekman (Basic emotions: handbook of cognition and emotion. Wiley, London, 2005) to produce underlying facial expressions will make that avatar more human-like and improve usability and understandability for the ISL user. Using human evaluation methods (Huenerfauth et al. in Trans Access Comput (ACM) 1:1, 2008), an augmented set of avatar utterances is compared against a baseline set, focusing on two key areas: comprehension and naturalness of facial configuration. The approach to the evaluation, including the choice of ISL participants, the interview environment, and the evaluation methodology, is then outlined. The evaluation results reveal that, in a comprehension test, there was little difference between the baseline avatars and those augmented with emotional facial expressions. It was also found that the avatars lack various linguistic attributes.
Smith, R.G. and Nolan, B. (2016). Emotional facial expressions in synthesised sign language avatars: a manual evaluation. Universal Access in the Information Society, 15(4), 567-576. DOI:10.1007/s10209-015-0410-7