An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans

Published in IEEE Conference on Computational Intelligence and Games (CIG), 2018

In this paper, we investigate the communication of status-related social signals by means of a virtual human's eye gaze. We constructed a cross-domain verbal-conceptual computational model of gaze for virtual humans to facilitate the display of social status. We describe the validation of the model's parameters, including the length of eye contact and gazes, movement velocity, equilibrium response, and head and body posture. In a first set of studies, conducted on Amazon Mechanical Turk using pre-recorded video clips of animated characters, we found statistically significant differences in how the characters' status was rated as the model's status settings were varied. In a second step, building on these empirical findings, we designed an interactive system that incorporates dynamic eye tracking and spoken dialog, along with real-time control of a virtual character. We evaluated the model in a presential, interactive scenario: a simulated hiring interview. Corroborating our previous findings, the interactive study yielded significant differences in the perception of status (p = .046). We believe status is an important aspect of dramatic believability; accordingly, this paper presents our social eye gaze model for realistic, procedurally animated characters and demonstrates its efficacy.
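The abstract names the model's parameters (length of eye contact and gazes, movement velocity, head and body posture) but does not specify an implementation. As a rough illustration of how a single status level might be mapped onto such a parameter set, here is a minimal Python sketch. All names and numeric values (`GazeParameters`, `parameters_for_status`, the durations, speeds, and angles) are hypothetical and are not taken from the paper's model.

```python
from dataclasses import dataclass


@dataclass
class GazeParameters:
    """Hypothetical parameter bundle for a status-driven gaze model."""
    mutual_gaze_duration: float  # seconds of sustained eye contact
    aversion_duration: float     # seconds of gaze aversion before re-engaging
    movement_velocity: float     # normalized head/eye movement speed
    head_pitch_deg: float        # head pitch; negative values lower the head


def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b."""
    return a + (b - a) * t


# Illustrative endpoint poses: a low-status character holds eye contact
# briefly, averts its gaze for longer, moves quickly, and lowers its head;
# a high-status character does the opposite. Values are invented.
LOW_STATUS = GazeParameters(0.8, 2.5, 1.4, -15.0)
HIGH_STATUS = GazeParameters(3.0, 0.7, 0.6, 5.0)


def parameters_for_status(status: float) -> GazeParameters:
    """Map a status level in [0, 1] to a concrete gaze parameter set."""
    t = max(0.0, min(1.0, status))
    return GazeParameters(
        mutual_gaze_duration=lerp(LOW_STATUS.mutual_gaze_duration,
                                  HIGH_STATUS.mutual_gaze_duration, t),
        aversion_duration=lerp(LOW_STATUS.aversion_duration,
                               HIGH_STATUS.aversion_duration, t),
        movement_velocity=lerp(LOW_STATUS.movement_velocity,
                               HIGH_STATUS.movement_velocity, t),
        head_pitch_deg=lerp(LOW_STATUS.head_pitch_deg,
                            HIGH_STATUS.head_pitch_deg, t),
    )


if __name__ == "__main__":
    # Parameters for a fairly low-status character.
    print(parameters_for_status(0.2))
```

In a real-time animation loop, a mapping like this would be queried whenever the character's status setting changes, and the resulting parameters would drive the procedural gaze and posture controllers.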

Paper available here

Recommended citation: Nixon, M., DiPaola, S., & Bernardet, U. (2018). An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans. In 2018 IEEE Conference on Computational Intelligence and Games (CIG).