Publications

An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans

Published in IEEE Conference on Computational Intelligence and Games (CIG), 2018

In this paper, we investigate the communication of status-related social signals by means of a virtual human’s eye gaze. We constructed a cross-domain verbal-conceptual computational model of gaze for virtual humans to facilitate the display of social status. We describe the validation of the model’s parameters, including the length of eye contact and gazes, movement velocity, equilibrium response, and head and body posture. In a first set of studies, conducted on Amazon Mechanical Turk using pre-recorded video clips of animated characters, we found statistically significant differences in how the characters’ status was rated as the gaze parameters varied. In a second step, based on these empirical findings, we designed an interactive system that incorporates dynamic eye tracking and spoken dialog, along with real-time control of a virtual character. We evaluated the model using a presential, interactive scenario of a simulated hiring interview. Corroborating our previous findings, the interactive study yielded significant differences in the perception of status (p = .046). We believe status is an important aspect of dramatic believability; accordingly, this paper presents our social eye gaze model for realistic procedurally animated characters and demonstrates its efficacy.
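A minimal sketch of the core idea, not the authors' implementation: mapping an abstract status level onto the kinds of gaze parameters the paper validates. All field names, ranges, and values below are illustrative assumptions.

```python
# Hypothetical sketch: status-to-gaze-parameter mapping. Field names and
# numeric ranges are illustrative assumptions, not the paper's values.
from dataclasses import dataclass

@dataclass
class GazeParameters:
    eye_contact_s: float      # how long the character holds eye contact
    gaze_away_s: float        # duration of gazes away from the interlocutor
    velocity_deg_s: float     # eye-movement velocity
    breaks_equilibrium: bool  # whether the character looks away first
    head_pitch_deg: float     # lowered vs. raised head posture

def gaze_for_status(status: float) -> GazeParameters:
    """Interpolate gaze behaviour between low (0.0) and high (1.0) status:
    here, higher-status characters hold eye contact longer, break mutual
    gaze less often, and keep the head raised."""
    return GazeParameters(
        eye_contact_s=1.0 + 3.0 * status,
        gaze_away_s=2.0 - 1.0 * status,
        velocity_deg_s=300.0,
        breaks_equilibrium=status < 0.5,
        head_pitch_deg=-10.0 + 20.0 * status,
    )
```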

Recommended citation: Nixon, M., DiPaola, S., & Bernardet, U. (2018). An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans. In Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games (CIG). Maastricht, The Netherlands: IEEE. https://michaelnixon.github.io/files/Nixon_DiPaola_Bernardet__eye-gaze-model.pdf

Saliency-Based Artistic Abstraction With Deep Learning and Regression Trees

Published in Journal of Imaging Science and Technology, 2017

Abstraction in art often reflects human perception: areas of an artwork that hold the observer’s gaze longest are generally the most detailed, while peripheral areas are abstracted, just as they are abstracted by the human visual process. The authors’ artistic abstraction tool, Salience Stylize, uses deep learning to predict the areas in an image that the observer’s gaze will be drawn to, which tells the system which areas to render in the most detail and which to abstract most heavily. The planar abstraction is performed by a Random Forest Regressor that splits the image into large planes and adds more detailed planes as it progresses, just as an artist starts with tonally limited masses and iterates to add fine details; the rendering is then completed with our stroke engine. The authors evaluated the aesthetic appeal and the effectiveness of the detail placement in the artwork produced by Salience Stylize through two user studies with 30 subjects.
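As a rough illustration of the planar-abstraction step only (the deep-learning saliency predictor and the stroke engine are out of scope here), the sketch below fits per-pixel colour as a function of pixel coordinates with scikit-learn's RandomForestRegressor: shallow trees yield large constant planes, deeper trees add finer ones, and a saliency map used as sample weights biases detail toward salient regions. The function name and parameter values are our assumptions, not the paper's code.

```python
# Sketch of saliency-weighted planar abstraction. Assumes a precomputed
# saliency map in [0, 1]; all parameter choices here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def planar_abstraction(image, saliency, depth=6, n_trees=10):
    """Fit per-pixel colour as a function of (x, y). Depth-limited trees
    partition the image into axis-aligned planes of roughly constant
    colour; saliency weights pull the finer splits toward regions
    predicted to draw the observer's gaze."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.column_stack([xs.ravel(), ys.ravel()])
    colours = image.reshape(-1, 3).astype(float)
    model = RandomForestRegressor(n_estimators=n_trees, max_depth=depth)
    model.fit(coords, colours, sample_weight=saliency.ravel() + 1e-3)
    return model.predict(coords).reshape(h, w, 3)

# Iterating from shallow to deep mimics the coarse-to-fine artistic process:
# passes = [planar_abstraction(img, sal, depth=d) for d in (2, 4, 8)]
```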

Recommended citation: Shakeri, H., Nixon, M., & DiPaola, S. (2017). Saliency-Based Artistic Abstraction With Deep Learning and Regression Trees. Journal of Imaging Science and Technology, 61(6), 060402. https://doi.org/10.2352/J.ImagingSci.Technol.2017.61.6.060402

Integrating Cognitive Architectures into Virtual Character Design

Published by IGI Global, 2016

Integrating Cognitive Architectures into Virtual Character Design presents emerging research on artificial intelligence systems for virtual characters and on the integration of cognitive architectures into virtual character design.

Recommended citation: Turner, J., Nixon, M., Bernardet, U., & DiPaola, S. (Eds.). (2016). Integrating Cognitive Architectures into Virtual Character Design. Hershey, PA: IGI Global. http://www.igi-global.com/book/integrating-cognitive-architectures-into-virtual/146983

M+M: A Novel Middleware for Distributed, Movement Based Interactive Multimedia Systems

Published in Proceedings of the 3rd International Symposium on Movement and Computing, 2016

Embodied interaction has the potential to provide users with uniquely engaging and meaningful experiences. m+m: Movement + Meaning middleware is an open source software framework that enables users to construct real-time, interactive systems that are based on movement data. The acquisition, processing, and rendering of movement data can be local or distributed, real-time or off-line. Key features of the m+m middleware are a small footprint in terms of computational resources, portability between different platforms, and high performance in terms of reduced latency and increased bandwidth. Examples of systems that can be built with m+m as the internal communication middleware include those for the semantic interpretation of human movement data, machine-learning models for movement recognition, and the mapping of movement data as a controller for online navigation, collaboration, and distributed performance.
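To illustrate the general pattern the middleware supports — streaming timestamped movement frames between distributed acquisition and rendering nodes — here is a deliberately generic sketch. This is not m+m's actual API; the frame format, port number, and function names are hypothetical, and m+m itself provides far more (discovery, portability, machine-learning integration).

```python
# Hypothetical illustration of distributed movement-data streaming.
# NOT m+m's API: the JSON frame format and port 9000 are assumptions.
import json, socket, time

def send_frames(host="127.0.0.1", port=9000):
    """Acquisition node: emits joint positions as small JSON datagrams,
    keeping per-frame overhead (and therefore latency) low."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(100):
        frame = {"t": time.time(), "joints": {"wrist": [0.1 * i, 0.0, 1.2]}}
        sock.sendto(json.dumps(frame).encode(), (host, port))
        time.sleep(1 / 60)  # a typical 60 Hz motion-capture rate

def receive_frames(port=9000):
    """Processing/rendering node: consumes frames as they arrive; it could
    equally feed a recognizer or map the data to a navigation controller."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _ = sock.recvfrom(4096)
        frame = json.loads(data)
        print(frame["t"], frame["joints"])
```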

Recommended citation: Bernardet, U., Adhia, D., Jaffe, N., Wang, J., Nixon, M., Alemi, O., ... Schiphorst, T. (2016). M+M: A Novel Middleware for Distributed, Movement Based Interactive Multimedia Systems. In Proceedings of the 3rd International Symposium on Movement and Computing (pp. 21:1-21:9). New York, NY, USA: ACM. http://doi.org/10.1145/2948910.2948942

Digitisation Fundamentals

Published in Doing Digital Humanities: Practice, Training and Research, 2016

Digital Humanities is rapidly evolving as a significant approach to teaching, learning, and research across the humanities. This is a first-stop book for people interested in getting to grips with digital humanities, whether as students or professors. The book offers a practical guide to the area as well as reflection on its main objectives and processes.

Recommended citation: Davies, R., & Nixon, M. (2016). Digitisation Fundamentals. In R. Siemens, R. Lane, & C. Crompton (Eds.), Doing Digital Humanities: Practice, Training and Research (pp. 163-176). London, UK: Routledge. (Invited) https://www.routledge.com/Doing-Digital-Humanities-Practice-Training-Research/Crompton-Lane-Siemens/p/book/9781138899445

SL-Bots: Automated and Autonomous Performance in Second Life

Published in New Opportunities for Artistic Practice in Virtual Worlds, 2015

Although virtual worlds continue to grow in popularity, substantial research is needed to determine best practices in virtual spaces. The arts are one field in which virtual worlds can be utilized to the greatest effect.

Recommended citation: Turner, J. O., Nixon, M., & Bizzocchi, J. (2015). SL-Bots: Automated and Autonomous Performance in Second Life. In D. Doyle (Ed.), New Opportunities for Artistic Practice in Virtual Worlds (pp. 263-289). Hershey, PA: IGI Global. (Editor reviewed) https://www.igi-global.com/book/new-opportunities-artistic-practice-virtual/123122 https://michaelnixon.github.io/files/Turner_Nixon_Bizzocchi__SL_Bots_Chapter_August_27_2014.pdf

The Role of Micronarrative in the Design and Experience of Digital Games

Published in Proceedings of Digital Games Research Association Conference (DiGRA), 2013

Recommended citation: Bizzocchi, J., Nixon, M., DiPaola, S., & Funk, N. (2013). The Role of Micronarrative in the Design and Experience of Digital Games. Proceedings of Digital Games Research Association Conference (DiGRA), Atlanta, Georgia, 16pp. http://www.digra.org/digital-library/publications/the-role-of-micronarrative-in-the-design-and-experience-of-digital-games/

Press X for Meaning: Interaction Leads to Identification in Heavy Rain

Published in Proceedings of Digital Games Research Association Conference (DiGRA), 2013

Recommended citation: Nixon, M., & Bizzocchi, J. (2013). Press X for Meaning: Interaction Leads to Identification in Heavy Rain. Proceedings of Digital Games Research Association Conference (DiGRA), Atlanta, Georgia, 14pp. http://www.digra.org/digital-library/publications/press-x-for-meaning-interaction-leads-to-identification-in-heavy-rain/

DelsArtMap: Applying Delsarte’s Aesthetic System to Virtual Agents

Published in 10th International Conference on Intelligent Virtual Agents, 2010

Recommended citation: Nixon, M., Pasquier, P., & Seif El-Nasr, M. (2010). DelsArtMap: Applying Delsarte's Aesthetic System to Virtual Agents. In Lecture Notes in Computer Science (Vol. 6356, pp. 139-145). Presented at the 10th International Conference on Intelligent Virtual Agents, Philadelphia: Springer. http://michaelnixon.github.io/files/delsartmap.pdf

Believable Characters

Published in Handbook of Multimedia for Digital Entertainment and Arts, 2009

The interactive entertainment industry is one of the fastest growing industries in the world. In 1996, the U.S. entertainment software industry reported $2.6 billion in sales revenue; by 2007 this figure had more than tripled, reaching $9.5 billion [1]. In addition, gamers, the target market for interactive entertainment products, now reach beyond the traditional 8–34 year old male to include women, Hispanics, and African Americans [2]. This trend has been observed in several markets, including Japan, China, Korea, and India, which has just published its first international AAA title (defined as a high-quality game with a high budget), a 3D third-person action game: Ghajini – The Game [3]. The topic of believable characters is becoming a central issue when designing and developing games for today’s game industry. While narrative and character were once considered secondary to game mechanics, games are now evolving to integrate characters, narrative, and drama as part of their design. One can see this pattern in the emergence of games like Assassin’s Creed (published by Ubisoft, 2008), Hotel Dusk (published by Nintendo, 2007), and the Prince of Persia series (published by Ubisoft), which emphasize character and narrative as part of their design.

Recommended citation: Seif El-Nasr, M., Bishko, L., Zammitto, V., Nixon, M., Vasilakos, A. V., & Wei, H. (2009). Believable Characters. In B. Furht (Ed.), Handbook of Multimedia for Digital Entertainment and Arts (pp. 497-528). New York, NY: Springer US. (Editor reviewed) http://www.springer.com/gp/book/9780387890234