Animating digital characters plays an important role in computer-assisted experiences, from video games to movies to interactive robotics. A critical component of digital character interaction is the animation of the human face. Here we explore a data-driven method to produce variation in animated smiles. We define a low-dimensional parameter space for learning based on key feature points of the face, which generalizes to arbitrary digital models. We perform a large-scale user study to annotate a systematic sweep of faces, and train a non-parametric classifier to predict the level of perceived happiness. This model is tuned to balance precision against the variation in its predictions. New happy faces are then sampled from this model, resulting in a variety of generated faces that display a targeted level of happiness. This diversity allows rich interactions with digital characters to be built automatically, without the need for hand-crafted expressions.
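The pipeline the abstract describes (a non-parametric classifier over low-dimensional face parameters, then sampling faces predicted to show a target happiness level) can be illustrated with a minimal sketch. This is not the authors' implementation: the k-nearest-neighbor classifier, the uniform rejection sampling, and all function names and parameter ranges here are illustrative assumptions.

```python
import math
import random

def knn_predict(train, query, k=3):
    """Predict a perceived-happiness label for a face parameter vector
    by majority vote among the k nearest annotated examples.
    (A simple stand-in for the paper's non-parametric classifier;
    `train` is a list of (feature_vector, label) pairs.)"""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

def sample_happy_faces(train, target_level, n, dim, rng):
    """Rejection-sample random parameter vectors, keeping only those
    the classifier predicts to show the target happiness level.
    (Hypothetical sampling scheme; the parameter range [0, 1] is an
    assumption, not taken from the paper.)"""
    faces = []
    while len(faces) < n:
        candidate = [rng.uniform(0.0, 1.0) for _ in range(dim)]
        if knn_predict(train, candidate) == target_level:
            faces.append(candidate)
    return faces
```

Because every accepted sample must merely match the target label, any face in the classifier's accepted region is admissible, which is what yields variation among the generated expressions rather than a single prototype smile.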
Sohre, Nick; Adeagbo, Moses; Guy, Stephen.
Data-Driven Variation for Virtual Facial Expressions.
Retrieved from the University of Minnesota Digital Conservancy,