Facial Expression Mapping
MiFace is part of an ongoing research and software development project designed to expand the repertoire of facial expressions mapped to parameterized movements of the human face. The 3D model of a human face and head in this repository provides individual blend shapes based on the Facial Action Coding System (FACS), which in turn is grounded in craniofacial anatomy. With scripts such as the MEL (Maya Embedded Language) examples included in this repo, simulated facial expressions can be generated computationally. Opening the model and running the scripts requires an installation of Autodesk Maya. FACS movements are accurately represented and reasonably realistic. The model includes a low-resolution texture UV mapped to the surface of the face. The eyes, jaw, and head can also be manipulated independently to generate a broader range of nonverbal behaviors. Because the model emulates muscle activations, specifically the Action Units (AUs) defined by FACS, a broad range of potential expressions can be rapidly and reliably generated for testing.
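The sketch below illustrates how an AU-based expression might be posed from a MEL script by setting blend shape weights and adjusting the head. The node, target, and control names used here (blendShape1, AU12_LipCornerPuller, AU6_CheekRaiser, head_ctrl) are illustrative assumptions rather than the actual names in the repository scene; substitute the target names exposed on the model's blendShape node.

```mel
// Minimal sketch of posing an AU-based expression from MEL.
// All node and attribute names here (blendShape1, AU12_LipCornerPuller,
// AU6_CheekRaiser, head_ctrl) are illustrative assumptions; substitute the
// actual target names exposed on the model's blendShape node.

// Activate AU12 (lip corner puller) at 80% and AU6 (cheek raiser) at 50%,
// a combination commonly associated with a Duchenne smile.
setAttr "blendShape1.AU12_LipCornerPuller" 0.8;
setAttr "blendShape1.AU6_CheekRaiser" 0.5;

// Add a slight relative head tilt as an additional nonverbal cue.
rotate -r 0 0 10 "head_ctrl";

// Render the posed expression through the default perspective camera.
render "persp";
```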
The MiFace model has been used to generate candidate facial expressions for studies of social signal processing with crowdsourced participants recruited through Amazon Mechanical Turk. In these studies, meaningful expressions are identified and assigned natural language labels denoting their associated social or emotional signal values. Applications of a rich facial expression lexicon include improving the functionality of automatic recognition systems, supporting the development of affective virtual humans, and aiding diagnosis and treatment in psychology.
A paper outlining preliminary study results, along with an appendix detailing the identified expressions and their AU configuration <-> label mappings, can be found here. Companion natural language processing code for analyzing free-response expression label sets can be found in the FreeRes-nlp project.