Abstract

Designing systems that can interact with people is a complex process. An important aspect of this problem is understanding human emotions and responding to them in a human-like way. The fact that people themselves often have difficulty recognizing emotions correctly makes the task even more challenging. There are currently numerous robust and well-functioning systems that can detect human faces and locate the eyes, nose and mouth. However, these systems lack the so-called meta-information, a detailed description of the face that can lead to a deeper understanding of facial expressions. This information should not be underestimated: facial expressions carry a large share of the non-verbal information exchanged in human communication. A system that can automatically detect human emotion would be useful in fields such as human-computer interaction, psychology and sociology. Such a system would enable automated analysis of stress, vertigo or aggression levels. Moreover, it would also be useful in monitoring public spaces, resulting in higher security. The aim of this project is to design and implement a robust system that can recognize and analyze emotions from human faces. The system should be fully automated, so that the user does not need to set up any parameters for it to run correctly. The expected output is a textual description of the recognized emotion. The analyzed face parameters are also forwarded to the animation component, where the facial expression is animated on an avatar.
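
Purely as an illustration of the kind of fully automated face-to-emotion flow outlined above (and not the method developed in the thesis), the following Python sketch assumes OpenCV's bundled Haar-cascade face detector and a hypothetical classify_emotion() stub standing in for the actual interest-point analysis and the hand-off to the animation component.

# Illustrative sketch only: face detection via OpenCV's stock Haar cascade,
# with a hypothetical classify_emotion() placeholder for the expression
# analysis stage described in the abstract.
import cv2


def classify_emotion(face_gray):
    # Hypothetical placeholder: a real system would locate facial interest
    # points (eyes, nose, mouth) in the cropped face and map their
    # configuration to an emotion label.
    return "neutral"


def describe_emotions(image_path):
    # Detect frontal faces with the Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Produce one textual emotion description per detected face.
    return [classify_emotion(gray[y:y + h, x:x + w]) for (x, y, w, h) in faces]


if __name__ == "__main__":
    print(describe_emotions("portrait.jpg"))

In a complete pipeline, the parameters extracted for each face would also be forwarded to the animation component so that the recognized expression can be reproduced on an avatar.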

Reference

Byrtus, M. (2016). Avatar control by automatically detected face interest points [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2016.26204