
title: The automatic recognition of facial expressions
author: Simon Chenet
published in: 2006
appeared as: Bachelor of Science thesis, Knowledge Based Systems group, Delft University of Technology

Abstract
People communicate with each other through spoken words and nonverbal behavior. Verbal communication is used to convey objective information, whereas nonverbal communication is used to convey subjective and affective information.
When people communicate with each other, confusion and misunderstandings can arise for a number of reasons.
For verbal communication, a dictionary can be consulted to look up the spelling of a word, sometimes its phoneme representation, its meaning in different contexts, and rules of transformation.
Facial expressions play an important role in human communication; the contours of the mouth, eyes and eyebrows are particularly informative for classifying them.
The automatic recognition of facial expressions is a difficult problem because of changing lighting conditions, posture and occlusion. Several techniques have been developed in the past, such as using templates or splines to find the contours of the mouth, or locating characteristic points around those contours and using a classifier to assign facial expressions to predefined classes (i.e. happiness, sadness, disgust, fear, anger and surprise).
Another method is to use optical flow in video recordings of facial expressions. A promising method for classifying facial expressions in still pictures builds on the ideas of Viola and Jones: extract a large set of simple features from a picture and let a classifier select the most discriminative ones to assign faces to predefined classes.
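The Viola and Jones idea mentioned above rests on evaluating many simple rectangular ("Haar-like") features cheaply via an integral image. The following is a minimal illustrative sketch of that mechanism only, not the thesis's implementation; the function names and the toy image are assumptions for illustration, and the feature-selection step (boosting) is omitted.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and to the
    left of (y, x), inclusive (cumulative sum over both axes)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of pixels in the h-by-w rectangle with top-left corner
    (y, x), computed with at most four integral-image lookups."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def two_rect_feature(ii, y, x, h, w):
    """A basic two-rectangle Haar-like feature: sum of the left
    half minus sum of the right half (w must be even)."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

if __name__ == "__main__":
    # Toy 4x4 "image": dark left half, bright right half.
    img = np.zeros((4, 4), dtype=np.int64)
    img[:, 2:] = 1
    ii = integral_image(img)
    print(rect_sum(ii, 0, 0, 4, 4))        # total intensity: 8
    print(two_rect_feature(ii, 0, 0, 4, 4))  # left minus right: -8
```

Because each feature costs only a handful of lookups regardless of rectangle size, a classifier can afford to scan thousands of candidate features and keep the most promising ones.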
The goal of the project is to design and implement an algorithm that classifies a grey-level picture of a front-view facial expression into predefined classes (i.e. the 6 basic emotions), without any human pre-processing of the picture.