KEYWORDS:
ABSTRACT:
The purpose of the facial expression branch of the
multimedia project at KGS is to give automatic feedback to a person in
a training situation, based on his or her facial expressions.
The automation possibilities of this branch are analysed. The
conclusion is that no single method can solve all the problems of
all the processes; different methods should be combined to yield
the best results.
Based on this analysis, a model is developed
that helps combine the results of different methods. In this model
the total automation process is divided into three subprocesses: measurement,
coding and interpretation.
Part of the coding process was implemented in an
expert system. The coding was based on a coding system developed for human
observers, the Facial Action Coding System (FACS). Based on this implementation
and the overall analysis, all steps of the coding process were derived. Through
these steps, all kinds of measurements of visible facial action are converted
into a code based on the facial muscles. These steps were incorporated
in the model.
The model makes clear which aspects of facial information
are wanted for interpretation, which aspects should be left out and how
this can be done.
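The three subprocesses of the model can be pictured as a simple pipeline. The following is only an illustrative sketch, not an implementation from the thesis: all function names, the feature names, the threshold, and the toy rule table mapping features to FACS action units (AUs) are hypothetical.

```python
# Hypothetical sketch of the model's pipeline:
# measurement -> coding -> interpretation.

def measure(frame):
    # Measurement: extract visible facial actions from an image frame.
    # Stubbed here with fixed feature displacements (illustrative values).
    return {"brow_raise": 0.8, "lip_corner_pull": 0.1}

def code(measurements, threshold=0.5):
    # Coding: convert measurements into FACS-style action units.
    # The rule table below is a toy example, not the thesis's coding rules.
    rules = {"brow_raise": "AU1", "lip_corner_pull": "AU12"}
    return [au for feature, au in rules.items()
            if measurements.get(feature, 0.0) >= threshold]

def interpret(action_units):
    # Interpretation: map action-unit combinations to a feedback label.
    if "AU12" in action_units:
        return "smile"
    if "AU1" in action_units:
        return "surprise"
    return "neutral"

aus = code(measure(None))
print(aus, interpret(aus))  # ['AU1'] surprise
```

The point of the split is that each stage can be served by a different method (or a combination of methods), with the FACS-based code acting as the shared intermediate representation between measurement and interpretation.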