Year: 2010-
Member:
Takashi Isezaki
Kenji Suzuki
Partners:
Private Company (Cosmetics)
Tags:
- Cognitive Robotics
- Cybernics
- Augmented Human

 
3D Face Analysis
Depth Image based Analysis of Facial Expressions

 

We propose a method for the spatial and time-series analysis of facial expressions using real-time depth images, in particular for measuring head orientation and for the dynamic analysis of facial expression. The method is based on an evaluation of the three-dimensional shape of the face and can assess facial expressions independently of head orientation. It aims to extract psychological and physiological features from the physical features of facial shape. This research focuses on the characteristics of the transitions observed during changes of facial expression, using dynamic spatial and time-series analysis. This page describes the development of the sensing system, the experimental results of head orientation measurement, and the results of the facial expression analysis.
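As a rough illustration of how a facing direction can be recovered from a single depth frame, the sketch below back-projects the depth image into a 3D point cloud and fits a plane to the facial surface, taking the plane normal as the head orientation. This is a minimal example under our own assumptions (illustrative camera intrinsics, PCA-based plane fitting, a synthetic test frame), not the method reported in the paper.

import numpy as np

def depth_to_points(depth, fx=570.0, fy=570.0, cx=320.0, cy=240.0):
    # Back-project a depth image (in metres) into a 3D point cloud.
    # The intrinsics are illustrative defaults, not calibrated values.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]   # drop invalid (zero-depth) pixels

def head_orientation(face_points):
    # Coarse facing direction: normal of the best-fit plane through the
    # facial surface points (direction of least variance from PCA/SVD).
    centered = face_points - face_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] > 0:                   # make the normal point toward the camera
        normal = -normal
    yaw = np.degrees(np.arctan2(normal[0], -normal[2]))
    pitch = np.degrees(np.arctan2(normal[1], -normal[2]))
    return yaw, pitch

# Synthetic frontal face patch at 0.8 m as a quick sanity check:
depth = np.zeros((480, 640))
depth[180:300, 260:380] = 0.8
print(head_orientation(depth_to_points(depth)))   # approximately (0.0, 0.0)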

Our objective in this research was to quantify the transitional processes between facial expressions by performing dynamic facial expression analysis based on an evaluation of face shape. We therefore propose the direct acquisition of face shape with a range image sensor. The proposed method can be used in real-life situations and can obtain physical feature information that does not depend on psychological features. We then investigated the relationships among the physical, physiological, and psychological features of natural facial expressions.

We compared the accuracy of the head orientation measurement of the proposed method with that of motion capture and conventional 2D image processing. Although our method uses only a limited set of representative feature vectors, we obtained positive results. Additional feature vectors were considered to further improve performance. For example, recognizing the eyes from the 3-dimensional shape alone is difficult; however, the eyes can be recognized from the depth image with a simple image processing step. In future studies, we will investigate further feature vectors that do not increase the computational cost.
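A per-frame comparison against motion-capture directions reduces to a simple angular error computation; the short sketch below shows one way to compute it. The sample direction vectors and the 3D-vector representation of the motion-capture output are hypothetical.

import numpy as np

def angular_error_deg(estimated, reference):
    # Angle in degrees between an estimated facing direction and a
    # reference direction (e.g. derived from motion-capture markers).
    a = np.asarray(estimated, float) / np.linalg.norm(estimated)
    b = np.asarray(reference, float) / np.linalg.norm(reference)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# Hypothetical per-frame facing directions from the two systems:
depth_estimates = [(0.05, 0.00, -1.0), (0.10, 0.02, -1.0)]
mocap_reference = [(0.04, 0.01, -1.0), (0.12, 0.00, -1.0)]
errors = [angular_error_deg(e, r)
          for e, r in zip(depth_estimates, mocap_reference)]
print(f"mean error: {np.mean(errors):.2f} deg")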

In the dynamic facial expression analysis experiment, we focused on smiling. We were able to obtain smile volume and velocity data for each subject. In addition to these data, other shape features, such as the angle of the mouth, can be quantified. By comparing the analysis results with each subject's own subjective evaluation, we can describe the components of a "true smile". Thus, the proposed method can be used to analyze the relationships among physical facial features, psychological features such as impressions, and physiological features.
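The exact definitions of smile volume and velocity are not reproduced here; the sketch below only illustrates one plausible reading, treating volume as the integrated depth displacement of the mouth region relative to a neutral-expression frame and velocity as its frame-to-frame rate of change. The function names, the mouth mask, and the sample frames are illustrative assumptions.

import numpy as np

def smile_volume(frame, neutral, mouth_mask, pixel_area=1.0):
    # Illustrative "smile volume": integrated depth displacement of the
    # mouth region relative to a neutral reference frame (assumption,
    # not the paper's exact measure).
    diff = (neutral - frame) * mouth_mask   # deformation toward the camera
    return np.sum(np.clip(diff, 0, None)) * pixel_area

def smile_velocity(volumes, fps=30.0):
    # Frame-to-frame rate of change of the smile volume time series.
    return np.diff(volumes) * fps

# Hypothetical 3-frame sequence over a 2x2 mouth region:
neutral = np.full((2, 2), 0.80)
frames = [neutral - d for d in (0.000, 0.002, 0.005)]   # growing deformation
mask = np.ones((2, 2))
vols = [smile_volume(f, neutral, mask) for f in frames]
print(vols, smile_velocity(vols))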


 


This work is partly supported by Grants-in-Aid for Scientific Research, MEXT, Japan.

     
Publications
  • Isezaki, T. and Suzuki, K., "Depth Image based Analysis of Facial Expressions and Head Orientation," Proc. of IEEE International Conference on Systems, Man, and Cybernetics, pp. 2537-2542, 2011.
   
     