RESEARCH PRODUCT
Automated Characterization of Mouth Activity for Stress and Anxiety Assessment
Anastasia Pampouchidou, Manolis Tsiknakis, Fan Yang, Panagiotis G. Simos, Fabrice Meriaudeau, Kostas Marias, Franco Chiarugi, M. Pediaditis

subject
Computer Science [cs]; Computer Vision and Pattern Recognition [cs.CV]; Artificial Intelligence [cs.AI]; Image Processing [eess.IV]; Optics/Photonics [SPI.OPTI]; speech recognition; feature extraction; automatic assessment; image processing; mouth gesture recognition; facial expression; yawn; correlation; robustness; stress; anxiety

description
International audience; Non-verbal information conveyed by human facial expressions encompasses, apart from emotional cues, information relevant to psychophysical status. Mouth activities in particular have been found to correlate with signs of several conditions: depressed people smile less, while fatigued people yawn more. In this paper, we present a semi-automated, robust and efficient algorithm for extracting mouth activity from video recordings based on Eigen-features and template matching. The algorithm was evaluated for mouth openings and mouth deformations on a minimum-specification dataset of 640×480 resolution at 15 fps. The extracted features were the signals of mouth expansion (openness estimation) and correlation (deformation estimation). The achieved classification accuracy reached 89.17%. A second series of experiments, for a preliminary evaluation of the proposed algorithm in assessing stress/anxiety, was conducted on an additional dataset. The proposed algorithm showed consistent performance across both datasets, which indicates high robustness. Furthermore, normalized openings per minute and average openness intensity were extracted as video-based features, yielding a significant difference between video recordings of stressed/anxious versus relaxed subjects.
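The abstract describes a correlation (template-matching) signal for deformation estimation and a normalized openings-per-minute feature. The sketch below illustrates those two ideas only; it is not the paper's Eigen-feature pipeline, and the correlation threshold and helper names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def normalized_cross_correlation(template, patch):
    """Pearson correlation between a mouth template and a candidate patch.

    A drop in correlation signals mouth deformation (e.g. an opening).
    This is only a sketch of the template-matching idea the paper builds on.
    """
    t = template.astype(float) - template.mean()
    p = patch.astype(float) - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    if denom == 0:
        return 0.0
    return float((t * p).sum() / denom)

def opening_flags(correlations, threshold=0.8):
    """Per-frame opening indicator from the correlation signal.

    `threshold` is an illustrative value, not taken from the paper.
    """
    return [c < threshold for c in correlations]

def openings_per_minute(flags, fps=15.0):
    """Normalized openings per minute: count rising edges in the indicator
    and scale by video duration (fps=15 matches the dataset's frame rate)."""
    edges = sum(1 for prev, cur in zip([False] + flags[:-1], flags)
                if cur and not prev)
    minutes = len(flags) / fps / 60.0
    return edges / minutes if minutes else 0.0
```

A per-frame correlation trace would be thresholded into `opening_flags`, from which `openings_per_minute` gives one of the video-level features compared between stressed and relaxed subjects.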
year | journal | country | edition | language
---|---|---|---|---
2016-10-04 | | | |