Detecting Depression Severity by Interpretable Representations of Motion Dynamics


Bibliographic Details
Published in: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 739-745
Main Authors: Kacem, Anis; Hammal, Zakia; Daoudi, Mohamed; Cohn, Jeffrey
Format: Conference Proceeding
Language: English
Published: IEEE, 01-05-2018
Description
Summary: Recent breakthroughs in deep learning using automated measurement of face and head motion have made possible the first objective measurement of depression severity. While powerful, deep learning approaches lack interpretability. We developed an interpretable method of automatically measuring depression severity that uses barycentric coordinates of facial landmarks and a Lie-algebra-based rotation matrix of 3D head motion. From these representations, kinematic features are extracted, preprocessed, and encoded using Gaussian Mixture Models (GMM) and Fisher vector encoding. A multi-class SVM classifies the encoded facial and head movement dynamics into three levels of depression severity. The proposed approach was evaluated in adults with a history of chronic depression. The method approached the classification accuracy of state-of-the-art deep learning while enabling clinically and theoretically relevant findings. The velocity and acceleration of facial movement mapped strongly onto depression severity in a manner consistent with clinical data and theory.
DOI:10.1109/FG.2018.00116
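The encoding pipeline named in the summary (frame-level kinematic features pooled by a GMM with Fisher vector encoding, then classified with a multi-class SVM) can be sketched as follows. This is a minimal illustration with random stand-in data, not the authors' implementation: the feature dimensions, GMM size, and normalization choices here are assumptions, and scikit-learn components are used in place of whatever toolchain the paper employed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

# Toy stand-in for per-frame kinematic features (e.g., landmark velocity
# and acceleration); shapes and values are illustrative only.
rng = np.random.default_rng(0)
n_clips, frames, dim = 30, 40, 6
X = rng.normal(size=(n_clips, frames, dim))
y = rng.integers(0, 3, size=n_clips)        # three severity levels

# Fit a diagonal-covariance GMM on all frame-level features pooled together.
K = 4
gmm = GaussianMixture(n_components=K, covariance_type="diag", random_state=0)
gmm.fit(X.reshape(-1, dim))

def fisher_vector(clip, gmm):
    """Fisher vector of one clip: gradients of the GMM log-likelihood
    with respect to component means and (diagonal) variances."""
    T = clip.shape[0]
    q = gmm.predict_proba(clip)                           # (T, K) posteriors
    pi, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    diff = (clip[:, None, :] - mu[None]) / np.sqrt(var)[None]   # (T, K, D)
    g_mu = (q[:, :, None] * diff).sum(0) / (T * np.sqrt(pi)[:, None])
    g_var = (q[:, :, None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * pi)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])         # length 2*K*D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)              # L2 normalization

# One fixed-length Fisher vector per clip, then a multi-class SVM
# (scikit-learn's SVC handles multi-class via one-vs-one by default).
fvs = np.array([fisher_vector(clip, gmm) for clip in X])
clf = SVC(kernel="linear").fit(fvs, y)
print(fvs.shape)                                          # (30, 48)
```

The power and L2 normalization steps are the standard "improved Fisher vector" practice; whether the paper applies them is not stated in the record.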