Plastic Surgery Research Council

Back to 2019 Abstracts


Emotional Evaluation of Outcomes in Facial Reanimation Surgery Using Artificial Intelligence
Thanapoom Boonipat, MD1, Malke Assad, MD1, Jason Lin, BS1, Mitchell Stotland, MD2, Samir Mardini, MD1.
1Mayo Clinic, Rochester, MN, USA, 2Sidra Medical, Doha, Qatar.

PURPOSE:
Facial reanimation surgery outcome assessment has primarily relied on traditional methods such as subjective scoring and measurement of symmetry. These methods provide no information about emotional expression, which is the key function of the face and the ultimate goal of facial reanimation surgery. Advances in deep learning and artificial intelligence have produced software that can quantify the proportion of different emotions conveyed during a facial expression. Using this type of software analysis, we evaluated the outcomes of facial reanimation surgery on the emotions expressed by a sample of smiling patients. We believe this instantaneous evaluation of emotional expression will lead to a more meaningful assessment of patient function and outcomes.

METHODS:
Video clips of patients who underwent facial reanimation surgery were analyzed preoperatively and at least 6-7 months postoperatively. FaceReader software (Noldus Information Technology, Wageningen, The Netherlands) was used to analyze 2-second videos, recorded pre- and postoperatively, in which patients were instructed to smile without showing teeth. Neutral videos of the patients and smile/neutral videos of healthy volunteers were also analyzed as controls. The software classifies facial expressions using an artificial neural network trained on over 10,000 images manually annotated by trained experts. It detects signs of the 6 cardinal expressions of emotion and reports the proportion of each emotion expressed by any given facial movement.

RESULTS:
Demographic data and results are shown in Exhibit 1. The average happy emotion detected in neutral videos increased from 2.47% to 3.19% (p=0.94) pre- to postoperatively, and in smile videos it increased significantly (p=0.0006). Bilateral reconstructions had lower scores (average 2.85% happy preoperatively vs 10% postoperatively with smile) than left-sided (10.9% to 44%) and right-sided (12% to 35%) reconstructions. Gracilis muscle transfer had the best reconstructive outcomes, while temporalis transfer had almost equally good outcomes (Exhibit 1). Neutral emotion comprised the majority of the other emotion detected during smile (60% preoperatively, 40.7% postoperatively, across all patients combined). Eight normal subjects aged 26-28 were also filmed during smile and neutral expression, and their videos analyzed: happy emotion was 53% (SD 21.2) during smile vs 0.7% (SD 2) during neutral, compared with an average of 38% happy detected during smile following gracilis transfer reanimation. There was good correlation between the pre- to postoperative changes in Terzis score and in happy emotion (Exhibit 1; Spearman correlation 0.5127, p=0.0208).

CONCLUSION:
This study demonstrates the utility of facial emotion recognition software for analyzing the outcomes of facial reanimation surgery. We show good correlation between the increase in happy emotion detected during smile and the increase in Terzis score in facial reanimation patients. This tool can be used to give instantaneous feedback to both surgeons and patients, to enable outcomes studies on a larger scale, and to inform surgeons about the subtle facial movements that increase expressivity of emotion during a smile.
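As an illustration of the statistic reported above, a Spearman rank correlation compares per-patient pre-to-post changes in Terzis score against changes in the detected happy proportion. The sketch below uses only hypothetical placeholder values (the study's per-patient data are not given in this abstract); a full analysis would also derive the p-value, typically via statistical software.

```python
# Illustrative sketch (hypothetical data): Spearman rank correlation
# between per-patient changes in Terzis score and changes in the
# "happy" proportion reported by emotion-recognition software.

def rank(values):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    ordered = sorted(values)
    return [ordered.index(v) + 1 + (ordered.count(v) - 1) / 2 for v in values]

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical pre-to-post changes for 8 patients (NOT study data).
terzis_change = [1.0, 2.0, 0.5, 3.0, 1.5, 2.5, 0.0, 2.0]    # Terzis score points
happy_change = [5.2, 30.1, 2.0, 41.0, 8.7, 22.3, 1.1, 15.0]  # percentage points

rho = spearman(terzis_change, happy_change)
print(f"Spearman rho = {rho:.4f}")
```

Rank-based correlation is appropriate here because the Terzis score is ordinal and the sample is small, so no assumption of linearity or normality is required.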

