Crowdsourcing to Assess Perceptual Speech Outcomes: A Systematic Review
Anne M. Sescleifer, BS, Caitlin A. Francoisse, MD, Alexander Y. Lin, MD, FACS.
Saint Louis University School of Medicine, St. Louis, MO, USA.
PURPOSE: Speech therapy is an integral part of the post-operative care of children with cleft palate. However, the high cost and limited accessibility of speech experts can serve as a barrier to care and research for these patients. Online crowdsourcing platforms, such as Amazon Mechanical Turk, offer a new alternative for assessing speech outcomes in both clinical and research settings. Crowdsourcing, defined here as the use of lay people as raters, allows for rapid and large-scale data collection. This systematic review examines the current literature demonstrating the use of online crowdsourcing to evaluate perceptual speech outcomes.
METHODS: Terms related to “crowdsourcing” and “speech” were searched on PubMed, Scopus, CINAHL, Cochrane CENTRAL, and PsycINFO on August 16, 2017, returning 2,812 unique articles. A preliminary title weed of these 2,812 articles yielded 140 abstracts for review. Inclusion and exclusion criteria concentrated on online crowdsourcing of perceptual speech outcomes (specifically pronunciation and articulation assessments), yielding 35 full-text articles, of which 8 met all inclusion criteria for this review.
RESULTS: The literature search returned 3,860 records, comprising 2,812 unique articles. Two independent raters conducted an abstract weed (IRR = 0.971, Cohen’s kappa = 0.922) and a full-text weed. Eight studies consisting of 14 unique speech surveys were included. All studies used Amazon Mechanical Turk as a platform for recruiting online crowd workers, and one used an additional online crowdsourcing site (CrowdFlower). Speech was provided by 370 speakers previously identified as having accented speech or a speech disorder, each producing between 1 and 367 speech samples, for a total of 55,765 samples. In total, over 300,000 ratings were collected from 2,203 crowd workers. Seven studies examined concordance with a gold standard or expert rating, and all concluded that crowdsourced ratings were highly comparable to currently accepted measures. Data collection time ranged from 7 to 23 hours, with worker payments ranging from $0.05 to $1.75 per task. Studies examined 3 major topics: child pronunciation of the /r/ sound, dysarthria in Parkinsonian speech, and articulation of English words produced by non-native speakers learning English as a second language.
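The inter-rater reliability statistics above (IRR and Cohen’s kappa) measure how often two screeners agreed, with kappa correcting for agreement expected by chance. The underlying screening data are not published here; as a minimal illustrative sketch with hypothetical include/exclude decisions, kappa for two raters can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters gave the same label
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two reviewers screening 10 abstracts (1 = include, 0 = exclude)
a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
b = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(round(cohens_kappa(a, b), 3))  # → 0.737
```

A kappa of 0.922, as reported for the abstract weed, indicates near-perfect agreement on common interpretive scales.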
CONCLUSION: Online crowdsourcing for perceptual speech outcomes has been shown to be highly concordant with expert opinion. Crowdsourced patient speech samples could serve as a clinical screening tool, allowing clinicians to assess patients’ relative needs and specific problems in advance of an encounter. This could translate to better allocation of clinical resources, ensuring access to care for those who need it most. Moreover, online crowdsourcing offers a lower-cost alternative to traditional speech data collection. Its potential as a clinical and research assessment tool could translate to improved access and quality of care for these children.