Predicting Neurological Deficits with Gaze
Description: Gaze patterns in individuals with Autism Spectrum Disorder (ASD) have been studied extensively, as atypical eye-gaze behavior is among the diagnostic signs. For example, Yaneva et al. (2015) built an ASD corpus by recording gaze data while individuals with ASD performed web-related tasks; the focus of that research was to improve web accessibility for people with ASD and to work towards ASD detection using gaze information. Regneri and King (2016) leveraged gaze data for discourse analysis of individuals with and without ASD, with the main objective of evaluating text cohesion across groups. Information from gaze patterns has helped researchers better understand underlying aspects of language comprehension and production in individuals with neurological deficits such as ASD.
In this project, we aim to use machine learning approaches to build on such previous work by psychologists and neuroscientists. Given the high variability and scarcity of resources in this domain, the main objective is to continue building a path towards assistive interactive technologies for individuals with ASD. The task is to classify ASD from gaze data; the research question is whether individuals can be classified according to their gaze patterns. As eye-tracking data can be sparse and labor-intensive to collect, particularly for novel groups, we propose to extract the data using OpenFace (Amos et al., 2016). The data can be crawled from, for example, YouTube videos of people with ASD, sampling varied age groups so that subtypes can be binned. The goal is to classify ASD from gaze information using an ML approach, and then to conduct an in-depth analysis of how gaze patterns vary with respect to variables such as age.
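As a minimal sketch of the classification step, the pipeline could look like the following, assuming the face-analysis tool has already produced per-frame gaze-angle estimates for each subject (e.g. yaw and pitch per video frame). The aggregation features (mean, variability, range of gaze angles) and the scikit-learn logistic-regression classifier are illustrative assumptions, not a prescribed method; in practice richer features and models would be explored.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def aggregate_gaze_features(frames: np.ndarray) -> np.ndarray:
    """Collapse per-frame gaze angles into a fixed-length subject vector.

    frames: array of shape (n_frames, 2) holding (yaw, pitch) per frame.
    Returns mean, standard deviation, and range of each angle (6 values).
    """
    return np.concatenate([
        frames.mean(axis=0),                       # average gaze direction
        frames.std(axis=0),                        # gaze variability
        frames.max(axis=0) - frames.min(axis=0),   # gaze range
    ])

def classify_gaze(subject_frames, labels, seed=0):
    """Cross-validated binary classification (e.g. ASD vs. control).

    subject_frames: list of (n_frames_i, 2) arrays, one per subject.
    labels: binary labels, e.g. 1 = ASD, 0 = control.
    Returns the mean 5-fold cross-validation accuracy.
    """
    X = np.stack([aggregate_gaze_features(f) for f in subject_frames])
    clf = LogisticRegression(max_iter=1000, random_state=seed)
    return cross_val_score(clf, X, np.asarray(labels), cv=5).mean()
```

Per-subject aggregation sidesteps the varying video lengths across crawled clips; a sequence model over raw frame-level gaze trajectories would be a natural next step.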
Supervisor: Ekta Sood
Distribution: 20% Literature, 30% Data Collection, 30% Implementation, 20% Data Analysis and Evaluation.
Requirements: Interest in cognitive science (particularly human visual perception) and assistive technologies, familiarity with data processing and analysis/statistics, and experience with machine learning. Exposure to frameworks such as TensorFlow, PyTorch, or Keras would also be helpful.
Literature: Brandon Amos, Bartosz Ludwiczuk, and Mahadev Satyanarayanan. 2016. OpenFace: A general-purpose face recognition library with mobile applications. Technical Report CMU-CS-16-118, CMU School of Computer Science.
Michaela Regneri and Diane King. 2016. Automated discourse analysis of narrations by adolescents with autistic spectrum disorder. In Proceedings of the 7th Workshop on Cognitive Aspects of Computational Language Learning (CogACLL).
Victoria Yaneva, Irina Temnikova, and Ruslan Mitkov. 2015. Accessible texts for autism: An eye-tracking study. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS).