Predicting Recallability from Gaze Behaviour on InfoVis
Description: People use different strategies to memorise information visualisations (InfoVis). Some alternate between the descriptive text and the data, while others keep focusing on specific sentences or numbers. Our VisRecall dataset contains gaze data recorded during visual exploration, together with participants' accuracy on questions asked after exploration. These question accuracies are termed recallability scores and indicate how well the answers can be recalled. From a participant-dependent perspective, we argue that different visual exploration strategies may influence how much information people can recall correctly. We aim to identify correlations between visual exploration strategies and recallability scores.
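As a toy illustration of how such scores could be derived from question accuracies (the exact aggregation used in the VisRecall study may differ), one can average the per-question correctness across participants; the data below is made up for the example:

```python
import numpy as np

# answers[i, j] = 1 if participant i answered question j correctly, else 0
# (toy data; real correctness labels come from the VisRecall study)
answers = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
])

# per-question recallability: fraction of participants who recalled the
# answer correctly, i.e. a score in [0, 1] for each question
recallability = answers.mean(axis=0)
```

A score near 1 then means almost everyone could recall the answer; a score near 0 means almost no one could.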
The goal of this project is to develop a computational model that predicts recallability scores given a scanpath over an InfoVis. To this end, you should build an end-to-end neural network that takes a scanpath and an image as input and outputs recallability score(s). The second step is to analyse the results of your network quantitatively and qualitatively. You may group the scanpaths into several visual exploration strategies and analyse which strategy is best suited for memorability studies. You should also compare the predicted recallability scores with the human ground truth.
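One minimal sketch of such a two-stream model, written in plain numpy so it stays self-contained (a real implementation would use PyTorch or TensorFlow/Keras and learned encoders; the feature choices and layer sizes below are illustrative assumptions, not the prescribed architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_scanpath(scanpath):
    """scanpath: (n_fixations, 3) array of (x, y, duration).
    Crude hand-crafted summary: spatial mean/std plus duration
    statistics -> 7-dim feature. A real model would use an RNN
    or Transformer over the fixation sequence."""
    mean = scanpath[:, :2].mean(axis=0)
    std = scanpath[:, :2].std(axis=0)
    dur = scanpath[:, 2]
    return np.concatenate([mean, std, [dur.sum(), dur.mean(), len(scanpath)]])

def encode_image(image, grid=4):
    """image: (H, W) grayscale array; mean-pool into grid x grid
    patches -> 16-dim feature. A real model would use a CNN backbone."""
    H, W = image.shape
    crop = image[:H - H % grid, :W - W % grid]
    return crop.reshape(grid, crop.shape[0] // grid,
                        grid, crop.shape[1] // grid).mean(axis=(1, 3))

class RecallabilityMLP:
    """Fuses the two feature streams and maps them to one score in (0, 1).
    Weights are random here; training code is omitted."""
    def __init__(self, in_dim, hidden=32):
        self.W1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, x):
        h = np.maximum(0.0, x @ self.W1 + self.b1)   # ReLU hidden layer
        logit = h @ self.W2 + self.b2
        return 1 / (1 + np.exp(-logit))              # sigmoid -> (0, 1)

# toy inputs: 12 random fixations and a 64x64 image
scanpath = rng.uniform(0, 1, (12, 3))
image = rng.uniform(0, 1, (64, 64))
x = np.concatenate([encode_scanpath(scanpath), encode_image(image).ravel()])
model = RecallabilityMLP(x.size)
score = model(x)
```

Concatenating the two encodings before the MLP is only one fusion choice; attention between the scanpath and image features would be a natural alternative.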
Supervisor: Yao Wang
Distribution: 20% Literature, 50% Implementation, 30% Evaluation
Requirements: Strong programming skills, interest in eye-tracking studies, experienced with deep learning (Tensorflow/Keras, Pytorch)
Wang, Y., C. Jiao, M. Bâce, and A. Bulling. 2022. VisRecall: Quantifying Information Visualisation Recallability via Question Answering. IEEE Transactions on Visualization and Computer Graphics (TVCG), pp. 1-12.
Borkin, M. A., Z. Bylinskii, N. W. Kim, et al. 2015. Beyond Memorability: Visualization Recognition and Recall. IEEE Transactions on Visualization and Computer Graphics, 22(1), pp. 519-528.