Gaze Uncertainty Prediction for InfoVis
Description: Eye-tracking data always contain errors, which depend on the eye tracker itself and on the quality of calibration (see Figure 1). In previous work (see Figure 2), we proposed two effective metrics, the Flipping Candidate Rate (FCR) and the Hit Any AOI Rate (HAAR), to quantify the impact of gaze uncertainty on information visualisations. When the raw gaze data are not available, can we still get a rough idea of how large the gaze uncertainty (estimation error) is, and can we predict or even correct it?
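The exact metric definitions are given in [2]; as a rough illustration only, the sketch below computes a hit-any-AOI rate for fixations given as (x, y) points and AOIs given as axis-aligned rectangles, and models calibration error as isotropic Gaussian noise. The function names and the noise model are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def hit_any_aoi_rate(fixations, aois):
    """Fraction of fixations that fall inside at least one AOI.

    fixations: (N, 2) array of (x, y) gaze positions in pixels.
    aois: list of (x_min, y_min, x_max, y_max) rectangles in pixels.
    """
    fixations = np.asarray(fixations, dtype=float)
    hits = np.zeros(len(fixations), dtype=bool)
    for x0, y0, x1, y1 in aois:
        inside = ((fixations[:, 0] >= x0) & (fixations[:, 0] <= x1) &
                  (fixations[:, 1] >= y0) & (fixations[:, 1] <= y1))
        hits |= inside  # a fixation counts if it hits any AOI
    return hits.mean()

def haar_under_uncertainty(fixations, aois, sigma_px, n_samples=100, seed=0):
    """Average HAAR over Monte-Carlo perturbations of the gaze data,
    modelling calibration error as Gaussian noise with std sigma_px
    (an illustrative assumption, not the model used in [2])."""
    rng = np.random.default_rng(seed)
    fixations = np.asarray(fixations, dtype=float)
    rates = [hit_any_aoi_rate(
                 fixations + rng.normal(0.0, sigma_px, fixations.shape), aois)
             for _ in range(n_samples)]
    return float(np.mean(rates))
```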
Goal:
* Process the eye-tracking data of the VisRecall dataset [1]
* Follow the previous paper [2] to analyse the correlation between calibration error and the proposed metrics (a minimal correlation sketch follows this list)
* Research questions:
- Is the gaze uncertainty (estimation error) predictable from post-hoc analysis?
- Can we automatically correct the gaze data if the calibration error is too high?
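As a starting point for the correlation analysis above, here is a minimal sketch assuming one calibration-error value and one metric value (FCR or HAAR) per recording. The stand-in data, array names, and the choice of Spearman's rho are illustrative assumptions; the real values would come from the VisRecall recordings [1] and the metric implementations from [2].

```python
import numpy as np
from scipy import stats

def metric_error_correlation(calibration_error, metric_values):
    """Correlate per-recording calibration error with a metric (FCR or HAAR).

    Both inputs are 1-D arrays with one entry per recording. Spearman's rho
    is used here because neither quantity is guaranteed to be normally
    distributed (a methodological assumption for this sketch)."""
    rho, p = stats.spearmanr(calibration_error, metric_values)
    return rho, p

# Usage with synthetic stand-in data:
rng = np.random.default_rng(42)
cal_err = rng.uniform(0.3, 1.5, size=40)             # degrees of visual angle
fcr = 0.1 + 0.2 * cal_err + rng.normal(0, 0.03, 40)  # toy FCR values
print(metric_error_correlation(cal_err, fcr))
```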
Supervisor: Yao Wang
Distribution: 30% Literature, 40% Implementation & Experiment, 30% Evaluation & Analysis
Requirements:
* Good programming skills in Python; experience with deep learning is a plus
* Basic knowledge of data analysis
* Keen interest in research (may lead to a publication)
Literature: [1] Wang, Y., C. Jiao, M. Bâce, and A. Bulling. 2022. VisRecall: Quantifying Information Visualisation Recallability via Question Answering. IEEE Transactions on Visualization and Computer Graphics (TVCG), p. 1–12.
[2] Wang, Y., M. Koch, M. Bâce, D. Weiskopf, and A. Bulling. 2022. Impact of Gaze Uncertainty on AOIs in Information Visualisations. 2022 Symposium on Eye Tracking Research and Applications (ETRA), p. 1–6.