
Image Synthesis for Appearance-based Gaze Estimation


Description: Appearance-based gaze estimation (Zhang et al. 2015) is a computer vision task whose goal is to predict either the 3D gaze direction or the 2D point of regard. With recent advances in machine learning, and in particular deep learning, methods have achieved state-of-the-art performance on this task. However, such methods require large-scale labelled datasets, which are difficult to collect. A promising alternative is to rely on methods that can synthesise eye or face images (He et al. 2019, Yu & Odobez 2020, Zheng et al. 2020) and use these images for data augmentation.
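When comparing existing synthesis methods against a gaze-estimation baseline, 3D gaze predictions are commonly evaluated by the angular error between the predicted and ground-truth gaze vectors. The following is a minimal sketch of that metric, assuming gaze is represented as 3D direction vectors (the function name and NumPy-based formulation are illustrative, not taken from the cited papers):

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Angular error in degrees between batches of 3D gaze vectors.

    pred, gt: arrays of shape (N, 3); directions need not be unit-length.
    Returns an array of shape (N,) with the per-sample angular error.
    """
    # Normalise both sets of vectors to unit length.
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    # Cosine of the angle between corresponding vectors,
    # clipped to [-1, 1] to guard against floating-point overshoot.
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```

For example, identical directions yield 0 degrees and orthogonal directions yield 90 degrees; reported results in the gaze-estimation literature are typically the mean of this per-sample error over a test set.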

The goal of this project is to first evaluate existing methods and then propose a novel, improved method to synthesise images. Image source: He et al. 2019.

Supervisor: Mihai Bâce

Distribution: 30% Literature, 10% Data Preparation, 40% Implementation, 20% Analysis and Evaluation

Requirements: Computer vision; experience with TensorFlow/PyTorch

Literature: Zhang, Xucong, Yusuke Sugano, Mario Fritz and Andreas Bulling. 2015. Appearance-based gaze estimation in the wild. Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

He, Zhe, Adrian Spurr, Xucong Zhang and Otmar Hilliges. 2019. Photo-Realistic Monocular Gaze Redirection Using Generative Adversarial Networks. Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV).

Yu, Yuechen and Jean-Marc Odobez. 2020. Unsupervised Representation Learning for Gaze Estimation. Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Zheng, Yufeng, Seonwook Park, Xucong Zhang, Shalini De Mello and Otmar Hilliges. 2020. Self-Learning Transformations for Improving Gaze and Head Redirection. arXiv:2010.12307.