MPIIGaze: Appearance-Based Gaze Estimation in the Wild
We present the MPIIGaze dataset, which contains 213,659 images collected from 15 participants during natural everyday laptop use over more than three months. The number of images per participant ranges from 1,498 to 34,745. Our dataset is significantly more variable than existing ones with respect to appearance and illumination.
The dataset consists of three parts: “Data”, “Evaluation Subset” and “Annotation Subset”.
* The “Data” part includes “Original”, “Normalized” and “Calibration” data for all 15 participants.
* The “Evaluation Subset” contains an image list indicating the samples selected for the evaluation subset used in our paper.
* The “Annotation Subset” contains an image list indicating the 10,848 samples that we manually annotated. Each of these images is annotated with the (x, y) positions of six facial landmarks (four eye corners, two mouth corners) and the (x, y) positions of the two pupil centers.
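Since each annotated image carries six facial landmarks plus two pupil centers, one annotation amounts to eight (x, y) pairs, i.e. 16 coordinate values. A minimal sketch of a parser for one such annotation record, assuming a whitespace-separated line with the image path first (the actual file layout in the release may differ; `parse_annotation_line` and the field names are hypothetical):

```python
def parse_annotation_line(line):
    """Parse one hypothetical annotation line: image path followed by
    16 numbers, i.e. (x, y) for four eye corners, two mouth corners,
    and two pupil centers. The real MPIIGaze file layout may differ;
    this only illustrates the annotation structure described above."""
    parts = line.split()
    image_path = parts[0]
    values = [float(v) for v in parts[1:]]
    assert len(values) == 16, "expected 6 landmarks + 2 pupil centers = 16 coords"
    # Group the flat list into (x, y) pairs.
    points = [(values[i], values[i + 1]) for i in range(0, 16, 2)]
    return {
        "image": image_path,
        "eye_corners": points[0:4],     # four eye corners
        "mouth_corners": points[4:6],   # two mouth corners
        "pupil_centers": points[6:8],   # two pupil centers
    }
```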
More information can be found here.
Download: Please download the full dataset here (2.1 GB).
Contact: Andreas Bulling
The data is only to be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:
MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 41(1).
Appearance-Based Gaze Estimation in the Wild.
Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).