Prediction of Search Targets From Fixations in Open-World Settings
We recorded fixation data from 18 participants (nine male) of different nationalities, aged between 18 and 30 years. Nine participants had impaired eyesight that was corrected with contact lenses or glasses.
To record gaze data we used a stationary Tobii TX300 eye tracker that provides binocular gaze data at a sampling frequency of 300 Hz. Parameters for fixation detection were left at their defaults: the minimum fixation duration was 60 ms and the maximum time between fixations was 75 ms. The stimuli were shown on a 30-inch screen with a resolution of 2560×1600 pixels. Participants were randomly assigned to search for targets from one of the three stimulus types.
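The two detection parameters above describe a standard fixation filter: raw fixation candidates separated by less than 75 ms are merged, and fixations shorter than 60 ms are discarded. A minimal sketch of such a filter (our own illustration, not Tobii's implementation):

```python
# Illustrative fixation filter; NOT the Tobii TX300 firmware algorithm.
# Each candidate fixation is a tuple (start_ms, end_ms, x, y).

MAX_GAP_MS = 75   # maximum time between fixations before merging (dataset default)
MIN_DUR_MS = 60   # minimum fixation duration to keep (dataset default)

def filter_fixations(candidates):
    """Merge temporally close fixation candidates, then drop short ones."""
    merged = []
    for start, end, x, y in candidates:
        if merged and start - merged[-1][1] <= MAX_GAP_MS:
            # Gap is small: extend the previous fixation and average positions.
            ps, pe, px, py = merged[-1]
            merged[-1] = (ps, end, (px + x) / 2, (py + y) / 2)
        else:
            merged.append((start, end, x, y))
    return [f for f in merged if f[1] - f[0] >= MIN_DUR_MS]
```

For example, two 40 ms candidates separated by a 40 ms gap would be merged into one 120 ms fixation and kept, while an isolated 30 ms candidate would be discarded.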
The dataset contains three categories: “Amazon”, “O’Reilly” and “Mugshots”. Each category has a folder with four subfolders: search targets, Collages, gaze data, and binary masks, which we used to obtain the position of each individual image in the collages.
* The subfolder search targets contains the five single images that participants were asked to find in the collages.
* The folder Collages contains five subfolders, one named after each search target; each subfolder holds the 20 collages that participants saw while searching for that target.
* The folder gaze data contains, for each recording, the media name, fixation order, fixation position on the screen, and pupil size for the left and right eye.
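Because the binary masks encode where each individual image sits within a collage, a fixation can be attributed to an image by looking up the mask that covers its screen coordinates. A hedged sketch of this lookup (the per-image boolean-mask encoding is an assumption; adapt it to the actual mask files):

```python
import numpy as np

def fixation_to_image(masks, x, y):
    """Return the index of the collage image whose mask covers pixel (x, y),
    or None if no mask covers it.

    masks: list of 2D boolean arrays, one per image, in screen coordinates
    (assumed encoding; the dataset's masks may pack all images into one file).
    """
    for idx, mask in enumerate(masks):
        if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[y, x]:
            return idx
    return None

# Illustrative masks: two side-by-side images on a tiny 4x8 "screen".
left = np.zeros((4, 8), dtype=bool);  left[:, :4] = True
right = np.zeros((4, 8), dtype=bool); right[:, 4:] = True
```

Applying this lookup to each fixation in a recording yields the sequence of images the participant inspected, which is the representation used to analyse search behaviour over the collages.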
Download: Please download the full dataset here (374.9 MB).
Contact: Andreas Bulling
The data may only be used for non-commercial scientific purposes. If you use this dataset in a scientific publication, please cite the following paper:
Prediction of Search Targets From Fixations in Open-World Settings
Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015