
Facial Composite Generation with Iterative Human Feedback

Florian Strohm, Ekta Sood, Dominike Thomas, Mihai Bâce, Andreas Bulling

Proceedings of the NeurIPS Workshop Gaze Meets ML (GMML), pp. 1–19, 2022.

Oral presentation


Abstract

We propose the first method in which human and AI collaborate to iteratively reconstruct the human’s mental image of another person’s face only from their eye gaze. Current tools for generating digital human faces involve a tedious and time-consuming manual design process. While gaze-based mental image reconstruction represents a promising alternative, previous methods still assumed prior knowledge about the target face, thereby severely limiting their practical usefulness. The key novelty of our method is a collaborative, iterative query engine: based on the user’s gaze behaviour in each iteration, our method predicts which images to show to the user in the next iteration. Results from two human studies (N=12 and N=22) show that our method can visually reconstruct digital faces that are more similar to the mental image, and is more usable than existing methods. As such, our findings point to the significant potential of human-AI collaboration for reconstructing mental images, potentially also beyond faces, and of human gaze as a rich source of information and a powerful mediator in said collaboration.
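The abstract only describes the iterative query engine at a high level. As a rough illustration of what such a gaze-driven loop could look like, the sketch below maintains a latent face estimate, shows candidate faces decoded from latents sampled around it, and nudges the estimate towards the candidates the user looked at longest. All names and design choices here (decode_face, measure_gaze, the gaze-weighted update rule, the latent dimensionality) are illustrative assumptions and are not taken from the paper.

import numpy as np

def gaze_weighted_update(current_estimate, candidate_latents, gaze_durations, lr=0.5):
    # Move the latent estimate towards the candidates the user fixated longest.
    # Hypothetical update rule, not the authors' actual model.
    weights = np.asarray(gaze_durations, dtype=float)
    weights = weights / (weights.sum() + 1e-8)          # normalise fixation time per image
    gaze_target = (weights[:, None] * candidate_latents).sum(axis=0)
    return (1 - lr) * current_estimate + lr * gaze_target

def sample_candidates(estimate, n_candidates=8, noise=0.6, rng=None):
    # Propose the next batch of face latents around the current estimate.
    rng = np.random.default_rng() if rng is None else rng
    return estimate[None, :] + noise * rng.standard_normal((n_candidates, estimate.shape[0]))

def iterative_query_loop(decode_face, measure_gaze, latent_dim=512, n_iterations=10):
    # Toy human-in-the-loop reconstruction loop.
    # decode_face: maps a latent vector to a face image (e.g. a generative model).
    # measure_gaze: shows the images and returns per-image fixation durations.
    # Both are placeholders for components the paper only describes at a high level.
    rng = np.random.default_rng(0)
    estimate = rng.standard_normal(latent_dim)
    for it in range(n_iterations):
        candidates = sample_candidates(estimate, rng=rng, noise=0.6 * 0.9 ** it)
        images = [decode_face(z) for z in candidates]
        durations = measure_gaze(images)                # seconds of gaze per image
        estimate = gaze_weighted_update(estimate, candidates, durations)
    return decode_face(estimate)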

Links


BibTeX

@inproceedings{strohm22_gmml,
  title     = {Facial Composite Generation with Iterative Human Feedback},
  author    = {Strohm, Florian and Sood, Ekta and Thomas, Dominike and Bâce, Mihai and Bulling, Andreas},
  year      = {2022},
  booktitle = {Proceedings of the NeurIPS Workshop Gaze Meets ML (GMML)},
  doi       = {},
  pages     = {1--19}
}