Disentangled Face Embeddings
Description: Current face embedding methods such as ArcFace encode a face image into a 512-dimensional vector called an embedding. The advantage of such embeddings is that the vector similarity between two embeddings correlates with the visual similarity of the corresponding faces. However, a limitation of these whole-face embeddings is that faces cannot be compared at a finer level. The goal of this project is to train a face embedding model that encodes parts of the face (eyes, nose, mouth, etc.) separately. For this, you will use a dataset of faces with corresponding mask segmentations to extract the individual face parts. You will then train a neural network to embed these parts separately and analyse the model's performance with a user study.
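The part-wise pipeline described above can be sketched roughly as follows. This is a minimal illustration only: the part labels, image sizes, and the random-projection "encoder" are hypothetical stand-ins (the actual project would use a real segmentation dataset and a trained neural network), but the overall flow is the same: isolate each part via its mask, encode it into its own vector, and compare faces part by part.

```python
import numpy as np

# Hypothetical part labels in the segmentation mask (dataset-specific in practice).
PART_LABELS = {"eyes": 1, "nose": 2, "mouth": 3}
EMBED_DIM = 64

rng = np.random.default_rng(0)
# Stand-in for a trained encoder: a fixed random projection per part.
# In the actual project this would be a neural network trained end to end.
projections = {p: rng.standard_normal((32 * 32 * 3, EMBED_DIM)) for p in PART_LABELS}

def extract_part(image, mask, label, size=32):
    """Zero out everything except the given part, crop its bounding box,
    and resize to a fixed input size with crude nearest-neighbour sampling."""
    part = np.where((mask == label)[..., None], image, 0.0)
    ys, xs = np.nonzero(mask == label)
    crop = part[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    yi = np.linspace(0, crop.shape[0] - 1, size).astype(int)
    xi = np.linspace(0, crop.shape[1] - 1, size).astype(int)
    return crop[yi][:, xi]

def embed_parts(image, mask):
    """Return one L2-normalised embedding per face part."""
    embeddings = {}
    for name, label in PART_LABELS.items():
        x = extract_part(image, mask, label).reshape(-1)
        e = x @ projections[name]
        embeddings[name] = e / (np.linalg.norm(e) + 1e-8)
    return embeddings

# Toy face image and mask with three labelled regions.
image = rng.random((128, 128, 3))
mask = np.zeros((128, 128), dtype=int)
mask[30:50, 30:98] = 1   # eyes
mask[55:80, 55:73] = 2   # nose
mask[90:110, 45:83] = 3  # mouth

emb_a = embed_parts(image, mask)
emb_b = embed_parts(rng.random((128, 128, 3)), mask)

# Part-level comparison between two faces, e.g. only the noses.
nose_sim = float(emb_a["nose"] @ emb_b["nose"])
```

Because each part gets its own normalised vector, cosine similarity can be computed per part, which is exactly the finer-grained comparison a single whole-face embedding cannot provide.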
Supervisor: Florian Strohm
Distribution:
Bachelor: 40% implementation, 25% data preparation, 35% analysis and evaluation.
Master: 50% implementation, 10% data preparation, 40% analysis and evaluation.
Requirements: Good Python skills, experience with deep learning (master), and an interest in computer vision.