Federated Learning for Non-IID Gaze Estimation
Description: Appearance-based gaze estimation is a computer vision task that aims to predict either the 2D point of regard or the 3D gaze direction. Mobile devices constantly generate massive volumes of data that are of great value for training gaze estimation models. However, due to privacy concerns, collecting private data from mobile devices in the cloud for centralized model training poses several challenges. In addition, data samples are usually characterized by non-independent and identically distributed (non-IID) skews across devices. Federated Learning (FL) has therefore emerged as a new paradigm of distributed machine learning that orchestrates model training across devices without collecting any raw data from users.
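To make the FL setting concrete, below is a minimal sketch of FedAvg-style training on a toy scalar model (loss = (w·x − y)²). All names, the learning rate, and the two-client non-IID split are illustrative assumptions, not part of the project; a real implementation would train PyTorch gaze models on image data.

```python
def local_update(w, data, lr=0.1, epochs=1):
    # Simulated local SGD on a 1-D linear model: loss = (w*x - y)^2
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fedavg(global_w, client_datasets):
    # Each client trains locally; the server averages the resulting
    # weights, weighted by each client's sample count (McMahan et al. 2017).
    updates = [local_update(global_w, data) for data in client_datasets]
    sizes = [len(data) for data in client_datasets]
    total = sum(sizes)
    return sum(w * n for w, n in zip(updates, sizes)) / total

# Two clients with differently sized (non-IID) samples from y = 2x
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fedavg(w, clients)
print(round(w, 2))  # converges toward the true slope 2.0
```

The weighted average is the core of FedAvg: raw samples never leave the clients, only model weights do, which is exactly the privacy property motivating this project.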
Goal: The goal of this project is to develop a federated learning model that counterbalances the bias introduced by non-IID gaze data via a 'Mixture-of-Experts with Expert Choice Routing' approach.
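As a hedged sketch of the routing idea: in expert choice routing, each expert selects its top-k inputs (rather than each input selecting experts), which guarantees balanced expert load by construction. The helper names, gating weights, and tokens below are illustrative assumptions; the project would implement this as differentiable PyTorch layers.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def expert_choice_route(tokens, gate_weights, capacity):
    """Each expert picks its top-`capacity` tokens by routing score.

    tokens: list of feature vectors; gate_weights: one gating vector
    per expert. Returns {expert_index: [chosen token indices]}.
    """
    # scores[t][e] = softmax-normalized affinity of token t for expert e
    scores = []
    for tok in tokens:
        logits = [sum(w_i * x_i for w_i, x_i in zip(w, tok))
                  for w in gate_weights]
        scores.append(softmax(logits))
    assignment = {}
    for e in range(len(gate_weights)):
        ranked = sorted(range(len(tokens)),
                        key=lambda t: scores[t][e], reverse=True)
        assignment[e] = ranked[:capacity]  # fixed load per expert
    return assignment

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]  # two experts
assignment = expert_choice_route(tokens, gate_weights, capacity=2)
print(assignment)  # every expert processes exactly `capacity` tokens
```

Because every expert takes exactly `capacity` inputs, no expert is starved or overloaded even when the incoming data distribution is skewed, which is why this routing scheme is a natural fit for non-IID federated clients.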
Supervisor: Mayar Elfares
Distribution: 20% Literature, 60% Implementation, 20% Analysis
Requirements: Good Python skills (Pytorch) & basic knowledge of probability
Zhang, Xucong, Yusuke Sugano, Mario Fritz, and Andreas Bulling. 2015. Appearance-Based Gaze Estimation in the Wild. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Zhang, Xucong, Yusuke Sugano, Mario Fritz, and Andreas Bulling. 2017. It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
McMahan, H. Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS).