Our goal is to develop next-generation human-machine interfaces that offer human-like interactive capabilities. To this end, we research fundamental computational methods as well as ambient and on-body systems to sense, model, and analyse everyday non-verbal human behaviour and cognition.
Humboldt Research Fellows
We invite applications from excellent PhD graduates who are interested in doing a postdoc in our group. We are eligible to host outstanding postdoctoral researchers for up to two years through the prestigious Humboldt Research Fellowship. If you are interested in applying for a fellowship with our support, please follow these instructions.
Spotlight
- WACV'25: ActionDiffusion: An Action-aware Diffusion Model for Procedure Planning in Instructional Videos
- PACM HCI'24: Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research
- TVCG'24: Pose2Gaze: Eye-body Coordination during Daily Activities for Gaze Prediction from Full-body Poses
- TVCG'24: HOIMotion: Forecasting Human Motion During Human-Object Interactions Using Egocentric 3D Object Bounding Boxes
- UIST'24: DisMouse: Disentangling Information from Mouse Movement Data
- CHI'24: SalChartQA: Question-driven Saliency on Information Visualisations
- CHI'24: Mouse2Vec: Learning Reusable Semantic Representations of Mouse Behaviour
- ECCV'24: Multi-Modal Video Dialog State Tracking in the Wild
- WACV'24: VD-GR: Boosting Visual Dialog with Cascaded Spatial-Temporal Multi-Modal GRaphs
- LREC-COLING'24: OLViT: Multi-Modal State Tracking via Attention-Based Embeddings for Video-Grounded Dialog
- AAAI'24: Neural Reasoning About Agents’ Goals, Preferences, and Actions
- ACL'24: Limits of Theory of Mind Modelling in Dialogue-Based Collaborative Plan Acquisition
- ECAI'24: Explicit Modelling of Theory of Mind for Belief Prediction in Nonverbal Social Interactions
- IROS'24: GazeMotion: Gaze-guided Human Motion Forecasting
- CogSci'24: VSA4VQA: Scaling a Vector Symbolic Architecture to Visual Question Answering on Natural Images
- PG'24: GazeMoDiff: Gaze-guided Diffusion Model for Stochastic Human Motion Prediction
- CogSci'23: Improving Neural Saliency Prediction with a Cognitive Model of Human Visual Attention
- CHI'23: Impact of Privacy Protection Methods of Lifelogs on Remembered Memories
- UIST'23: SUPREYES: SUPer Resolution for EYES Using Implicit Neural Representation Learning
- UIST'23: Usable and Fast Interactive Mental Face Reconstruction
- TOCHI'22: Understanding, Addressing, and Analysing Digital Eye Strain in Virtual Reality Head-Mounted Displays
- TVCG'22: VisRecall: Quantifying Information Visualisation Recallability via Question Answering
- CHI'22: Designing for Noticeability: The Impact of Visual Importance on Desktop Notifications
- COLING'22: Neuro-Symbolic Visual Dialog
- TVCG'21: EHTask: Recognizing User Tasks from Eye and Head Movements in Immersive Virtual Reality
- TVCG'21: FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments
- CoNLL'21: VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in VQA
- CHI'21: A Critical Assessment of the Use of SSQ as a Measure of General Discomfort in VR Head-Mounted Displays
- ICCV'21: Neural Photofit: Gaze-based Mental Image Reconstruction