
Learning Representations of Screentouch Behaviour


Description: Learning generalised representations of interactive behaviour not only reduces the laborious work of designing different features or models for each task, but also improves data efficiency. Such representations can be learned in a self-supervised and robust manner with methods such as contrastive learning and (masked) autoencoders. These methods have achieved groundbreaking success in natural language processing and computer vision, but remain under-explored in HCI research.
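
As a concrete illustration, below is a minimal PyTorch sketch of the NT-Xent contrastive objective used by SimCLR-style methods and, in adapted form, by the time-series approaches in [1, 2]. The encoder, window shape, and augmentations are assumptions made for illustration, not part of any specific dataset or method in this project.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.1):
        # z1, z2: (batch, dim) embeddings of two augmented views of the
        # same batch of behavioural time-series windows (from an encoder).
        batch = z1.size(0)
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim)
        sim = z @ z.t() / temperature                       # pairwise cosine similarity
        sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
        # The positive for each view is the other view of the same window.
        targets = torch.cat([torch.arange(batch, 2 * batch),
                             torch.arange(0, batch)])
        return F.cross_entropy(sim, targets)

In practice, z1 and z2 would come from two stochastic augmentations (e.g., jittering or scaling) of the same windows; [1] additionally contrasts temporal contexts, and [3] studies the effect of stronger augmentations.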

Goal: Apply self-supervised representation learning methods to screentouch behavioural data to learn representations; transfer the learned representations to other datasets and evaluate them on different downstream tasks, such as interactive task recognition, next-activity prediction, and emotion recognition.
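
A common protocol for the transfer-and-evaluate step is linear probing: freeze the pretrained encoder and train only a linear classifier on the labelled target dataset. The sketch below assumes a generic encoder that maps windows to fixed-size embeddings and a standard PyTorch data loader; all names are illustrative.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def embed(encoder, x):
        encoder.eval()
        return encoder(x)  # (batch, dim) frozen representations

    def linear_probe(encoder, loader, dim, num_classes, epochs=10, lr=1e-3):
        # Train only the linear classifier; the encoder stays frozen.
        probe = nn.Linear(dim, num_classes)
        opt = torch.optim.Adam(probe.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:  # x: behaviour windows, y: downstream labels
                loss = nn.functional.cross_entropy(probe(embed(encoder, x)), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return probe

Downstream accuracy of such a probe (versus training from scratch) is one way to quantify how general and data-efficient the learned representations are.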

Supervisor: Guanhua Zhang

Distribution: 20% Literature, 20% Data preparation, 40% Deep learning, 20% Analysis and discussion

Requirements: Strong programming skills, experience in deep learning, familiarity with PyTorch.

Literature:

[1] Eldele et al. 2021. Time-Series Representation Learning via Temporal and Contextual Contrasting. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21).

[2] Yue et al. 2022. TS2Vec: Towards Universal Representation of Time Series. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-22).

[3] Wang et al. 2022. Contrastive Learning with Stronger Augmentations. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).