Cross-Device Typing Behaviour Generation
Description: Cross-device behaviour generation can be used for data augmentation, reducing the amount of data that must be collected and benefiting user testing. For example, Yuan et al. generated gesture data in VR, which are costly to collect, from desktop gesture data, which are easier to collect [1]. In this project, you will go beyond two devices and explore a three-device setting: desktop, tablet, and phone (from dataset [2]).
Goal:
- Analyse if and how users type differently on these different devices
- Build generative models, such as diffusion-based models [3], to simulate realistic (both user- and device-specific) typing behaviour across devices
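To make the diffusion-based direction concrete, the sketch below shows the forward (noising) process of a standard DDPM applied to a toy sequence of per-keystroke features (e.g. key hold times and flight times). This is only an illustration of the general technique, not the method of [3]; the function names, the feature layout, and the linear beta schedule are all assumptions for the example.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM
    with the given beta (noise) schedule."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]  # cumulative product up to step t
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise  # the noise is the regression target during training

# Toy typing sequence: 50 keystrokes, 2 features each (hypothetical layout)
rng = np.random.default_rng(0)
x0 = rng.standard_normal((50, 2))
betas = np.linspace(1e-4, 0.02, 1000)  # linear schedule (an assumption)
xt, eps = forward_diffuse(x0, 999, betas, rng)  # near pure noise at the last step
```

A denoising network (in PyTorch, per the requirements) would then be trained to predict `eps` from `xt` and `t`, optionally conditioned on user and device identifiers to capture user- and device-specific typing behaviour.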
Supervisor: Guanhua Zhang
Distribution: 10% Literature, 10% Statistical analysis, 30% Data preparation, 50% Deep learning
Requirements: Strong deep learning and mathematics skills, practical experience in PyTorch
Literature:
[1] Yuan et al., Generating Virtual Reality Stroke Gesture Data from Out-of-Distribution Desktop Stroke Gesture Data, IEEE VR’24
[2] Belman et al., SU-AIS BB-MAS (Syracuse University and Assured Information Security - Behavioral Biometrics Multi-device and multi-Activity data from Same users) Dataset, IEEE Dataport, 2019
[3] Jiao et al., DiffGaze: A Diffusion Model for Continuous Gaze Sequence Generation on 360° Images, arXiv 2024