
Towards Multi-Modal Context Recognition for Hearing Instruments by Analysing Eye and Head Movements

Bernd Tessendorf, Andreas Bulling, Daniel Roggen, Thomas Stiefmeier, Gerhard Tröster, Manuela Feilner, Peter Derleth

Proc. IEEE International Symposium on Wearable Computers (ISWC), pp. 1-2, 2010.




Abstract

Current hearing instruments (HIs) rely solely on auditory scene analysis to adapt to the user's situation. These systems are therefore limited in the number and type of situations they can detect. We investigate how context information derived from eye and head movements can be used to resolve such situations. We focus on two example problems that are challenging for current HIs: distinguishing concentrated work from interaction, and detecting whether a person is walking alone or walking while having a conversation. We collect an eleven-participant (6 male, 5 female, aged 24-59) dataset that covers typical office activities. Using person-independent training and isolated recognition, we achieve an average precision of 71.7% (recall: 70.1%) for recognising concentrated work and a precision of 57.2% (recall: 81.3%) for detecting walking while conversing.
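
The paper itself does not include code. The following is a minimal sketch of how a person-independent evaluation with precision and recall could be set up, i.e. leave-one-participant-out cross-validation so that no participant appears in both training and test data. All data, feature dimensions, labels, and the choice of classifier are illustrative assumptions, not the authors' implementation.

# Sketch: leave-one-participant-out evaluation with precision/recall.
# Features, labels, and classifier are placeholders (assumptions).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score

# X: per-instance features derived from eye and head movements (random placeholder data)
# y: binary labels, e.g. 1 = "concentrated work", 0 = other office activity
# groups: participant IDs, so each fold holds out one participant entirely
rng = np.random.default_rng(0)
X = rng.normal(size=(110, 8))
y = rng.integers(0, 2, size=110)
groups = np.repeat(np.arange(11), 10)  # 11 participants, 10 instances each

precisions, recalls = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    precisions.append(precision_score(y[test_idx], y_pred, zero_division=0))
    recalls.append(recall_score(y[test_idx], y_pred, zero_division=0))

print(f"average precision: {np.mean(precisions):.1%}, average recall: {np.mean(recalls):.1%}")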

BibTeX

@inproceedings{tessendorf10_iswc,
  author    = {Tessendorf, Bernd and Bulling, Andreas and Roggen, Daniel and Stiefmeier, Thomas and Tr{\"{o}}ster, Gerhard and Feilner, Manuela and Derleth, Peter},
  title     = {Towards Multi-Modal Context Recognition for Hearing Instruments by Analysing Eye and Head Movements},
  booktitle = {Proc. IEEE International Symposium on Wearable Computers (ISWC)},
  year      = {2010},
  pages     = {1-2},
  doi       = {10.1109/ISWC.2010.5665855}
}