
Neuroscience Inspired Computer Vision


Description: Artificial computational systems, in some cases leveraging physiological data, now rival human performance on certain tasks. However, a large gap remains between human and machine perception and understanding, which motivates work to bridge it.

On the one hand, machines require large amounts of training data and are generally trained in closed-world settings, which limits their out-of-domain generalization. Humans, on the other hand, can learn complex visual object categories from just a handful of examples. For context, the ImageNet challenge winner (Krizhevsky et al., 2012) was trained on over one million labelled examples, roughly the number of visual fixations a human makes in about a week (at approximately three saccades per second while awake).
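A rough back-of-envelope estimate, assuming about 16 waking hours per day (an assumption added here, not part of the figures above), makes this comparison concrete:

3 fixations/s × 3,600 s/h × 16 h/day ≈ 170,000 fixations per day
1,000,000 training images ÷ 170,000 fixations/day ≈ 6 days of visual fixations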

Representations in biological systems differ both quantitatively and qualitatively from those in deep neural networks, and there is still much that neuroscience can teach deep learning. We therefore propose to train a mapping from physiological feature vectors (i.e., cortical activity) to DNN feature vectors, with the goal of disentangling their information content.
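As a concrete starting point, the sketch below shows what such a mapping could look like in PyTorch; the voxel count, feature dimension, and two-layer architecture are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch (PyTorch), assuming a hypothetical voxel count and DNN feature
# dimension; the actual architecture is part of the project, not fixed here.
import torch
import torch.nn as nn

N_VOXELS = 4000     # hypothetical number of visual-cortex voxels per fMRI sample
N_FEATURES = 2048   # hypothetical DNN feature dimension (e.g. a ResNet pooling layer)

class VoxelToFeature(nn.Module):
    """Maps an fMRI voxel vector to a DNN feature vector."""
    def __init__(self, n_voxels: int, n_features: int):
        super().__init__()
        self.map = nn.Sequential(
            nn.Linear(n_voxels, 1024),
            nn.ReLU(),
            nn.Linear(1024, n_features),
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        return self.map(voxels)

model = VoxelToFeature(N_VOXELS, N_FEATURES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a dummy batch; real training pairs each fMRI response
# with the DNN features of the image the subject was viewing.
voxels = torch.randn(8, N_VOXELS)
target_features = torch.randn(8, N_FEATURES)
loss = loss_fn(model(voxels), target_features)
loss.backward()
optimizer.step()
```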

In the scope of this project, the student will work on the task of image reconstruction from fMRI data. The student first re-implements the model of Beliy et al. (2019) as the baseline. They then run experiments on top of this baseline, for example comparing pre-trained against randomly initialized networks or extracting features at different stages of the network. Subsequently, the student implements a novel system of their own, for instance combining the curiosity-driven reinforcement learning approach of Pathak et al. (2017) with an episodic memory. The overarching goal is to better understand what information content is held within the fMRI data. The student will therefore implement a model that reconstructs images from fMRI data and explore methods to incorporate additional physiological signals (such as EEG or eye tracking data); a minimal implementation sketch is given below.
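To make the implementation scope concrete, the sketch below shows two building blocks in PyTorch: a simple fMRI-to-image decoder and intermediate-feature extraction from a pre-trained network. The shapes, layer choices, and the VGG-16 backbone are illustrative assumptions and do not reproduce the exact architecture of Beliy et al. (2019).

```python
# Sketch of two building blocks the project needs (shapes and layer choices are
# illustrative assumptions): (1) a decoder mapping fMRI voxels to an image, and
# (2) intermediate-feature extraction from a pre-trained network, e.g. for a
#     perceptual reconstruction loss or the encoder side of the model.
import torch
import torch.nn as nn
from torchvision.models import vgg16
from torchvision.models.feature_extraction import create_feature_extractor

N_VOXELS = 4000  # hypothetical voxel count

class FMRIDecoder(nn.Module):
    """Decodes an fMRI voxel vector into a 3x64x64 image."""
    def __init__(self, n_voxels: int):
        super().__init__()
        self.fc = nn.Linear(n_voxels, 256 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),     # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        x = self.fc(voxels).view(-1, 256, 8, 8)
        return self.deconv(x)

decoder = FMRIDecoder(N_VOXELS)
reconstruction = decoder(torch.randn(2, N_VOXELS))  # -> (2, 3, 64, 64)

# Extract features at different stages of a pre-trained VGG-16 (downloads
# ImageNet weights on first use), e.g. to compare reconstructions and
# originals at several levels of abstraction.
backbone = vgg16(weights="IMAGENET1K_V1").eval()
extractor = create_feature_extractor(
    backbone, return_nodes={"features.4": "early", "features.30": "late"}
)
with torch.no_grad():
    feats = extractor(reconstruction)  # dict with "early" and "late" tensors
```

Note that the self-supervised setup of Beliy et al. additionally trains an encoder (image to fMRI) jointly with such a decoder so that unpaired images and unpaired fMRI recordings can both be used during training; reproducing that setup is part of the baseline task.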

Supervisor: Ekta Sood and Florian Strohm

Distribution: 20% Literature, 15% Data Processing, 40% Implementation + Experiments, 25% Data Analysis and Evaluation

Requirements: Interest in computer vision and cognitive modeling, familiarity with data processing and analysis/statistics, and experience with machine learning. Exposure to at least one of the following frameworks is helpful: TensorFlow, PyTorch, or Keras.

Literature: Roman Beliy, Guy Gaziv, Assaf Hoogi, Francesca Strappini, Tal Golan, and Michal Irani. 2019. From voxels to pixels and back: Self-supervision in natural image reconstruction from fMRI. Advances in Neural Information Processing Systems (NeurIPS).

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (NeurIPS).

Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. 2017. Curiosity-driven exploration by self-supervised prediction. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).