Project Inspiration
Below you will find some resources that may help you find inspiration for your projects, organised by topic.
⚠️ Note: The content on this page may be slightly biased towards the expertise of your TA, and is just indicative. Cognitive modelling is much broader, and you are welcome to explore other directions. Whether you stick to the suggestions below or go your own way, please make sure to check your project plans with the TA or course coordinator.
Language
Here are some recent, interesting papers about modelling human language with deep-learning models:
A two-dimensional space of linguistic representations shared across individuals, Tuckute et al., 2025;
Distributed sensitivity to syntax and semantics throughout the language network, Shain et al., 2024;
Hierarchical dynamic coding coordinates speech comprehension in the human brain, Gwilliams et al., 2025;
Large language models can segment narrative events similarly to humans, Michelmann et al., 2025.
Besides these specific papers, you may be interested in the work conducted in the labs of these professors:
* Evelina Fedorenko, MIT;
* Jean-Rémi King, Meta AI;
* David Poeppel, NYU;
* Mariya Toneva, MPI Saarbrücken;
* Jack Gallant, UC Berkeley.
Vision
Recent interesting papers:
* A large-scale examination of inductive biases shaping high-level visual representation in brains and machines, Conwell et al., 2024;
* Dimensions underlying the representational alignment of deep neural networks with humans, Mahner et al., 2025;
* Brain diffusion for visual exploration: Cortical discovery using large scale generative models, Luo et al., 2023;
* Assessing the Alignment of Popular CNNs to the Brain for Valence Appraisal, 2025.
Influential labs:
* Nikolaus Kriegeskorte, Columbia University;
* Talia Konkle, Harvard University;
* Leila Wehbe, Carnegie Mellon;
* Martin Hebart, Justus Liebig University and MPI Leipzig;
* Hans op de Beeck, KU Leuven;
* Iris Groen, UvA.
Multimodality – Vision and Language
Interesting papers:
* Modality-Agnostic Decoding of Vision and Language from fMRI, Nikolaus et al., 2025;
* Language models align with brain regions that represent concepts across modalities, Ryskina et al., 2025;
* Revealing Vision-Language Integration in the Brain with Multimodal Networks, Subramaniam et al., 2024;
* Better models of human high-level visual cortex emerge from natural language supervision with a large and diverse dataset, Wang et al., 2023.
Tips
Don’t panic if you can’t test SOTA models
Unfortunately, SOTA AI models are computationally intensive to run and most often don’t fit within the Colab free-tier compute budget. The good news is that using large, SOTA models is not necessarily the main goal of cognitive modelling. In fact, using “older” models may even be preferable, because you can interpret your results in light of existing interpretability and cognitive-modelling studies.
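To make this concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small (~124M-parameter) GPT-2 as the “older” model, of extracting per-layer hidden states: a typical starting point for relating model representations to brain or behavioural data. The example sentence is arbitrary.

```python
# A minimal sketch: extracting hidden states from a small, "older" model
# (GPT-2, ~124M parameters), which fits comfortably in Colab's free tier.
# The model choice and the example sentence are illustrative, not prescriptive.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

sentence = "The cat sat on the mat."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple: the embedding layer plus one tensor per
# transformer layer, each of shape (batch, tokens, hidden_size). These
# per-layer representations are what you would map onto brain responses
# or behavioural measures.
for layer, states in enumerate(outputs.hidden_states):
    print(f"layer {layer}: {tuple(states.shape)}")
```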
If brain responses are too complex to work with, go for behavioural data
Most of the studies mentioned above use brain data, which is extremely exciting (but also painful) to work with. If you struggle with brain data (too large, not preprocessed, badly documented), try using behavioural data instead, such as reaction times (RTs) or similarity judgments; they are usually easier to work with and often less noisy than neural data.
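For example, a common analysis with similarity judgments is representational similarity analysis (RSA): compare a model’s pairwise stimulus distances to humans’ pairwise dissimilarities. The sketch below uses random placeholder arrays in place of real model embeddings and human ratings, so the shapes and variable names are illustrative assumptions.

```python
# A minimal RSA sketch: correlate a model RDM with a human RDM built from
# similarity judgments. Replace the random placeholders with one embedding
# per stimulus and a real (symmetric) matrix of pairwise similarity ratings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_items, n_features = 20, 64
rng = np.random.default_rng(0)

model_embeddings = rng.normal(size=(n_items, n_features))   # one vector per stimulus (placeholder)
human_similarity = rng.uniform(size=(n_items, n_items))     # pairwise ratings (placeholder)
human_similarity = (human_similarity + human_similarity.T) / 2  # symmetrise

# Model RDM: pairwise correlation distances between stimulus embeddings
# (pdist returns the condensed upper triangle in row-major order).
model_rdm = pdist(model_embeddings, metric="correlation")

# Human RDM: convert similarity to dissimilarity, keep the matching upper triangle.
human_rdm = (1 - human_similarity)[np.triu_indices(n_items, k=1)]

# Spearman correlation between the two RDMs is the standard RSA score.
rho, p = spearmanr(model_rdm, human_rdm)
print(f"RSA (Spearman rho) = {rho:.3f}, p = {p:.3g}")
```

The same template works for RTs: build a model-derived predictor per item (e.g., surprisal) and correlate it with mean RTs, rather than building RDMs.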