{ "cells": [ { "cell_type": "markdown", "id": "8a806e2e", "metadata": {}, "source": [ "# Project Inspiration\n", "\n", "Below, you will find some resources useful to find inspiration for your projects, divided by topics. \n", "\n", "⚠️ Note: The content on this page may be slightly biased towards the expertise of your TA, and is just indicative. Cognitive modelling is much broader, and you are welcome to explore other directions. Whether you stick to the suggestions below or [go your own way](https://youtu.be/oiosqtFLBBA?si=hxGdMeRHR1KsYG4J), please make sure to check your project plans with the TA or course coordinator.\n", "\n", "## Language\n", "\n", "Here are some recent, interesting papers about modelling human language with deep-learning models:\n", "\n", "* [_A two-dimensional space of linguistic representations shared across individuals_](https://pmc.ncbi.nlm.nih.gov/articles/PMC12139866/), Tuckute et al., 2025;\n", "* [_Distributed sensitivity to syntax and semantics throughout the language network_](https://direct.mit.edu/jocn/article/36/7/1427/120796/Distributed-Sensitivity-to-Syntax-and-Semantics), Shain et al., 2024;\n", "* [_Hierarchical dynamic coding coordinates speech comprehension in the human brain_](https://pmc.ncbi.nlm.nih.gov/articles/PMC11042271/), Gwilliams et al., 2025;\n", "* [_Large language models can segment narrative events similarly to humans_](https://link.springer.com/article/10.3758/s13428-024-02569-z), Michelmann et al., 2025.\n", "\n", "Besides these specific papers, you may be interested in the work conducted in the labs by these professors:\n", "* [Evelina Fedorenko](https://www.evlab.mit.edu/), MIT;\n", "* [Jean-Rémi King](https://kingjr.github.io/), Meta AI;\n", "* [David Poeppel](https://wp.nyu.edu/poeppellab/), NYU;\n", "* [Mariya Toneva](https://mtoneva.com/group.html), MPI Saarbrücken;\n", "* [Jack Gallant](https://gallantlab.org/), UC Berkley.\n", "\n", "\n", "## Vision\n", "\n", "Recent interesting papers:\n", "* [_A large-scale examination of inductive biases shaping high-level visual representation in brains and machines_](https://www.nature.com/articles/s41467-024-53147-y), Conwell et al., 2024;\n", "* [_Dimensions underlying the representational alignment of deep neural networks with humans_](https://www.nature.com/articles/s42256-025-01041-7), Mahner et al., 2025;\n", "* [_Brain diffusion for visual exploration: Cortical discovery using large scale generative models_](https://proceedings.neurips.cc/paper_files/paper/2023/hash/ef0c0a23a1a8219c4fc381614664df3e-Abstract-Conference.html), Luo et al., 2023;\n", "* [_Assessing the Alignment of Popular CNNs to the Brain for Valence Appraisal_](https://arxiv.org/abs/2509.21384), 2025.\n", "\n", "Influential labs:\n", "* [Nikolaus Kriegeskorte](https://kriegeskortelab.zuckermaninstitute.columbia.edu/people/nikolaus-kriegeskorte), Columbia University;\n", "* [Talia Konkle](https://konklab.fas.harvard.edu/), Harvard University;\n", "* [Leila Wehbe](http://cs.cmu.edu/~lwehbe/group.html), Carnergie Mellon;\n", "* [Martin Hebart](https://hebartlab.com/), Justus Liebig University and MPI Leipzig;\n", "* [Hans op de Beeck](https://www.hoplab.be/), KU Leuven;\n", "* [Iris Groen](https://sites.google.com/view/irisgroen), UvA.\n", "\n", "## Multimodality – Vision and Language\n", "\n", "Interesting papers:\n", "* [_Modality-Agnostic Decoding of Vision and Language from fMRI_](https://www.biorxiv.org/content/10.1101/2025.06.08.658221v2.abstract), Nikolaus et al., 2025;\n", "* [_Language models align with 
brain regions that represent concepts across modalities_](https://arxiv.org/abs/2508.11536), Ryskina et al., 2025;\n", "* [_Revealing Vision-Language Integration in the Brain with Multimodal Networks_](https://proceedings.mlr.press/v235/subramaniam24a.html), Subramaniam et al., 2024;\n", "* [_Better models of human high-level visual cortex emerge from natural language supervision with a large and diverse dataset_](https://www.nature.com/articles/s42256-023-00753-y).\n", "\n", "## Tips\n", "* **Don't panic if you can't test SOTA models.** Unfortunately, SOTA AI models are quite computationally intensive to run, and most often don't fit within the Colab free-tier compute budget. The good news is that using large, SOTA models is not necessarily the main goal of cognitive modelling. In fact, using \"older\" models may even be a preferable option, because you can interpret your results by leveraging existing interpretability and cognitive-modelling studies.\n", "\n", "* **If brain responses are too complex to work with, go for behavioural data.** Most of the studies mentioned above use brain data, which is extremely exciting (but also painful) to work with. If you struggle to work with brain data (too large, not preprocessed, badly documented), try to use behavioural data, such as reaction times (RTs) or similarity judgments—they are usually easier to work with and often less noisy than neural data. A minimal example of this route is sketched in the code cell below.\n" ] }
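, { "cell_type": "markdown", "id": "rsa-sketch-leadin", "metadata": {}, "source": [ "Below is a minimal, non-authoritative sketch of that behavioural-data route: comparing a model's representational geometry with human similarity judgments via representational similarity analysis (RSA). The arrays `model_embeddings` and `human_similarity` are random placeholders, not real data; swap in activations from whichever model you use and averaged judgments from your behavioural dataset.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "rsa-sketch-code", "metadata": {}, "outputs": [], "source": [ "# Minimal RSA sketch: does the model's similarity structure match human judgments?\n", "import numpy as np\n", "from scipy.spatial.distance import pdist, squareform\n", "from scipy.stats import spearmanr\n", "\n", "rng = np.random.default_rng(0)\n", "model_embeddings = rng.normal(size=(20, 64))   # placeholder: n_items x n_features model activations\n", "human_similarity = rng.uniform(size=(20, 20))  # placeholder: n_items x n_items similarity judgments\n", "human_similarity = (human_similarity + human_similarity.T) / 2  # symmetrise the judgment matrix\n", "\n", "# Model representational dissimilarity matrix (RDM): pairwise correlation distance\n", "model_rdm = squareform(pdist(model_embeddings, metric=\"correlation\"))\n", "# Human RDM: turn similarities into dissimilarities\n", "human_rdm = 1 - human_similarity\n", "\n", "# Correlate the upper triangles (RDMs are symmetric and the diagonal is uninformative)\n", "triu = np.triu_indices_from(model_rdm, k=1)\n", "rho, p = spearmanr(model_rdm[triu], human_rdm[triu])\n", "print(f\"Model-human RSA (Spearman rho): {rho:.3f}, p = {p:.3g}\")\n" ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 5 }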