Encoding and Decoding Language in the Brain with Language Models

🧠 Brain-LM Alignment

March 29, 2026 | EACL 2026 | Rabat, Morocco

Overview

This EACL 2026 tutorial will cover the foundations of brain–language model alignment and will then explore recent advances: scaling laws of language models for brain alignment, multilingual brain encoding, fine-tuning language models with brain data, and brain decoding with language models, including semantic reconstruction of continuous language from brain recordings. Participants will gain an overview of current naturalistic datasets, computational frameworks, and methods driving the emerging field of NeuroAI.

Learning Objectives

  • Understand the fundamental concepts of brain-AI alignment and encoding models
  • Learn methodologies for comparing brain activity with model representations
  • Gain insights into multilingual processing in both human brains and language models
  • Master techniques for brain-informed model fine-tuning and evaluation
  • Discover practical applications and future research directions in Neuro-AI

Target Audience

AI/ML Researchers · Neuroscience Students · NLP Practitioners · Computer Vision Researchers · Cognitive Scientists · PhD Students · Industry Professionals
This tutorial is designed for researchers and practitioners with:

  • Basic understanding of deep learning and neural networks
  • Familiarity with transformer-based language models
  • Interest in interdisciplinary approaches to AI

Our material assumes a basic knowledge of machine learning (e.g., foundations of linear models, regression, basic probability, and algebra) and deep learning, including familiarity with concepts such as word embeddings, Transformer models, and backpropagation. Some exposure to cognitive science or neuroscience, as well as programming experience in Python with libraries like PyTorch or scikit-learn, will also be helpful, though not strictly required.

Tutorial Schedule

Time Topic Speaker
9:00 - 9:45 Introduction to Brain Encoding and Decoding
• Brain encoding modeling framework
• Experimental design: naturalistic language experiments
• Feature spaces for language experiments
• Model fitting and visualization on brain flatmaps
Christine Tseng
9:45 - 10:30 Representations from LLMs for brain encoding models
• Overview of contextual embeddings from LLMs
• Contextual embeddings for brain encoding models
• Analysis of the architectural features of LLMs that predict brain activity
• Embeddings from multilingual LLMs
Mathis Lamarre
10:30 - 11:00 Coffee Break & Networking
11:00 - 11:40 Fine-tuning and scaling laws for language models with brain data
• Fine-tuning with monolingual brain data
• Fine-tuning with bilingual brain data
• Scaling laws for language models with brain data
Anuja Negi
11:40 - 12:20 Linguistic Brain Decoding
• Semantic reconstruction of continuous language from non-invasive brain recordings
• Auditory reconstruction from non-invasive brain recordings
Subba Reddy Oota
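
As a concrete preview of the encoding framework covered in the first two sessions, the sketch below fits a voxelwise ridge regression from LLM contextual embeddings to fMRI responses and scores it with held-out correlation. It is a minimal, hedged illustration rather than the tutorial's actual pipeline: the stimulus sentences and the random "fMRI" matrix are stand-ins, and the features are simply mean-pooled GPT-2 last-layer states.

```python
# Minimal brain-encoding sketch (illustrative only): predict voxel responses
# from LLM embeddings with ridge regression. The sentences below and the
# random "fMRI" matrix are placeholders, not real tutorial data.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

sentences = [
    "The dog chased the ball across the yard.",
    "She read the letter slowly by the window.",
    "Rain fell on the quiet mountain village.",
    "The scientists measured the signal carefully.",
] * 25  # 100 stand-in stimuli

# 1) Contextual embeddings: mean-pool GPT-2's last hidden layer per sentence.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModel.from_pretrained("gpt2")
lm.eval()
feats = []
with torch.no_grad():
    for s in sentences:
        ids = tok(s, return_tensors="pt")
        hidden = lm(**ids).last_hidden_state        # (1, n_tokens, 768)
        feats.append(hidden.mean(dim=1).squeeze(0).numpy())
X = np.stack(feats)                                 # (n_stimuli, 768)

# 2) Simulated voxel responses: one row per stimulus, one column per voxel.
rng = np.random.default_rng(0)
Y = X @ rng.normal(size=(X.shape[1], 200)) + rng.normal(size=(len(X), 200))

# 3) Fit a voxelwise encoding model and evaluate with held-out correlation.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
enc = Ridge(alpha=10.0).fit(X_tr, Y_tr)
pred = enc.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"mean held-out voxel correlation: {np.mean(r):.3f}")
```

In practice, encoding pipelines use naturalistic story stimuli, temporally resolved features aligned to the hemodynamic response, and cross-validated regularization per voxel; the sketch only conveys the feature-to-voxel regression idea.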

Instructors

Anuja Negi

Anuja Negi

PhD candidate · TU Berlin

Website · Scholar · Twitter

Anuja Negi is a PhD candidate at the Cognitive Computing in Biological and Artificial Systems Lab at TU Berlin, Germany, and is affiliated with the Bernstein Center for Computational Neuroscience (BCCN) Berlin. She received her Master’s degree in Computational Neuroscience jointly from Technische Universität Berlin and Humboldt-Universität zu Berlin. Her research focuses on language comprehension and learning, bridging neuroscience and artificial intelligence and using insights from one to better understand and improve the other. Her work has appeared at the Conference on Neural Information Processing Systems (NeurIPS), the Society for Neuroscience (SfN), Cognitive Computational Neuroscience (CCN), the Society for the Neurobiology of Language (SNL), and the Organization for Human Brain Mapping (OHBM), as well as in the journal NeuroImage.

Mathis Lamarre

Mathis Lamarre

PhD candidate · TU Berlin

Website · Scholar

Mathis Lamarre is a PhD candidate at the Cognitive Computing in Biological and Artificial Systems Lab at TU Berlin, Germany, and is affiliated with the Bernstein Center for Computational Neuroscience (BCCN) Berlin. He received his Master’s degree in Computational Science and Engineering from ETH Zürich and his Bachelor’s degree in Life Sciences and Technology from EPF Lausanne. He is interested in using tools from natural language processing to better understand language processing in the brain, in particular across languages. His work has appeared at the Organization for Human Brain Mapping (OHBM), Cognitive Computational Neuroscience (CCN), and Empirical Methods in Natural Language Processing (EMNLP).

Dr. Christine Tseng

Dr. Christine Tseng

Postdoctoral Researcher · TU Berlin

Website · Scholar

Dr. Christine Tseng is a Postdoctoral Researcher at TU Berlin, Germany, and completed their Ph.D. at the Helen Wills Neuroscience Institute at UC Berkeley, USA. Their research interests include language processing in the brain, social neuroscience, brain encoding models, and using AI/ML to better understand the brain. Their work has appeared at CogSci, the Society for Neuroscience (SfN), Cognitive Computational Neuroscience (CCN), and the Society for the Neurobiology of Language (SNL), and in the Journal of Neuroscience.

Dr. Subba Reddy Oota

Dr. Subba Reddy Oota

Postdoctoral Researcher · TU Berlin

Website · Scholar · Twitter

Dr. Subba Reddy Oota is a Postdoctoral Researcher at TU Berlin, Germany, and completed his Ph.D. at Inria Bordeaux, France. His research focuses on language analysis in the brain, brain encoding/decoding, multimodal information processing, and interpreting AI models. He has published at top venues including NeurIPS, ICLR, ACL, EMNLP, NAACL, INTERSPEECH, and TMLR.

Tutorial Materials

📚 Resources

All tutorial materials will be made available before the conference and will remain accessible afterwards.

Key Papers & References

This tutorial draws on cutting-edge research, including:

Brain Encoding & Alignment

  • Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). NeurIPS 2019.
    Toneva & Wehbe, 2019. [PDF]
  • The neural architecture of language: Integrative modeling converges on predictive processing. PNAS 2021.
    Schrimpf et al., 2021. [PDF]
  • Language processing in brains and deep neural networks: computational convergence and its limits. Communications Biology 2022.
    Caucheteux & King, 2022. [PDF]
  • Scaling laws for language encoding models in fMRI. NeurIPS 2023.
    Antonello et al., 2023. [PDF] [code]
  • Joint processing of linguistic properties in brains and language models. NeurIPS 2023.
    Oota et al., 2023. [PDF]
  • Speech language models lack important brain-relevant semantics. ACL 2024.
    Oota et al., 2024. [PDF]
  • Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey). TMLR 2025.
    Oota et al., 2025. [PDF]
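
Complementing the encoding direction above, the decoding direction (covered in the last session and in the survey listed above) maps brain responses back toward stimulus representations. The sketch below is a minimal, hedged illustration under assumed placeholder data: it learns a linear map from simulated voxel responses to sentence embeddings and identifies held-out stimuli by nearest-neighbor search. Real decoding pipelines, such as continuous language reconstruction, are considerably more involved.

```python
# Minimal brain-decoding sketch: map voxel responses to an embedding space,
# then identify each held-out stimulus by nearest neighbour.
# All matrices here are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_stim, n_vox, emb_dim = 120, 200, 64

E = rng.normal(size=(n_stim, emb_dim))                    # stand-in sentence embeddings
B = E @ rng.normal(size=(emb_dim, n_vox)) \
    + 0.5 * rng.normal(size=(n_stim, n_vox))              # simulated voxel responses

train, test = np.arange(100), np.arange(100, 120)

# Learn a voxels -> embedding map on the training stimuli.
dec = Ridge(alpha=1.0).fit(B[train], E[train])
E_hat = dec.predict(B[test])                              # decoded embeddings, held-out trials

# Identify each held-out stimulus among the 20 test candidates (cosine similarity).
def normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

sims = normalize(E_hat) @ normalize(E[test]).T            # (20, 20): decoded vs. true
top1 = (sims.argmax(axis=1) == np.arange(len(test))).mean()
print(f"top-1 identification accuracy on held-out trials: {top1:.2f}")
```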

Brain-Informed Models

  • Brain-Informed Fine-Tuning for Improved Multilingual Understanding in Language Models. NeurIPS 2025.
    Negi et al., 2025. [PDF]
  • Improving Semantic Understanding in Speech Language Models via Brain-tuning. ICLR 2025.
    Moussa et al., 2025. [PDF]
  • BrainWavLM: Fine-tuning Speech Representations with Brain Responses to Language. arXiv 2025.
    Vattikonda et al., 2025. [PDF]
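
A common recipe in this brain-informed fine-tuning line of work is to place a lightweight regression head on top of a pretrained language model and train the model to predict recorded brain responses. The PyTorch sketch below shows only that core idea; the GPT-2 backbone, the sentences, and the random fMRI targets are assumptions for illustration, and it is not the exact procedure of any of the papers listed above.

```python
# Hedged sketch of "brain-tuning": fine-tune a pretrained LM together with a
# linear head that predicts brain responses. The fMRI targets are placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModel.from_pretrained("gpt2").to(device)
n_voxels = 200
brain_head = nn.Linear(lm.config.hidden_size, n_voxels).to(device)

sentences = ["The dog chased the ball.", "She read the letter slowly."]   # stand-in stimuli
targets = torch.randn(len(sentences), n_voxels, device=device)            # stand-in fMRI

opt = torch.optim.AdamW(list(lm.parameters()) + list(brain_head.parameters()), lr=1e-5)
loss_fn = nn.MSELoss()

lm.train()
for step in range(3):  # a few illustrative optimization steps
    opt.zero_grad()
    preds = []
    for s in sentences:
        ids = tok(s, return_tensors="pt").to(device)
        h = lm(**ids).last_hidden_state.mean(dim=1)    # (1, hidden_size)
        preds.append(brain_head(h))                    # (1, n_voxels)
    loss = loss_fn(torch.cat(preds), targets)
    loss.backward()
    opt.step()
    print(f"step {step}: brain-prediction loss = {loss.item():.3f}")
```

In published variants, the brain-prediction objective is typically combined with the original language-modeling objective or applied to selected layers only; the sketch leaves those design choices out.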

Applications & Impact

Research Applications

  • Developing more human-like AI systems
  • Understanding model interpretability through neuroscience
  • Designing better evaluation metrics
  • Cross-modal learning and transfer

Industry Applications

  • Enhanced multilingual systems for global markets
  • Improved human-AI interaction
  • Brain-inspired architectures for edge devices
  • Cognitive load optimization in interfaces

Prerequisites

Technical Background:

  • Familiarity with basic algebra
  • Familiarity with PyTorch or TensorFlow
  • Basic knowledge of transformers (BERT, GPT, etc.)
  • Understanding of basic machine learning concepts

No Prior Neuroscience Background Required!

  • All computational cognitive neuroscience concepts will be introduced from scratch
  • Intuitive explanations with visual brain maps
  • Focus on bridging the two systems

Registration & Contact

Tutorial Registration: Registration will be through the main EACL-2026 conference website.

Questions? Contact the instructors:

Stay Updated

Follow for updates on:

  • Tutorial materials release
  • Additional resources
  • Post-tutorial discussions

🎯 Why Attend This Tutorial?

  • Cutting-Edge Research: Learn about the latest advances in brain-AI alignment from 2023-2025
  • Interdisciplinary Insights: Bridge neuroscience and AI for novel research directions
  • Networking: Connect with researchers at the forefront of Neuro-AI
  • Career Development: Open doors to emerging opportunities in brain-inspired AI
---

Looking forward to seeing you at EACL 2026! 🧠🤖