Introduction

The Brain Language Model (BrainLM) is a foundation model for brain activity dynamics, trained with self-supervised masked prediction. Developed by the van Dijk Lab at Yale University, it is pretrained on large-scale fMRI recordings to learn general representations of brain activity and to predict future brain states. BrainLM supports fine-tuning for clinical variable prediction and zero-shot inference of functional brain networks.

Architecture

BrainLM is based on a Vision Transformer Masked Autoencoder (ViTMAE) architecture. It is trained on fMRI recordings with a focus on self-supervised learning through masked prediction, which enables the model to generate interpretable representations and simulate brain responses to perturbations.
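As a rough illustration of the masked-prediction setup (not the repository's actual code), the sketch below splits a parcels-by-time matrix into temporal patches, hides a fraction of them, and scores reconstruction with MSE on the hidden patches only. The patch length and the zero "prediction" are placeholder assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recording: 424 parcels (AAL-424 layout) x 200 timepoints at ~1 Hz.
signal = rng.standard_normal((424, 200))

# Split each parcel's time series into non-overlapping temporal patches.
patch_len = 20                                   # assumed, not BrainLM's value
patches = signal.reshape(424, -1, patch_len)     # (424, 10, 20)

# Hide 75% of the patches from the encoder (one of the masking ratios used).
mask = rng.random(patches.shape[:2]) < 0.75      # True = masked

# A trained decoder would fill in the hidden patches; zeros stand in here
# so the masked-MSE objective can be shown.
prediction = np.zeros_like(patches)
masked_mse = np.mean((patches[mask] - prediction[mask]) ** 2)
```

The key point is that the loss is computed only over the masked patches, so the model must infer hidden activity from the visible context rather than copy its input.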

Training

  • Data Sources:

    • UK Biobank: 76,296 recordings (~6450 hours)
    • Human Connectome Project: 1002 recordings (~250 hours)
  • Preprocessing:

    • Motion correction, normalization, temporal filtering, and ICA denoising
    • Brain divided into 424 regions using the AAL-424 atlas
    • Temporal resolution ~1 Hz
  • Training Details:

    • Pretraining for 100 epochs with a batch size of 512
    • Masking ratios: 20%, 75%, and 90%
    • Objective: Minimize mean squared error between original and predicted parcels
  • Data Split:

    • 80% training, 10% validation, 10% test from UKB; HCP used for testing
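The 80/10/10 split over the UK Biobank recordings can be sketched as follows; integer arithmetic keeps the counts exact, though the paper's actual split assignment is of course not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_recordings = 76_296                  # UK Biobank recordings
idx = rng.permutation(n_recordings)    # shuffle before splitting

n_train = n_recordings * 8 // 10       # 80% of recordings
n_val = n_recordings // 10             # 10% of recordings
train, val, test = np.split(idx, [n_train, n_train + n_val])
# The remainder falls to the held-out test split; HCP is a separate test set.
```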

Guide: Running Locally

  1. Clone the Repository:
    git clone https://github.com/vandijklab/BrainLM

  2. Install Dependencies:
    Navigate to the cloned directory and install the required packages.

  3. Download Data:
    Obtain fMRI datasets, such as those from the UK Biobank or HCP.

  4. Preprocess Data:
    Follow preprocessing steps for fMRI data as outlined.

  5. Run Training:
    Execute the training scripts provided in the repository.

  6. Cloud GPUs:
    Consider using cloud services like AWS, Google Cloud, or Azure for access to powerful GPUs.
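To make step 5 concrete, here is a minimal numpy caricature of one optimization step on the masked-MSE objective, with a single linear layer standing in for the ViT-MAE. All sizes, the masking layout, and the learning rate are illustrative assumptions, not values from the repository; the actual training uses the repo's own scripts and model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each sample is a short parcel time series split into patches; masked
# patches are zeroed, and a linear layer predicts the full series from
# the visible patches. Loss is MSE over the masked entries only.
n_patches, patch_len, batch = 10, 20, 64
d = n_patches * patch_len
W = 0.01 * rng.standard_normal((d, d))           # stand-in for the ViT-MAE

x = rng.standard_normal((batch, d))              # flattened patch sequences
patch_mask = rng.random((batch, n_patches)) < 0.75   # hide 75% of patches
mask = np.repeat(patch_mask, patch_len, axis=1)  # expand to per-entry mask

x_in = np.where(mask, 0.0, x)                    # masked patches hidden
recon = x_in @ W
err = np.where(mask, recon - x, 0.0)             # score masked entries only
denom = mask.sum()
loss = (err ** 2).sum() / denom

grad = x_in.T @ (2.0 * err / denom)              # d(loss)/dW by chain rule
W -= 0.5 * grad                                  # one gradient step

loss_after = (np.where(mask, x_in @ W - x, 0.0) ** 2).sum() / denom
```

A single step already lowers the masked reconstruction error slightly; the real pretraining repeats this over 100 epochs with batches of 512 on GPUs.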

License

The BrainLM model is released under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. You may copy and share the material for non-commercial purposes, provided you give appropriate credit, link to the license, and indicate if changes were made. You may not distribute modified versions of the material. Full terms are available on the Creative Commons website.