Meta-learning enables brain decoding models to generalize to new subjects from only a handful of examples, eliminating the need for expensive per-subject training while maintaining accuracy across different brains and scanners.
This paper presents a meta-learning approach that decodes visual information from functional MRI (fMRI) scans without requiring subject-specific training. By conditioning on just a few examples of a new person's brain activity, the model adapts to their unique neural patterns and can decode what they are seeing, all without fine-tuning or anatomical alignment.
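To make the conditioning idea concrete, here is a minimal sketch of few-shot decoding without fine-tuning. The paper's actual architecture is not specified in this summary, so this illustration assumes a hypothetical prototype-style decoder (`decode_with_support`) operating on raw voxel vectors: the new subject's support examples define one prototype per visual category, and query scans are classified by nearest prototype, with no gradient updates on the new subject.

```python
import numpy as np

def decode_with_support(support_x, support_y, query_x):
    """Classify a new subject's query scans by the nearest class prototype
    computed from that subject's few labeled support scans.
    No parameters are updated, i.e. no per-subject fine-tuning."""
    classes = np.unique(support_y)
    # One prototype per visual category: mean of that category's support scans.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from each query scan to each prototype.
    dists = np.linalg.norm(
        query_x[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]

# Synthetic demo: 50-voxel "scans" drawn around 3 category centers.
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 50))
support_x = np.concatenate(
    [c + 0.1 * rng.normal(size=(5, 50)) for c in centers]
)
support_y = np.repeat(np.arange(3), 5)  # 5 support scans per category
query_x = centers + 0.1 * rng.normal(size=(3, 50))
print(decode_with_support(support_x, support_y, query_x))  # → [0 1 2]
```

In a real system the raw voxel vectors would be replaced by learned embeddings trained episodically across many subjects, which is what lets the conditioning step absorb inter-subject variability.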