Postdoc, Columbia University
3 papers at NeurIPS 2025
We meta-learn a transformer-based, in-context-learning fMRI encoder of visual cortex that adapts to new human subjects without any fine-tuning.
We built a transformer-based model to predict whole-brain activity from visual input, then used it to label the categorical selectivity of areas beyond the visual cortex, shedding light on higher-order visual processing.
We present a transformer brain encoder that achieves state-of-the-art performance by leveraging a brain-region-to-image-feature cross-attention mechanism, efficiently mapping high-dimensional retinotopic features to brain areas.