Assistant Professor, University of Hong Kong
4 papers at NeurIPS 2025
We improve Vision Transformer dense representations via self-distillation.
We meta-learn a transformer-based in-context learning encoder for the fMRI visual cortex that can adapt to new human subjects without any fine-tuning.
We built a transformer-based model to predict whole-brain activity from visual input, then used it to label the categorical selectivity of areas beyond the visual cortex, shedding light on higher-order visual processing.