PhD student, Columbia University
2 papers at NeurIPS 2025
A two-system RNN achieves continual learning: a 'what' system infers the compositional structure of tasks, while a 'how' system composes low-rank network components, demonstrating transfer and compositional generalization.
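As an illustrative sketch only (the component library, sizes, and mixing coefficients below are hypothetical, not taken from the paper), composing low-rank components can be pictured as a weighted sum of rank-r factor pairs selected per task:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, K = 16, 2, 3   # hidden size, rank per component, number of components (assumed)

# library of low-rank recurrent components (stand-in for the 'how' system)
U = rng.standard_normal((K, n, r))
V = rng.standard_normal((K, n, r))

# task-specific mixing coefficients (stand-in for the 'what' system's inference)
alpha = np.array([0.7, 0.0, 0.3])

# composed recurrent weight matrix: a weighted sum of rank-r components
W = sum(alpha[k] * U[k] @ V[k].T for k in range(K))
print(W.shape)  # (16, 16)
```

Because only the coefficients change across tasks while the components are shared, a new task can reuse previously learned components, which is one way to get transfer without overwriting old solutions.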
We present a transformer brain encoder that achieves state-of-the-art performance by leveraging a brain-region-to-image-feature cross-attention mechanism, efficiently mapping high-dimensional retinotopic image features to brain areas.
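A minimal sketch of the cross-attention idea, with hypothetical shapes and names (per-region learned queries attending over image patch features; none of these details are taken from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_regions, n_patches, d = 8, 196, 32   # assumed sizes

region_queries = rng.standard_normal((n_regions, d))  # one learned query per brain region
image_feats    = rng.standard_normal((n_patches, d))  # e.g. patch embeddings from a vision backbone

# cross-attention: each brain-region query attends over all image features
scores  = region_queries @ image_feats.T / np.sqrt(d)  # (n_regions, n_patches)
weights = softmax(scores, axis=-1)                     # each row sums to 1
region_repr = weights @ image_feats                    # (n_regions, d)
print(region_repr.shape)  # (8, 32)
```

The point of this routing is that each region pools only the image features it attends to, rather than regressing every voxel against the full high-dimensional feature map.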