PhD student, Cornell University
2 papers at NeurIPS 2025
We introduce the first method for translating text embeddings from one vector space to another without any paired data, encoders, or predefined sets of matches.
This paper introduces PILS, a novel language model inversion method that exploits the low-dimensionality of next-token distributions to losslessly compress them across multiple generation steps, markedly improving prompt recovery.