Associate Professor, Tel Aviv University
3 papers at NeurIPS 2025
We introduce EG-CFG, a code generation method that injects real-time execution feedback into LLM inference to guide line-by-line generation, significantly improving the accuracy of the generated code.
We show that prior LRP-based explainability methods for Transformers overlook positional encoding, and we propose a new approach that propagates relevance through positional components, yielding substantial gains on both NLP and vision explainability benchmarks.