4 papers across 3 sessions
We evaluate Rescaled Influence Functions (RIF), a fast and accurate alternative to traditional influence functions for data attribution, particularly effective in high-dimensional settings where standard influence methods fail.
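For context, classic influence functions approximate the effect of removing a training point on a test loss as (1/n)·∇ℓ_test ᵀ H⁻¹ ∇ℓ_i, where H is the Hessian of the regularized training objective; RIF rescales this estimate, but the sketch below shows only the classic baseline it builds on. All names, the ridge model, and the synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Classic influence functions for ridge regression (illustrative baseline,
# not the RIF variant itself). Synthetic data; all names are assumptions.
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Objective: (1/n)||X theta - y||^2 + lam ||theta||^2
theta = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
H = 2 * (X.T @ X / n + lam * np.eye(d))      # Hessian of the objective

x_test, y_test = rng.normal(size=d), 0.0
grad_test = 2 * (x_test @ theta - y_test) * x_test   # d loss_test / d theta
grads_train = 2 * (X @ theta - y)[:, None] * X       # per-point gradients

# Predicted change in test loss if training point i is removed:
# (1/n) * grad_test^T H^{-1} grad_i, computed for all i at once.
removal_effect = (grads_train @ np.linalg.solve(H, grad_test)) / n
```

Because the ridge objective is quadratic, these estimates track exact leave-one-out retraining closely; the high-dimensional regime where this approximation degrades is the setting the paper targets.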
We revisit the texture-bias hypothesis in CNNs with a domain-agnostic suppression protocol, finding that, contrary to prior claims, CNNs rely primarily on local shape rather than texture features.
We propose Learning to Focus (LeaF), which identifies and masks confounding tokens via gradient-based comparisons, thereby improving long-context reasoning accuracy and interpretability in large language models.
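The core idea — scoring tokens by how the loss gradient responds to them, then masking the suspected confounders — can be sketched in a toy form. This is not the paper's actual criterion (LeaF compares gradients across settings); the tiny attention scorer, finite-difference gradients, and all names below are illustrative assumptions.

```python
import numpy as np

# Toy gradient-based token attribution: score each token embedding by the
# magnitude of the loss gradient w.r.t. it, then zero out the top-k tokens
# as hypothetical confounders. Everything here is a minimal illustration.
rng = np.random.default_rng(1)
T, d = 8, 16                        # tokens, embedding dim
E = rng.normal(size=(T, d))         # token embeddings
q = rng.normal(size=d)              # attention query (fixed, illustrative)
w = rng.normal(size=d)              # scoring weights

def loss(E_):
    a = np.exp(E_ @ q)
    a = a / a.sum()                 # softmax attention over tokens
    z = a @ (E_ @ w)                # attended score
    return np.logaddexp(0.0, -z)    # logistic loss for target y = 1

# Finite-difference gradient of the loss w.r.t. each token embedding.
eps = 1e-5
token_grads = np.zeros_like(E)
for t in range(T):
    for j in range(d):
        Ep = E.copy(); Ep[t, j] += eps
        Em = E.copy(); Em[t, j] -= eps
        token_grads[t, j] = (loss(Ep) - loss(Em)) / (2 * eps)

saliency = np.linalg.norm(token_grads, axis=1)   # per-token score
k = 2
confounders = np.argsort(saliency)[-k:]          # top-k scored tokens
E_masked = E.copy()
E_masked[confounders] = 0.0                      # mask suspected confounders
```

In an actual LLM the gradients would come from backpropagation rather than finite differences, and the masking decision would use the paper's comparison-based criterion rather than a raw magnitude threshold.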