3 papers across 3 sessions
A comprehensive empirical study of how coreset selection methods affect the bias and group robustness of downstream models.
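Group robustness in this setting is usually reported as worst-group accuracy. A minimal sketch of that metric (the function name, array layout, and the `select_coreset`/training steps implied around it are my own illustration, not the paper's pipeline):

```python
import numpy as np

def worst_group_accuracy(preds: np.ndarray, labels: np.ndarray, groups: np.ndarray) -> float:
    """Minimum per-group accuracy: the standard group-robustness metric."""
    return min(
        (preds[groups == g] == labels[groups == g]).mean()
        for g in np.unique(groups)
    )

# Example: group 0 has accuracy 0.5, group 1 has 1.0, so the score is 0.5.
print(worst_group_accuracy(np.array([1, 0, 1]), np.array([1, 1, 1]), np.array([0, 0, 1])))
```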
This study introduces a novel causality-driven robust optimization approach that selectively updates the model components most sensitive to causal reasoning, strengthening the model's causal behavior while preserving valuable pretrained knowledge to mitigate overfitting.
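A minimal sketch of the selective-update idea, assuming sensitivity is proxied by gradient magnitude on causal-reasoning probes; the ranking rule, `causal_loss_fn`, and `keep_ratio` are illustrative assumptions, not the paper's exact procedure:

```python
import torch

def freeze_insensitive_params(model: torch.nn.Module, causal_loss_fn, keep_ratio: float = 0.3):
    """Keep only the most causally sensitive parameter tensors trainable."""
    model.zero_grad()
    causal_loss_fn(model).backward()             # loss on causal-reasoning probes
    # Rank parameter tensors by mean absolute gradient as a sensitivity proxy.
    sens = {name: p.grad.abs().mean().item()
            for name, p in model.named_parameters() if p.grad is not None}
    k = max(1, int(len(sens) * keep_ratio))
    cutoff = sorted(sens.values(), reverse=True)[k - 1]
    # Freeze everything below the cutoff so pretrained knowledge is preserved.
    for name, p in model.named_parameters():
        p.requires_grad = sens.get(name, 0.0) >= cutoff
    model.zero_grad()
```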
We propose Learning to Focus (LeaF), which identifies and masks confounding tokens via gradient-based comparisons, thereby improving long-context reasoning accuracy and interpretability in large language models.
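A hedged sketch of the gradient-comparison idea: score each token by input-gradient saliency under two models and mask the tokens the student weights far more than a stronger reference. The pairing rule, `top_k` budget, and `mask_token_id` are illustrative assumptions, not LeaF's actual criterion:

```python
import torch

def confounder_mask(teacher, student, input_ids, labels, mask_token_id, top_k=8):
    """Mask tokens with the largest student-minus-teacher saliency gap."""
    def saliency(model):
        emb = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
        model(inputs_embeds=emb, labels=labels).loss.backward()
        return emb.grad.norm(dim=-1)             # (batch, seq_len) per-token score

    # Tokens the student relies on far more than the teacher are treated as
    # candidate confounders (an assumed reading of the comparison rule).
    gap = saliency(student) - saliency(teacher)
    masked = input_ids.clone()
    _, idx = gap.topk(top_k, dim=-1)
    masked.scatter_(1, idx, mask_token_id)       # replace confounders with a mask token
    return masked
```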