2 papers across 1 session
This paper's method improves visual prompting accuracy by applying affine and color transformations through TrivialAugment data augmentation, achieving state-of-the-art results with minimal overhead.
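A minimal sketch of the idea, not the paper's implementation: TrivialAugment (available in torchvision as `TrivialAugmentWide`) randomly samples one affine or color op and a magnitude per call, and only a learnable pixel frame, a common visual-prompting setup, is trained around a frozen backbone. The `PadPrompt` module and all hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import transforms
from torchvision.models import resnet18

class PadPrompt(nn.Module):
    """Hypothetical learnable pixel frame added to every input image."""
    def __init__(self, size=224, pad=16):
        super().__init__()
        self.size, self.pad = size, pad
        self.top = nn.Parameter(torch.zeros(3, pad, size))
        self.bottom = nn.Parameter(torch.zeros(3, pad, size))
        self.left = nn.Parameter(torch.zeros(3, size - 2 * pad, pad))
        self.right = nn.Parameter(torch.zeros(3, size - 2 * pad, pad))

    def forward(self, x):
        # Assemble the frame: learnable border, zero (untouched) interior.
        core = torch.zeros(3, self.size - 2 * self.pad,
                           self.size - 2 * self.pad, device=x.device)
        mid = torch.cat([self.left, core, self.right], dim=2)
        frame = torch.cat([self.top, mid, self.bottom], dim=1)
        return x + frame  # broadcasts over the batch dimension

model = resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)  # backbone stays frozen; only the prompt learns

prompt = PadPrompt()
opt = torch.optim.SGD(prompt.parameters(), lr=0.1)
augment = transforms.TrivialAugmentWide()  # one random affine/color op per call
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Dummy batch; TrivialAugmentWide expects uint8 tensor images.
images = torch.randint(0, 256, (8, 3, 224, 224), dtype=torch.uint8)
labels = torch.randint(0, 1000, (8,))

x = normalize(augment(images).float() / 255.0)  # augment, then normalize
loss = nn.functional.cross_entropy(model(prompt(x)), labels)
opt.zero_grad()
loss.backward()  # gradients flow only into the prompt pixels
opt.step()
```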
Influence Distillation is a mathematically justified data selection method for LLM fine-tuning that assigns optimal weights to training samples, achieving performance on par with or better than state-of-the-art selection methods while being substantially faster.
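The paper's exact formulation is not reproduced here; the sketch below shows the generic first-order influence idea such methods build on: each candidate sample is scored by the inner product between its gradient and the gradient of a small target set, so samples whose updates most reduce target loss receive the largest weights. The tiny linear model and all names are illustrative stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 2)  # stand-in for an LLM
loss_fn = nn.CrossEntropyLoss()

def flat_grad(loss):
    """Flatten d(loss)/d(params) into a single vector."""
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

# Candidate training pool and a small target (validation) set.
train_x, train_y = torch.randn(100, 16), torch.randint(0, 2, (100,))
targ_x, targ_y = torch.randn(10, 16), torch.randint(0, 2, (10,))

g_target = flat_grad(loss_fn(model(targ_x), targ_y))

# First-order influence score of each sample: <g_i, g_target>.
scores = torch.stack([
    flat_grad(loss_fn(model(train_x[i:i + 1]), train_y[i:i + 1])) @ g_target
    for i in range(len(train_x))
])

weights = torch.clamp(scores, min=0)           # keep only helpful samples
weights = weights / (weights.sum() + 1e-12)    # normalize to a distribution

# One weighted fine-tuning step using the derived sample weights.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
per_sample = nn.functional.cross_entropy(model(train_x), train_y,
                                         reduction="none")
loss = (weights * per_sample).sum()
opt.zero_grad()
loss.backward()
opt.step()
```

Computing per-sample gradients this way is the expensive step; the speedups claimed by such methods typically come from approximating these scores cheaply, which the sketch does not attempt.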