6 papers across 3 sessions
WhAM: a transformer model unifying generation, acoustic translation and classification of sperm whale vocalizations
We present conditions that preclude the existence of tight generalization bounds, alongside a stability condition that guarantees them.
Underwater refractive distortion removal via unsupervised learning of the gradient of the water surface.
We present findings on efficiently adapting large language models without fine-tuning.
We study fair classification when multiple classifiers compete, and show that even if each individual classifier is fair, the overall outcome may not be.