5 papers across 3 sessions
We propose BlockDecoder, a novel ASR decoder architecture that separates textual context building from audio-text integration, achieving a ~2x speed-up over traditional decoders with no performance degradation across datasets, languages, and tasks.
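A shape-level sketch of where such a speed-up could come from (an assumption about the design, not the paper's code): a traditional decoder layer interleaves self-attention over text with cross-attention to audio at every layer, while a decoder that separates the two phases builds textual context with text-only layers and integrates audio in a single fusion step, so cross-attention runs once instead of once per layer.

```python
import numpy as np

def attention(q, k, v):
    # Plain scaled dot-product attention (no heads, no projections).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v

def traditional_decoder(text, audio, n_layers=6):
    # Cross-attention to audio inside every layer.
    cross_calls = 0
    h = text
    for _ in range(n_layers):
        h = h + attention(h, h, h)          # self-attention over text
        h = h + attention(h, audio, audio)  # audio-text integration, per layer
        cross_calls += 1
    return h, cross_calls

def separated_decoder(text, audio, n_layers=6):
    # Textual context building first, audio-text integration once at the end.
    h = text
    for _ in range(n_layers):
        h = h + attention(h, h, h)          # text-only context building
    h = h + attention(h, audio, audio)      # single integration step
    return h, 1
```

With 6 layers the separated variant performs 1 cross-attention call instead of 6; the real architecture and its exact cost model are of course richer than this toy.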
We introduce a new SGD-based algorithm with delayed projection for training kernel machines that achieves performance comparable or superior to existing solvers while reducing training time from days to under an hour.
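A minimal sketch of the delayed-projection idea (on a toy constrained least-squares problem, not the paper's kernel-machine algorithm): classical projected SGD projects onto the constraint set after every stochastic step, whereas delayed projection applies the projection only every `proj_every` steps, saving per-step work.

```python
import numpy as np

def delayed_projection_sgd(A, b, radius=1.0, lr=0.01, steps=2000,
                           proj_every=50, seed=0):
    """Minimize mean 0.5*(a_i @ x - b_i)^2 subject to ||x||_2 <= radius,
    projecting only every `proj_every` steps instead of every step."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for t in range(steps):
        i = rng.integers(n)                       # one sampled row: stochastic gradient
        grad = (A[i] @ x - b[i]) * A[i]
        x -= lr * grad
        if (t + 1) % proj_every == 0:             # delayed projection onto the L2 ball
            norm = np.linalg.norm(x)
            if norm > radius:
                x *= radius / norm
    norm = np.linalg.norm(x)                      # final projection: output is feasible
    if norm > radius:
        x *= radius / norm
    return x
```

The intermediate iterates may leave the feasible set; only the returned point is guaranteed feasible.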
We construct a multi-set function that is both monotone and separable.
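An illustrative sketch of what these two properties mean (not the paper's construction): any multiset function of the form f(S) = sum over elements of g(x) times its multiplicity, with g nonnegative, is separable by construction (it decomposes element-wise over a disjoint union) and monotone (adding an element never decreases the value). The score `g` below is an arbitrary nonnegative choice.

```python
from collections import Counter

def f(multiset: Counter, g=lambda x: x * x + 1) -> int:
    """Separable multiset function: sum of nonnegative per-element scores,
    weighted by multiplicity. Monotone because every term is >= 0."""
    return sum(g(x) * count for x, count in multiset.items())
```

Monotonicity: f(S + {x}) >= f(S). Separability: f(S ⊎ T) = f(S) + f(T) for any two multisets.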
We show how perceived post-selection bias distorts strategic effort in merit-based selection, leading to disparities. Our model quantifies how interventions that adjust selectivity and perceived valuation gaps can reduce inequity.
We design inverted indices for graph-structured objects.
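A minimal sketch of an inverted index over graph-structured objects (hypothetical, not the paper's design): each graph is keyed by simple structural features, here labeled edges, and the index maps every feature to the posting set of graph ids containing it. A query intersects posting lists to obtain candidate graphs; a real system would use richer features (paths, subtrees) and verify candidates afterwards.

```python
from collections import defaultdict

def edge_features(graph):
    # graph: iterable of (label_u, label_v) edges; sort each pair so the
    # feature key is direction-insensitive.
    return {tuple(sorted(e)) for e in graph}

class GraphInvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # feature -> set of graph ids

    def add(self, gid, graph):
        for feat in edge_features(graph):
            self.postings[feat].add(gid)

    def candidates(self, query_graph):
        # Graphs containing every feature of the query (candidate set only;
        # exact containment would still need verification).
        feats = edge_features(query_graph)
        if not feats:
            return set()
        lists = [self.postings.get(f, set()) for f in feats]
        return set.intersection(*lists)
```

For example, indexing two small labeled graphs and querying by a single edge returns every graph whose posting lists cover the query's features.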