Poster Session 3
Thursday, December 4, 2025 · 11:00 AM → 2:00 PM
Exhibit Hall C,D,E
Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation is Wasteful
Poster #908 · Martin Marek, Sanae Lotfi, Aditya Somasundaram, Andrew Wilson, Micah Goldblum