Postdoc, Forschungszentrum Jülich
1 paper at NeurIPS 2025
We derive scaling laws to compare open language-vision foundation models (CLIP, MaMMUT) and datasets (DataComp-1.4B, Re-LAION-1.4B, DFN-1.4B), identifying which models and datasets promise stronger scalability during pre-training.