4 papers across 3 sessions
We generalize CLIP training to worldwide, web-scale data, gaining +0.8% over the English-only counterpart on zero-shot ImageNet classification (no compromise) and setting SoTA on zero-shot multilingual benchmarks: 57.4% on CVQA and 50.2% on Babel-ImageNet.
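For context, zero-shot classification with a CLIP-style model embeds the image and a set of class-name prompts into a shared space and picks the most similar prompt. A minimal sketch using the Hugging Face transformers CLIP API; the checkpoint name, image path, class names, and prompt template are illustrative placeholders, not the paper's worldwide model or setup:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-base-patch32"  # placeholder public checkpoint
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)

image = Image.open("example.jpg")  # hypothetical input image
class_names = ["tabby cat", "golden retriever", "sports car"]
prompts = [f"a photo of a {c}" for c in class_names]  # simple prompt template

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarities scaled by the learned temperature;
# softmax over the prompts yields a distribution over class names.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(class_names, probs[0].tolist())))
```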
We propose a new framework and set of evaluation criteria to assess the utility of text embeddings used in data selection for pretraining language models.
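To make the object of evaluation concrete, embedding-based data selection typically scores each candidate document by similarity to some target and keeps the highest-scoring fraction. A minimal sketch of such a pipeline (the kind the framework would assess), assuming a generic sentence-embedding model; the model name, example texts, and selection budget are illustrative, not from the paper:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

target_docs = ["a high-quality reference document", "another target-domain exemplar"]
candidates = ["candidate document 1", "candidate document 2", "candidate document 3"]

# Normalized embeddings so dot products are cosine similarities.
target = model.encode(target_docs, normalize_embeddings=True).mean(axis=0)
target /= np.linalg.norm(target)  # renormalize the centroid
cand = model.encode(candidates, normalize_embeddings=True)

scores = cand @ target
keep_frac = 0.5  # illustrative selection budget
cutoff = np.quantile(scores, 1 - keep_frac)
selected = [doc for doc, s in zip(candidates, scores) if s >= cutoff]
print(selected)
```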