We present a theoretical framework for mask-based pretraining grounded in high-dimensional statistics, and introduce R²MAE, a pretraining scheme that improves self-supervised learning across diverse data domains.