Associate Professor, Tsinghua University
2 papers at NeurIPS 2025
We propose MK-CAViT, a multi-kernel Vision Transformer with HGR-based correlation attention that achieves efficient multi-scale feature learning.
We propose a theoretical framework based on asymptotic analysis for determining optimal sample transfer quantities in multi-source transfer learning, yielding an efficient algorithm (OTQMS) that improves both accuracy and data efficiency.