We propose a confidence-separation self-distillation strategy for training spiking neural networks (SNNs) with rate coding, achieving strong performance with minimal additional training overhead.
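The rate coding referred to above can be sketched as follows; this is a minimal illustration using the common Poisson (Bernoulli-per-step) encoder, and the function name `rate_encode` and the exact encoding scheme are assumptions, not the paper's definitive implementation.

```python
import numpy as np

def rate_encode(x, num_steps, rng=None):
    """Poisson rate coding: each intensity in [0, 1] becomes the
    per-time-step firing probability over `num_steps` steps, so the
    expected spike count is proportional to the input value."""
    rng = np.random.default_rng(rng)
    x = np.clip(x, 0.0, 1.0)
    # Independent Bernoulli draw at every time step (1.0 = spike)
    return (rng.random((num_steps,) + x.shape) < x).astype(np.float32)

# Example: encode a two-pixel "image" over 100 time steps
spikes = rate_encode(np.array([0.1, 0.9]), num_steps=100, rng=0)
```

A brighter pixel fires more often, so averaging each spike train over time approximately recovers the original intensity.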