Researcher, Sun Yat-sen University
1 paper at NeurIPS 2025
We introduce HBLLM, a wavelet-enhanced high-fidelity $1$-bit post-training quantization method for Large Language Models (LLMs).
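The abstract does not spell out HBLLM's algorithm, but the general idea of 1-bit weight quantization it builds on can be sketched. The snippet below shows the standard binarization baseline (not HBLLM itself): each weight row is replaced by its sign pattern times a per-row scale, where the scale `alpha = mean(|W|)` is the closed-form minimizer of the Frobenius reconstruction error.

```python
import numpy as np

def binarize_rowwise(W: np.ndarray) -> np.ndarray:
    """Baseline 1-bit quantization: approximate W by alpha * sign(W),
    with per-row alpha = mean(|W|), which minimizes
    ||W - alpha * sign(W)||_F^2 for fixed signs."""
    signs = np.where(W >= 0, 1.0, -1.0)          # 1-bit sign pattern
    alpha = np.abs(W).mean(axis=1, keepdims=True)  # per-row scale
    return alpha * signs

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # toy weight matrix
W_q = binarize_rowwise(W)
rel_err = np.linalg.norm(W - W_q) / np.linalg.norm(W)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Methods like HBLLM aim to close the accuracy gap that this crude baseline leaves; the wavelet enhancement mentioned in the abstract is presumably one mechanism for preserving high-fidelity structure, though its specifics are not given here.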