2 papers across 1 session
We propose CryptoMoE, the first framework to enable private, accurate, and efficient inference for Mixture-of-Experts (MoE)-based LLMs.
We propose DictPFL, a framework for efficient and private federated learning (FL) that encrypts only the gradients that must be shared and keeps the remaining gradients local, while preserving the utility of full global gradient aggregation (a minimal illustrative sketch follows).
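The sketch below illustrates the general selective-encryption idea referenced in the summary: each client encrypts only a small "shared" slice of its gradient, the server aggregates ciphertexts homomorphically, and only the key holder decrypts the average. This is not DictPFL's actual protocol; the shared/local split and the use of Paillier encryption via the `phe` (python-paillier) package are illustrative assumptions.

```python
# A minimal sketch of selective gradient encryption in federated learning.
# Assumptions (not from the paper): Paillier HE via the `phe` package, and
# a fixed split of each client's gradient into a shared slice (encrypted)
# and a local remainder (applied on-device, never transmitted).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def client_update(shared_grads):
    # Encrypt only the gradients that must leave the device;
    # the rest of the gradient stays local.
    return [public_key.encrypt(g) for g in shared_grads]

def server_aggregate(encrypted_updates):
    # Additive homomorphism: the server sums ciphertexts without
    # seeing any individual client's gradients.
    summed = [sum(col) for col in zip(*encrypted_updates)]
    return summed, len(encrypted_updates)

# Two clients, each sharing a 3-dimensional slice of their gradient.
updates = [client_update([0.1, -0.2, 0.05]),
           client_update([0.3, 0.1, -0.15])]
summed, n = server_aggregate(updates)

# Only the key holder can recover the aggregate average.
avg = [private_key.decrypt(c) / n for c in summed]
print(avg)  # -> [0.2, -0.05, -0.05]
```

Because only the shared slice is encrypted and transmitted, communication and HE cost scale with that slice rather than with the full model, which is the efficiency intuition the summary points to.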