6 papers across 3 sessions
FedMGP introduces a multi-group text-visual prompt paradigm for federated learning that effectively balances personalization and generalization, achieving state-of-the-art performance with minimal communication parameters.
We find that "long-length" jailbreak attacks can be effectively defended against via efficient "short-length" LLM adversarial training, supported by both theoretical and empirical evidence.
We propose CoAPT, a collaborative adversarial prompt tuning method that significantly improves the robustness and generalization of vision-language models under adversarial perturbations.