


- Tiansheng Huang, Gautam Bhattacharya, Pratik Joshi, Josh Kimball, Ling Liu: Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning. CoRR abs/2408.09600 (2024)
