"LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples."
Jia-Yu Yao et al. (2023)
- Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Munan Ning, Li Yuan: LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples. CoRR abs/2310.01469 (2023)