16th AISec@CCS 2023: Copenhagen, Denmark
- Maura Pintor, Xinyun Chen, Florian Tramèr:
  Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, Copenhagen, Denmark, 30 November 2023. ACM 2023
- Amol Khanna, Fred Lu, Edward Raff, Brian Testa:
  Differentially Private Logistic Regression with Sparse Solutions. 1-9
- Florian A. Hölzl, Daniel Rueckert, Georgios Kaissis:
  Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models. 11-22
- Tyler LeBlond, Joseph Munoz, Fred Lu, Maya Fuchs, Elliott Zaresky-Williams, Edward Raff, Brian Testa:
  Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile. 23-33
- Tian Hui, Farhad Farokhi, Olga Ohrimenko:
  Information Leakage from Data Updates in Machine Learning Models. 35-41
- Tomás Chobola, Dmitrii Usynin, Georgios Kaissis:
  Membership Inference Attacks Against Semantic Segmentation Models. 43-53
- Reza Nasirigerdeh, Daniel Rueckert, Georgios Kaissis:
  Utility-preserving Federated Learning. 55-65
- Tobias Lorenz, Marta Kwiatkowska, Mario Fritz:
  Certifiers Make Neural Networks Vulnerable to Availability Attacks. 67-78
- Sahar Abdelnabi, Kai Greshake, Shailesh Mishra, Christoph Endres, Thorsten Holz, Mario Fritz:
  Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection. 79-90
- Chris Hicks, Vasilios Mavroudis, Myles Foley, Thomas Davies, Kate Highnam, Tim Watson:
  Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning. 91-101
- Dudi Biton, Aditi Misra, Efrat Levy, Jaidip Kotak, Ron Bitton, Roei Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi:
  The Adversarial Implications of Variable-Time Inference. 103-114
- Rajesh Kumar, Can Isik, Chilukuri Krishna Mohan:
  Dictionary Attack on IMU-based Gait Authentication. 115-126
- Benoît Coqueret, Mathieu Carbone, Olivier Sentieys, Gabriel Zaid:
  When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence. 127-138
- Md Asifur Rahman, Sarra M. Alqahtani:
  Task-Agnostic Safety for Reinforcement Learning. 139-148
- Erik Imgrund, Tom Ganz, Martin Härterich, Lukas Pirch, Niklas Risse, Konrad Rieck:
  Broken Promises: Measuring Confounding Effects in Learning-based Vulnerability Discovery. 149-160
- Luke E. Richards, Edward Raff, Cynthia Matuszek:
  Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition. 161-171
- Daniel Gibert, Giulio Zizzo, Quan Le:
  Certified Robustness of Static Deep Learning-based Malware Detectors against Patch and Append Attacks. 173-184
- Robert J. Joyce, Tirth Patel, Charles Nicholas, Edward Raff:
  AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale Malware Corpora. 185-196
- Theo Chow, Zeliang Kan, Lorenz Linhardt, Lorenzo Cavallaro, Daniel Arp, Fabio Pierazzi:
  Drift Forensics of Malware Classifiers. 197-207
- Mario D'Onghia, Federico Di Cesare, Luigi Gallo, Michele Carminati, Mario Polino, Stefano Zanero:
  Lookin' Out My Backdoor! Investigating Backdooring Attacks Against DL-driven Malware Detectors. 209-220
- Elizabeth Bates, Vasilios Mavroudis, Chris Hicks:
  Reward Shaping for Happier Autonomous Cyber Security Agents. 221-232
- Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio:
  Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors. 233-244