


SaTML 2024: Toronto, ON, Canada
IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2024, Toronto, ON, Canada, April 9-11, 2024. IEEE 2024, ISBN 979-8-3503-4950-4

- Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala: Probabilistic Dataset Reconstruction from Interpretable Models. 1-17
- Zhangheng Li, Junyuan Hong, Bo Li, Zhangyang Wang: Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk. 18-32
- Shuai Tang, Sergül Aydöre, Michael Kearns, Saeyoung Rho, Aaron Roth, Yichen Wang, Yu-Xiang Wang, Zhiwei Steven Wu: Improved Differentially Private Regression via Gradient Boosting. 33-56
- Amol Khanna, Edward Raff, Nathan Inkawhich: SoK: A Review of Differentially Private Linear Models For High-Dimensional Data. 57-77
- Achraf Azize, Debabrota Basu: Concentrated Differential Privacy for Bandits. 78-109
- Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal: PILLAR: How to make semi-private learning more effective. 110-139
- Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith: Fair Federated Learning via Bounded Group Loss. 140-160
- Hadi Elzayn, Emily Black, Patrick Vossler, Nathanael Jo, Jacob Goldin, Daniel E. Ho: Estimating and Implementing Conventional Fairness Metrics With Probabilistic Protected Features. 161-193
- Lukas Fluri, Daniel Paleka, Florian Tramèr: Evaluating Superhuman Models with Consistency Checks. 194-232
- Chenxi Yang, Greg Anderson, Swarat Chaudhuri: Certifiably Robust Reinforcement Learning through Model-Based Abstract Interpretation. 233-251
- Ashutosh Nirala, Ameya Joshi, Soumik Sarkar, Chinmay Hegde: Fast Certification of Vision-Language Models Using Incremental Randomized Smoothing. 252-271
- Ruinan Jin, Chun-Yin Huang, Chenyu You, Xiaoxiao Li: Backdoor Attack on Unpaired Medical Image-Text Foundation Models: A Pilot Study on MedCLIP. 272-285
- Quentin Le Roux, Kassem Kallas, Teddy Furon: REStore: Exploring a Black-Box Defense against DNN Backdoors using Rare Event Simulation. 286-308
- Hiroya Kato, Kento Hasegawa, Seira Hidano, Kazuhide Fukushima: EdgePruner: Poisoned Edge Pruning in Graph Contrastive Learning. 309-326
- Yiwei Lu, Matthew Y. R. Yang, Gautam Kamath, Yaoliang Yu: Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors. 327-343
- Eleanor Clifford, Ilia Shumailov, Yiren Zhao, Ross J. Anderson, Robert D. Mullins: ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks. 344-357
- Hadi M. Dolatabadi, Sarah M. Erfani, Christopher Leckie: The Devil's Advocate: Shattering the Illusion of Unexploitable Data using Diffusion Models. 358-386
- Fnu Suya, Anshuman Suri, Tingwei Zhang, Jingtao Hong, Yuan Tian, David Evans: SoK: Pitfalls in Evaluating Black-Box Attacks. 387-407
- Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr: Evading Black-box Classifiers Without Breaking Eggs. 408-424
- Francesco Croce, Matthias Hein: Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on Segmentation Models. 425-442
- Chulin Xie, Pin-Yu Chen, Qinbin Li, Arash Nourian, Ce Zhang, Bo Li: Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM. 443-471
- Tatsuki Koga, Kamalika Chaudhuri, David Page: Differentially Private Multi-Site Treatment Effect Estimation. 472-489
- Yaniv Ben-Itzhak, Helen Möllering, Benny Pinkas, Thomas Schneider, Ajith Suresh, Oleksandr Tkachenko, Shay Vargaftik, Christian Weinert, Hossein Yalame, Avishay Yanai: ScionFL: Efficient and Robust Secure Quantized Aggregation. 490-511
- Karan N. Chadha, Junye Chen, John C. Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar: Differentially Private Heavy Hitter Detection using Federated Analytics. 512-533
- Ivoline C. Ngong, Nicholas Gibson, Joseph P. Near: Olympia: A Simulation Framework for Evaluating the Concrete Scalability of Secure Aggregation Protocols. 534-551
- Andrew Geng, Pin-Yu Chen: Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders. 552-568
- Zhifeng Kong, Kamalika Chaudhuri: Data Redaction from Conditional Generative Models. 569-591
- Wenxin Ding, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng: Towards Scalable and Robust Model Versioning. 592-611
- Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji: AI auditing: The Broken Bus on the Road to AI Accountability. 612-643
- Augustin Godinot, Erwan Le Merrer, Gilles Trédan, Camilla Penzo, François Taïani: Under manipulations, are some AI models harder to audit? 644-664
- Theodora Worledge, Judy Hanwen Shen, Nicole Meister, Caleb Winston, Carlos Guestrin: Unifying Corroborative and Contributive Attributions in Large Language Models. 665-683
- Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schönherr, Mario Fritz: CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models. 684-709
- Nishtha Madaan, Srikanta Bedathur: Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion. 710-722
- Kamala Varma, Arda Numanoglu, Yigitcan Kaya, Tudor Dumitras: Understanding, Uncovering, and Mitigating the Causes of Inference Slowdown for Language Models. 723-740















