SaTML 2023: Raleigh, NC, USA
2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023, Raleigh, NC, USA, February 8-10, 2023. IEEE 2023, ISBN 978-1-6654-6300-3

- Stefano Calzavara, Lorenzo Cazzaro, Claudio Lucchese, Federico Marcuzzi: Explainable Global Fairness Verification of Tree-Based Classifiers. 1-17
- Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala: Exploiting Fairness to Enhance Sensitive Attributes Reconstruction. 18-41
- Krishna Acharya, Eshwar Ram Arunachaleswaran, Sampath Kannan, Aaron Roth, Juba Ziani: Wealth Dynamics Over Generations: Analysis and Interventions. 42-57
- Patrik Joslin Kenfack, Adín Ramírez Rivera, Adil Mehmood Khan, Manuel Mazzara: Learning Fair Representations through Uniformly Distributed Sensitive Attributes. 58-67
- Guy Heller, Ethan Fetaya: Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning? 68-106
- Reza Nasirigerdeh, Javad Torkzadehmahani, Daniel Rueckert, Georgios Kaissis: Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning. 107-118
- Sayanton V. Dibbo, Dae Lim Chung, Shagufta Mehnaz: Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability. 119-135
- Valentin Hartmann, Léo Meynent, Maxime Peyrard, Dimitrios Dimitriadis, Shruti Tople, Robert West: Distribution Inference Risks: Identifying and Mitigating Sources of Leakage. 136-149
- Anshuman Suri, Yifu Lu, Yanjin Chen, David Evans: Dissecting Distribution Inference. 150-164
- Sanjay Kariyappa, Moinuddin K. Qureshi: ExPLoit: Extracting Private Labels in Split Learning. 165-175
- Harsh Chaudhari, Matthew Jagielski, Alina Oprea: SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning. 176-196
- Huzaifa Arif, Alex Gittens, Pin-Yu Chen: Reprogrammable-FL: Improving Utility-Privacy Tradeoff in Federated Learning via Model Reprogramming. 197-209
- Rachel Cummings, Hadi Elzayn, Emmanouil Pountourakis, Vasilis Gkatzelis, Juba Ziani: Optimal Data Acquisition with Privacy-Aware Agents. 210-224
- Edoardo Debenedetti, Vikash Sehwag, Prateek Mittal: A Light Recipe to Train Robust Vision Transformers. 225-253
- Washington Garcia, Pin-Yu Chen, Hamilton Scott Clouse, Somesh Jha, Kevin R. B. Butler: Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks. 254-270
- Sanghyun Hong, Nicholas Carlini, Alexey Kurakin: Publishing Efficient On-device Models Increases Adversarial Vulnerability. 271-290
- Xiaojun Xu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li: EDoG: Adversarial Edge Detection For Graph Neural Networks. 291-305
- Nishtha Madaan, Diptikalyan Saha, Srikanta Bedathur: Counterfactual Sentence Generation with Plug-and-Play Perturbation. 306-315
- Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang: Rethinking the Entropy of Instance in Adversarial Training. 316-326
- Fangcheng Liu, Chao Zhang, Hongyang Zhang: Towards Transferable Unrestricted Adversarial Examples with Minimum Changes. 327-338
- Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, David Freeman, Fabio Pierazzi, Kevin A. Roundy: "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. 339-364
- Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed H. Chi, Alex Beutel: What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel. 365-376
- Gorka Abad, Servio Paguada, Oguzhan Ersoy, Stjepan Picek, Víctor Julio Ramírez-Durán, Aitor Urbieta: Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning. 377-391
- Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey: Backdoor Attacks on Time Series: A Generative Approach. 392-403
- Hojjat Aghakhani, Lea Schönherr, Thorsten Eisenhofer, Dorothea Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna: Venomave: Targeted Poisoning Against Speech Recognition. 404-417
- Patrick Altmeyer, Giovan Angela, Aleksander Buszydlik, Karol Dobiczek, Arie van Deursen, Cynthia C. S. Liem: Endogenous Macrodynamics in Algorithmic Recourse. 418-431
- Yingyan Zeng, Jiachen T. Wang, Si Chen, Hoang Anh Just, Ran Jin, Ruoxi Jia: ModelPred: A Framework for Predicting Trained Model from Training Data. 432-449
- Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rüden: Harnessing Prior Knowledge for Explainable Machine Learning: An Overview. 450-463
- Tilman Räuker, Anson Ho, Stephen Casper, Dylan Hadfield-Menell: Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks. 464-483
- Zayd Hammoudeh, Daniel Lowd: Reducing Certified Regression to Certified Classification for General Poisoning Attacks. 484-523
- Florian Jaeckle, M. Pawan Kumar: Neural Lower Bounds for Verification. 524-536
- Haoze Wu, Teruhiro Tagomori, Alexander Robey, Fengjun Yang, Nikolai Matni, George J. Pappas, Hamed Hassani, Corina S. Pasareanu, Clark W. Barrett: Toward Certified Robustness Against Real-World Distribution Shifts. 537-553
- Jiawei Zhang, Linyi Li, Ce Zhang, Bo Li: CARE: Certifiably Robust Learning with Reasoning via Variational Inference. 554-574
- Mintong Kang, Linyi Li, Bo Li: FaShapley: Fast and Approximated Shapley Based Model Pruning Towards Certifiably Robust DNNs. 575-592
- Toluwani Aremu, Karthik Nandakumar: PolyKervNets: Activation-free Neural Networks For Efficient Private Inference. 593-604
- Ari Karchmer: Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses. 605-621
- Korbinian Koch, Marcus Soll: No Matter How You Slice It: Machine Unlearning with SISA Comes at the Expense of Minority Classes. 622-637
- Zhifeng Kong, Kamalika Chaudhuri: Data Redaction from Pre-trained GANs. 638-677
- Teresa Datta, Daniel Nissani, Max Cembalest, Akash Khanna, Haley Massa, John Dickerson: Tensions Between the Proxies of Human Values in AI. 678-689
- Amanda Coston, Anna Kawakami, Haiyi Zhu, Ken Holstein, Hoda Heidari: A Validity Perspective on Evaluating the Justified Use of Data-driven Decision-making Algorithms. 690-704
