Varun Chandrasekaran
2020 – today
- 2024
- [c20] Qilong Wu, Varun Chandrasekaran: Bypassing LLM Watermarks with Color-Aware Substitutions. ACL (1) 2024: 8549-8581
- [c19] Rishabh Adiga, Lakshmi Subramanian, Varun Chandrasekaran: Designing Informative Metrics for Few-Shot Example Selection. ACL (Findings) 2024: 10127-10135
- [c18] Marah I Abdin, Suriya Gunasekar, Varun Chandrasekaran, Jerry Li, Mert Yüksekgönül, Rahee Ghosh Peshawaria, Ranjita Naik, Besmira Nushi: KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval. ICLR 2024
- [c17] Erik Jones, Hamid Palangi, Clarisse Simões, Varun Chandrasekaran, Subhabrata Mukherjee, Arindam Mitra, Ahmed Hassan Awadallah, Ece Kamar: Teaching Language Models to Hallucinate Less with Synthetic Tasks. ICLR 2024
- [c16] Fan Wu, Huseyin A. Inan, Arturs Backurs, Varun Chandrasekaran, Janardhan Kulkarni, Robert Sim: Privately Aligning Language Models with Reinforcement Learning. ICLR 2024
- [c15] Mert Yüksekgönül, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi: Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models. ICLR 2024
- [i29] Rishabh Adiga, Lakshminarayanan Subramanian, Varun Chandrasekaran: Designing Informative Metrics for Few-Shot Example Selection. CoRR abs/2403.03861 (2024)
- [i28] Qilong Wu, Varun Chandrasekaran: Bypassing LLM Watermarks with Color-Aware Substitutions. CoRR abs/2403.14719 (2024)
- [i27] Fan Wu, Emily Black, Varun Chandrasekaran: Generative Monoculture in Large Language Models. CoRR abs/2407.02209 (2024)
- 2023
- [c14] Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot: Proof-of-Learning is Currently More Broken Than You Think. EuroS&P 2023: 797-816
- [i26] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, Yi Zhang: Sparks of Artificial General Intelligence: Early experiments with GPT-4. CoRR abs/2303.12712 (2023)
- [i25] Mert Yüksekgönül, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi: Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models. CoRR abs/2309.15098 (2023)
- [i24] Erik Jones, Hamid Palangi, Clarisse Simões, Varun Chandrasekaran, Subhabrata Mukherjee, Arindam Mitra, Ahmed Awadallah, Ece Kamar: Teaching Language Models to Hallucinate Less with Synthetic Tasks. CoRR abs/2310.06827 (2023)
- [i23] Ranjita Naik, Varun Chandrasekaran, Mert Yüksekgönül, Hamid Palangi, Besmira Nushi: Diversity of Thought Improves Reasoning Abilities of Large Language Models. CoRR abs/2310.07088 (2023)
- [i22] Jihye Choi, Shruti Tople, Varun Chandrasekaran, Somesh Jha: Why Train More? Effective and Efficient Membership Inference via Memorization. CoRR abs/2310.08015 (2023)
- [i21] Marah I Abdin, Suriya Gunasekar, Varun Chandrasekaran, Jerry Li, Mert Yüksekgönül, Rahee Ghosh Peshawaria, Ranjita Naik, Besmira Nushi: KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval. CoRR abs/2310.15511 (2023)
- [i20] Fan Wu, Huseyin A. Inan, Arturs Backurs, Varun Chandrasekaran, Janardhan Kulkarni, Robert Sim: Privately Aligning Language Models with Reinforcement Learning. CoRR abs/2310.16960 (2023)
- 2022
- [c13] Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot: Unrolling SGD: Understanding Factors Influencing Machine Unlearning. EuroS&P 2022: 303-319
- [c12] Brian Tang, Dakota Sullivan, Bengisu Cagiltay, Varun Chandrasekaran, Kassem Fawaz, Bilge Mutlu: CONFIDANT: A Privacy Controller for Social Robots. HRI 2022: 205-214
- [i19] Brian Tang, Dakota Sullivan, Bengisu Cagiltay, Varun Chandrasekaran, Kassem Fawaz, Bilge Mutlu: CONFIDANT: A Privacy Controller for Social Robots. CoRR abs/2201.02712 (2022)
- [i18] Varun Chandrasekaran, Suman Banerjee, Diego Perino, Nicolas Kourtellis: Hierarchical Federated Learning with Privacy. CoRR abs/2206.05209 (2022)
- [i17] Tejumade Afonja, Lucas Bourtoule, Varun Chandrasekaran, Sageev Oore, Nicolas Papernot: Generative Extraction of Audio Classifiers for Speaker Identification. CoRR abs/2207.12816 (2022)
- [i16] Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot: On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning. CoRR abs/2208.03567 (2022)
- [i15] Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, Nicolas Papernot: Verifiable and Provably Secure Machine Unlearning. CoRR abs/2210.09126 (2022)
- 2021
- [j1] Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, Somesh Jha, Suman Banerjee: Face-Off: Adversarial Face Obfuscation. Proc. Priv. Enhancing Technol. 2021(2): 369-390 (2021)
- [c11] Hui Xu, Guanpeng Li, Homa Alemzadeh, Rakesh Bobba, Varun Chandrasekaran, David E. Evans, Nicolas Papernot, Karthik Pattabiraman, Florian Tramèr: Fourth International Workshop on Dependable and Secure Machine Learning - DSML 2021. DSN Workshops 2021: xvi
- [c10] Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, Suman Banerjee: A General Framework For Detecting Anomalous Inputs to DNN Classifiers. ICML 2021: 8764-8775
- [c9] Varun Chandrasekaran, Suman Banerjee, Bilge Mutlu, Kassem Fawaz: PowerCut and Obfuscator: An Exploration of the Design Space for Privacy-Preserving Interventions for Smart Speakers. SOUPS @ USENIX Security Symposium 2021: 535-552
- [c8] Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot: Machine Unlearning. SP 2021: 141-159
- [c7] Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot: Proof-of-Learning: Definitions and Practice. SP 2021: 1039-1056
- [c6] Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot: Entangled Watermarks as a Defense against Model Extraction. USENIX Security Symposium 2021: 1937-1954
- [i14] Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot: Proof-of-Learning: Definitions and Practice. CoRR abs/2103.05633 (2021)
- [i13] Varun Chandrasekaran, Darren Edge, Somesh Jha, Amit Sharma, Cheng Zhang, Shruti Tople: Causally Constrained Data Synthesis for Private Data Release. CoRR abs/2105.13144 (2021)
- [i12] Adelin Travers, Lorna Licollari, Guanghan Wang, Varun Chandrasekaran, Adam Dziedzic, David Lie, Nicolas Papernot: On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples. CoRR abs/2108.02010 (2021)
- [i11] Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot: SoK: Machine Learning Governance. CoRR abs/2109.10870 (2021)
- [i10] Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot: Unrolling SGD: Understanding Factors Influencing Machine Unlearning. CoRR abs/2109.13398 (2021)
- 2020
- [c5] Homa Alemzadeh, Rakesh Bobba, Varun Chandrasekaran, David E. Evans, Nicolas Papernot, Karthik Pattabiraman, Florian Tramèr: Third International Workshop on Dependable and Secure Machine Learning - DSML 2020. DSN Workshops 2020: x
- [c4] Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, Songbai Yan: Exploring Connections Between Active Learning and Model Extraction. USENIX Security Symposium 2020: 1309-1326
- [i9] Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot: On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping. CoRR abs/2002.11497 (2020)
- [i8] Chuhan Gao, Varun Chandrasekaran, Kassem Fawaz, Somesh Jha: Face-Off: Adversarial Face Obfuscation. CoRR abs/2003.08861 (2020)
- [i7] Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, Suman Banerjee: Detecting Anomalous Inputs to DNN Classifiers By Joint Statistical Testing at the Layers. CoRR abs/2007.15147 (2020)
2010 – 2019
- 2019
- [c3] Yijing Zeng, Varun Chandrasekaran, Suman Banerjee, Domenico Giustiniano: A Framework for Analyzing Spectrum Characteristics in Large Spatio-temporal Scales. MobiCom 2019: 49:1-49:16
- [i6] Varun Chandrasekaran, Brian Tang, Varsha Pendyala, Kassem Fawaz, Somesh Jha, Xi Wu: Enhancing ML Robustness Using Physical-World Constraints. CoRR abs/1905.10900 (2019)
- [i5] Lakshya Jain, Wilson Wu, Steven Chen, Uyeong Jang, Varun Chandrasekaran, Sanjit A. Seshia, Somesh Jha: Generating Semantic Adversarial Examples with Differentiable Rendering. CoRR abs/1910.00727 (2019)
- [i4] Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot: Machine Unlearning. CoRR abs/1912.03817 (2019)
- 2018
- [c2] Chuhan Gao, Varun Chandrasekaran, Kassem Fawaz, Suman Banerjee: Traversing the Quagmire that is Privacy in your Smart Home. IoT S&P@SIGCOMM 2018: 22-28
- [i3] Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, Songbai Yan: Model Extraction and Active Learning. CoRR abs/1811.02054 (2018)
- [i2] Varun Chandrasekaran, Kassem Fawaz, Bilge Mutlu, Suman Banerjee: Characterizing Privacy Perceptions of Voice Assistants: A Technology Probe Study. CoRR abs/1812.00263 (2018)
- 2016
- [c1] Ashlesh Sharma, Varun Chandrasekaran, Fareeha Amjad, Dennis E. Shasha, Lakshminarayanan Subramanian: Alphacodes: Usable, Secure Transactions with Untrusted Providers using Human Computable Puzzles. ACM DEV 2016: 5:1-5:10
- [i1] Varun Chandrasekaran, Fareeha Amjad, Ashlesh Sharma, Lakshminarayanan Subramanian: Secure Mobile Identities. CoRR abs/1604.04667 (2016)
last updated on 2024-09-26 00:59 CEST by the dblp team
all metadata released as open data under CC0 1.0 license