Neil Zhenqiang Gong
Person information
- affiliation: Duke University, Durham, NC, USA
Journal Articles
- 2024
- [j16]Wei Sun, Tingjun Chen, Neil Gong: SoK: Secure Human-centered Wireless Sensing. Proc. Priv. Enhancing Technol. 2024(2): 313-329 (2024)
- [j15]Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Link Stealing Attacks Against Inductive Graph Neural Networks. Proc. Priv. Enhancing Technol. 2024(4): 818-839 (2024)
- 2023
- [j14]Chengbin Pang, Hongbin Liu, Yifan Wang, Neil Zhenqiang Gong, Bing Mao, Jun Xu: Generation-based fuzzing? Don't build a new generator, reuse! Comput. Secur. 129: 103178 (2023)
- 2022
- [j13]Jia Lu, Ryan Tsoi, Nan Luo, Yuanchi Ha, Shangying Wang, Minjun Kwak, Yasa Baig, Nicole Moiseyev, Shari Tian, Alison Zhang, Neil Zhenqiang Gong, Lingchong You: Distributed information encoding and decoding using self-organized spatial patterns. Patterns 3(10): 100590 (2022)
- [j12]Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong: FLCert: Provably Secure Federated Learning Against Poisoning Attacks. IEEE Trans. Inf. Forensics Secur. 17: 3691-3705 (2022)
- 2021
- [j11]Chris Chao-Chun Cheng, Chen Shi, Neil Zhenqiang Gong, Yong Guan: LogExtractor: Extracting digital evidence from android log messages via string and taint analysis. Digit. Investig. 37 Supplement: 301193 (2021)
- 2019
- [j10]Binghui Wang, Jinyuan Jia, Le Zhang, Neil Zhenqiang Gong: Structure-Based Sybil Detection in Social Networks via Local Rule-Based Propagation. IEEE Trans. Netw. Sci. Eng. 6(3): 523-537 (2019)
- 2018
- [j9]Neil Zhenqiang Gong, Bin Liu: Attribute Inference Attacks in Online Social Networks. ACM Trans. Priv. Secur. 21(1): 3:1-3:30 (2018)
- 2017
- [j8]Hao Fu, Xing Xie, Yong Rui, Neil Zhenqiang Gong, Guangzhong Sun, Enhong Chen: Robust Spammer Detection in Microblogs: Leveraging User Carefulness. ACM Trans. Intell. Syst. Technol. 8(6): 83:1-83:31 (2017)
- 2016
- [j7]Shouling Ji, Weiqing Li, Neil Zhenqiang Gong, Prateek Mittal, Raheem A. Beyah: Seed-Based De-Anonymizability Quantification of Social Networks. IEEE Trans. Inf. Forensics Secur. 11(7): 1398-1411 (2016)
- [j6]Bin Liu, Yao Wu, Neil Zhenqiang Gong, Junjie Wu, Hui Xiong, Martin Ester: Structural Analysis of User Choices for Mobile App Recommendation. ACM Trans. Knowl. Discov. Data 11(2): 17:1-17:23 (2016)
- 2015
- [j5]Mathias Payer, Ling Huang, Neil Zhenqiang Gong, Kevin Borgolte, Mario Frank: What You Submit Is Who You Are: A Multimodal Approach for Deanonymizing Scientific Publications. IEEE Trans. Inf. Forensics Secur. 10(1): 200-212 (2015)
- 2014
- [j4]Neil Zhenqiang Gong, Wenchang Xu: Reciprocal versus parasocial relationships in online social networks. Soc. Netw. Anal. Min. 4(1): 184 (2014)
- [j3]Neil Zhenqiang Gong, Mario Frank, Prateek Mittal: SybilBelief: A Semi-Supervised Learning Approach for Structure-Based Sybil Detection. IEEE Trans. Inf. Forensics Secur. 9(6): 976-987 (2014)
- [j2]Neil Zhenqiang Gong, Di Wang: On the Security of Trustee-Based Social Authentications. IEEE Trans. Inf. Forensics Secur. 9(8): 1251-1263 (2014)
- [j1]Neil Zhenqiang Gong, Ameet Talwalkar, Lester W. Mackey, Ling Huang, Eui Chul Richard Shin, Emil Stefanov, Elaine Shi, Dawn Song: Joint Link Prediction and Attribute Inference Using a Social-Attribute Network. ACM Trans. Intell. Syst. Technol. 5(2): 27:1-27:20 (2014)
Conference and Workshop Papers
- 2024
- [c98]Yueqi Xie, Minghong Fang, Renjie Pi, Neil Gong: GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis. ACL (1) 2024: 507-518
- [c97]Wen Huang, Hongbin Liu, Minxin Guo, Neil Gong: Visual Hallucinations of Multi-modal Large Language Models. ACL (Findings) 2024: 9614-9631
- [c96]Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: Data Poisoning Based Backdoor Attacks to Contrastive Learning. CVPR 2024: 24357-24366
- [c95]Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen: Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents. ECCV (78) 2024: 18-33
- [c94]Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun: MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use. ICLR 2024
- [c93]Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie: DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks. ICLR 2024
- [c92]Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao: Position: TrustLLM: Trustworthiness in Large Language Models. ICML 2024
- [c91]Yueqi Xie, Minghong Fang, Neil Zhenqiang Gong: FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Reconstruction Error. ICML 2024
- [c90]Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. SP (Workshops) 2024: 144-156
- [c89]Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao: SneakyPrompt: Jailbreaking Text-to-image Generative Models. SP 2024: 897-912
- [c88]Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong: Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models. USENIX Security Symposium 2024
- [c87]Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong: Formalizing and Benchmarking Prompt Injection Attacks and Defenses. USENIX Security Symposium 2024
- [c86]Minxue Tang, Anna Dai, Louis DiValentin, Aolin Ding, Amin Hass, Neil Zhenqiang Gong, Yiran Chen, Hai (Helen) Li: ModelGuard: Information-Theoretic Defense Against Model Extraction Attacks. USENIX Security Symposium 2024
- [c85]Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong: Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks. WWW (Companion Volume) 2024: 798-801
- [c84]Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong: Poisoning Federated Recommender Systems with Fake Users. WWW 2024: 3555-3565
- 2023
- [c83]Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong: Evading Watermark based Detection of AI-Generated Content. CCS 2023: 1168-1181
- [c82]Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. CVPR 2023: 9496-9505
- [c81]Yuchen Yang, Haolin Yuan, Bo Hui, Neil Zhenqiang Gong, Neil Fendley, Philippe Burlina, Yinzhi Cao: Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation. DSN 2023: 288-301
- [c80]Zhengyuan Jiang, Minghong Fang, Neil Zhenqiang Gong: IPCert: Provably Robust Intellectual Property Protection for Machine Learning. ICCV (Workshops) 2023: 3614-3623
- [c79]Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. NDSS 2023
- [c78]Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong: FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. SP 2023: 1366-1383
- [c77]Yuchen Yang, Bo Hui, Haolin Yuan, Neil Zhenqiang Gong, Yinzhi Cao: PrivateFL: Accurate, Differentially Private Federated Learning via Personalized Data Transformation. USENIX Security Symposium 2023: 1595-1612
- [c76]Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong: PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. USENIX Security Symposium 2023: 1703-1720
- [c75]Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, Hui Li: Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation. USENIX Security Symposium 2023: 1739-1756
- 2022
- [c74]Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks. AAAI 2022: 9575-9583
- [c73]Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley: AFLGuard: Byzantine-robust Asynchronous Federated Learning. ACSAC 2022: 632-646
- [c72]Binghui Wang, Tianchen Zhou, Song Li, Yinzhi Cao, Neil Zhenqiang Gong: GraphTrack: A Graph-based Cross-Device Tracking Framework. AsiaCCS 2022: 82-96
- [c71]Da Zhong, Haipei Sun, Jun Xu, Neil Zhenqiang Gong, Wendy Hui Wang: Understanding Disparate Effects of Membership Inference Attacks and their Countermeasures. AsiaCCS 2022: 959-974
- [c70]Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning. CCS 2022: 2115-2128
- [c69]Jiyu Chen, Yiwen Guo, Hao Chen, Neil Gong: Membership Inference Attack in Face of Data Transformations. CNS 2022: 299-307
- [c68]Xiaoyu Cao, Neil Zhenqiang Gong: MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients. CVPR Workshops 2022: 3395-3403
- [c67]Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen: HERO: hessian-enhanced robust optimization for unifying and improving generalization and quantization performance. DAC 2022: 25-30
- [c66]Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao: Addressing Heterogeneity in Federated Learning via Distributional Transformation. ECCV (38) 2022: 179-195
- [c65]Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang: Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning. ECCV (31) 2022: 365-381
- [c64]Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong: Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations. ICLR 2022
- [c63]Aritra Ray, Jinyuan Jia, Sohini Saha, Jayeeta Chaudhuri, Neil Zhenqiang Gong, Krishnendu Chakrabarty: Deep Neural Network Piration without Accuracy Loss. ICMLA 2022: 1032-1038
- [c62]Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. KDD 2022: 2545-2555
- [c61]Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. NeurIPS 2022
- [c60]Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong: BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. SP 2022: 2043-2059
- [c59]Yongji Wu, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. USENIX Security Symposium 2022: 519-536
- [c58]Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. USENIX Security Symposium 2022: 3629-3645
- 2021
- [c57]Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Provably Secure Federated Learning against Malicious Clients. AAAI 2021: 6885-6893
- [c56]Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks. AAAI 2021: 7961-7969
- [c55]Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks. AAAI 2021: 10093-10101
- [c54]Zijie Yang, Binghui Wang, Haoran Li, Dong Yuan, Zhuotao Liu, Neil Zhenqiang Gong, Chang Liu, Qi Li, Xiao Liang, Shaofeng Hu: On Detecting Growing-Up Behaviors of Malicious Accounts in Privacy-Centric Mobile Social Networks. ACSAC 2021: 297-310
- [c53]Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes. AsiaCCS 2021: 2-13
- [c52]Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary. AsiaCCS 2021: 14-25
- [c51]Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning. CCS 2021: 2081-2095
- [c50]Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PointGuard: Provably Robust 3D Point Cloud Classification. CVPR 2021: 6186-6195
- [c49]Xiaoyu Cao, Neil Zhenqiang Gong: Understanding the Security of Deepfake Detection. ICDF2C 2021: 360-378
- [c48]Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: On the Intrinsic Differential Privacy of Bagging. IJCAI 2021: 2730-2736
- [c47]Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation. KDD 2021: 1645-1653
- [c46]Xiao Liang, Zheng Yang, Binghui Wang, Shaofeng Hu, Zijie Yang, Dong Yuan, Neil Zhenqiang Gong, Qi Li, Fang He: Unveiling Fake Accounts at the Time of Registration: An Unsupervised Approach. KDD 2021: 3240-3250
- [c45]Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong: FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. NDSS 2021
- [c44]Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu: Data Poisoning Attacks to Deep Learning Based Recommender Systems. NDSS 2021
- [c43]Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao: Practical Blind Membership Inference Attack via Differential Comparisons. NDSS 2021
- [c42]Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Backdoor Attacks to Graph Neural Networks. SACMAT 2021: 15-26
- [c41]Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Data Poisoning Attacks to Local Differential Privacy Protocols. USENIX Security Symposium 2021: 947-964
- [c40]Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Stealing Links from Graph Neural Networks. USENIX Security Symposium 2021: 2669-2686
- [c39]Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu: Data Poisoning Attacks and Defenses to Crowdsourcing Systems. WWW 2021: 969-980
- [c38]Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang: Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation. WWW 2021: 1262-1273
- 2020
- [c37]Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong: Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing. ICLR 2020
- [c36]Luke Myers, Binghui Wang, Neil Zhenqiang Gong, Daji Qiao: State Estimation via Inference on a Probabilistic Graphical Model - A Different Perspective. ISGT 2020: 1-5
- [c35]Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. USENIX Security Symposium 2020: 1605-1622
- [c34]Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing. WWW 2020: 2718-2724
- [c33]Minghong Fang, Neil Zhenqiang Gong, Jia Liu: Influence Function based Data Poisoning Attacks to Top-N Recommender Systems. WWW 2020: 3019-3025
- 2019
- [c32]Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong: MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples. CCS 2019: 259-274
- [c31]Dong Yuan, Yuanli Miao, Neil Zhenqiang Gong, Zheng Yang, Qi Li, Dawn Song, Qian Wang, Xiao Liang: Detecting Fake Accounts in Online Social Networks at the Time of Registrations. CCS 2019: 1423-1438
- [c30]Binghui Wang, Neil Zhenqiang Gong: Attacking Graph-based Classification via Manipulating the Graph Structure. CCS 2019: 2023-2040
- [c29]Jinyuan Jia, Neil Zhenqiang Gong: Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge. INFOCOM 2019: 2008-2016
- [c28]Zenghua Xia, Chang Liu, Neil Zhenqiang Gong, Qi Li, Yong Cui, Dawn Song: Characterizing and Detecting Malicious Accounts in Privacy-Centric Mobile Social Networks: A Case Study. KDD 2019: 2012-2022
- [c27]Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation. NDSS 2019
- 2018
- [c26]Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu: Poisoning Attacks to Graph-Based Recommender Systems. ACSAC 2018: 381-392
- [c25]Chris Chao-Chun Cheng, Chen Shi, Neil Zhenqiang Gong, Yong Guan: EviHunter: Identifying Digital Evidence in the Permanent Storage of Android Devices via Static Analysis. CCS 2018: 1338-1350
- [c24]Peng Gao, Binghui Wang, Neil Zhenqiang Gong, Sanjeev R. Kulkarni, Kurt Thomas, Prateek Mittal: SYBILFUSE: Combining Local Attributes with Global Structure to Perform Robust Sybil Detection. CNS 2018: 1-9
- [c23]Binghui Wang, Le Zhang, Neil Zhenqiang Gong: SybilBlind: Detecting Fake Users in Online Social Networks Without Manual Labels. RAID 2018: 228-249
- [c22]Binghui Wang, Neil Zhenqiang Gong: Stealing Hyperparameters in Machine Learning. IEEE Symposium on Security and Privacy 2018: 36-52
- [c21]Zhen Xu, Chen Shi, Chris Chao-Chun Cheng, Neil Zhenqiang Gong, Yong Guan: A Dynamic Taint Analysis Tool for Android App Forensics. IEEE Symposium on Security and Privacy Workshops 2018: 160-169
- [c20]Jinyuan Jia, Neil Zhenqiang Gong: AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning. USENIX Security Symposium 2018: 513-529
- 2017
- [c19]Xiaoyu Cao, Neil Zhenqiang Gong: Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification. ACSAC 2017: 278-287
- [c18]Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Random Walk Based Fake Account Detection in Online Social Networks. DSN 2017: 273-284
- [c17]Neil Zhenqiang Gong, Altay Ozen, Yu Wu, Xiaoyu Cao, Richard Shin, Dawn Song, Hongxia Jin, Xuan Bao: PIANO: Proximity-Based User Authentication on Voice-Powered Internet-of-Things Devices. ICDCS 2017: 2212-2219
- [c16]Binghui Wang, Neil Zhenqiang Gong, Hao Fu: GANG: Detecting Fraudulent Users in Online Social Networks via Guilt-by-Association on Directed Graphs. ICDM 2017: 465-474
- [c15]Binghui Wang, Le Zhang, Neil Zhenqiang Gong: SybilSCAR: Sybil detection in online social networks via local rule based propagation. INFOCOM 2017: 1-9
- [c14]Guolei Yang, Neil Zhenqiang Gong, Ying Cai: Fake Co-visitation Injection Attacks to Recommender Systems. NDSS 2017
- [c13]Jinyuan Jia, Binghui Wang, Le Zhang, Neil Zhenqiang Gong: AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields. WWW 2017: 1561-1569
- 2016
- [c12]Neil Zhenqiang Gong, Mathias Payer, Reza Moazzezi, Mario Frank: Forgery-Resistant Touch-based Authentication on Mobile Devices. AsiaCCS 2016: 499-510
- [c11]Neil Zhenqiang Gong, Bin Liu: You Are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors. USENIX Security Symposium 2016: 979-995
- 2015
- [c10]Bing Hu, Bin Liu, Neil Zhenqiang Gong, Deguang Kong, Hongxia Jin: Protecting Your Children from Inappropriate Content in Mobile Apps: An Automatic Maturity Rating Framework. CIKM 2015: 1111-1120
- [c9]Shouling Ji, Weiqing Li, Neil Zhenqiang Gong, Prateek Mittal, Raheem A. Beyah: On Your Social Network De-anonymizablity: Quantification and Large Scale Evaluation with Seed Knowledge. NDSS 2015
- [c8]Bin Liu, Deguang Kong, Lei Cen, Neil Zhenqiang Gong, Hongxia Jin, Hui Xiong: Personalized Mobile App Recommendation: Reconciling App Functionality and User Privacy Preference. WSDM 2015: 315-324
- 2014
- [c7]Xuan Bao, Yilin Shen, Neil Zhenqiang Gong, Hongxia Jin, Bing Hu: Connect the dots by understanding user status and transitions. UbiComp Adjunct 2014: 361-366
- 2012
- [c6]Neil Zhenqiang Gong, Wenchang Xu, Ling Huang, Prateek Mittal, Emil Stefanov, Vyas Sekar, Dawn Song: Evolution of social-attribute networks: measurements, modeling, and implications using google+. Internet Measurement Conference 2012: 131-144
- [c5]Arvind Narayanan, Hristo S. Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, Dawn Song: On the Feasibility of Internet-Scale Author Identification. IEEE Symposium on Security and Privacy 2012: 300-314
- 2011
- [c4]Dongqu Chen, Guang-Zhong Sun, Neil Zhenqiang Gong: Efficient Approximate Top-k Query Algorithm Using Cube Index. APWeb 2011: 155-167
- [c3]Dongqu Chen, Guang-Zhong Sun, Neil Zhenqiang Gong, Xiaoqiang Zhong: Efficient Top-K Query Algorithms Using Density Index. ICAIC (1) 2011: 38-45
- 2010
- [c2]Neil Zhenqiang Gong, Guangzhong Sun, Xing Xie: Protecting Privacy in Location-Based Services Using K-Anonymity without Cloaked Region. Mobile Data Management 2010: 366-371
- 2009
- [c1]Neil Zhenqiang Gong, Guangzhong Sun, Jing Yuan, Yanjing Zhong: Efficient Top-k Query Algorithms Using K-Skyband Partition. Infoscale 2009: 288-305
Parts in Books or Collections
- 2020
- [p1]Jinyuan Jia, Neil Zhenqiang Gong: Defending Against Machine Learning Based Inference Attacks via Adversarial Examples: Opportunities and Challenges. Adaptive Autonomous Secure Cyber Systems 2020: 23-40
Informal and Other Publications
- 2024
- [i108]Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John C. Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yue Zhao: TrustLLM: Trustworthiness in Large Language Models. CoRR abs/2401.05561 (2024)
- [i107]Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong: Poisoning Federated Recommender Systems with Fake Users. CoRR abs/2402.11637 (2024)
- [i106]Yueqi Xie, Minghong Fang, Renjie Pi, Neil Zhenqiang Gong: GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis. CoRR abs/2402.13494 (2024)
- [i105]Wen Huang, Hongbin Liu, Minxin Guo, Neil Zhenqiang Gong: Visual Hallucinations of Multi-modal Large Language Models. CoRR abs/2402.14683 (2024)
- [i104]Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong: Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models. CoRR abs/2402.14977 (2024)
- [i103]Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong: Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks. CoRR abs/2403.03149 (2024)
- [i102]Yuepeng Hu, Zhengyuan Jiang, Moyang Guo, Neil Zhenqiang Gong: A Transfer Attack to Image Watermarks. CoRR abs/2403.15365 (2024)
- [i101]Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong: Optimization-based Prompt Injection Attack to LLM-as-a-Judge. CoRR abs/2403.17710 (2024)
- [i100]Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Neil Zhenqiang Gong: Watermark-based Detection and Attribution of AI-Generated Content. CoRR abs/2404.04254 (2024)
- [i99]Jiacheng Du, Jiahui Hu, Zhibo Wang, Peng Sun, Neil Zhenqiang Gong, Kui Ren: SoK: Gradient Leakage in Federated Learning. CoRR abs/2404.05403 (2024)
- [i98]Yueqi Xie, Minghong Fang, Neil Zhenqiang Gong: PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency. CoRR abs/2404.15611 (2024)
- [i97]Yixin Wu, Xinlei He, Pascal Berrang, Mathias Humbert, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Link Stealing Attacks Against Inductive Graph Neural Networks. CoRR abs/2405.05784 (2024)
- [i96]Yujie Zhang, Neil Gong, Michael K. Reiter: Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning. CoRR abs/2405.06206 (2024)
- [i95]Bo Hui, Haolin Yuan, Neil Zhenqiang Gong, Philippe Burlina, Yinzhi Cao: PLeak: Prompt Leaking Attacks against Large Language Model Applications. CoRR abs/2405.06823 (2024)
- [i94]Yuepeng Hu, Zhengyuan Jiang, Moyang Guo, Neil Zhenqiang Gong: Stable Signature is Unstable: Removing Image Watermark from Diffusion Models. CoRR abs/2405.07145 (2024)
- [i93]Hongbin Liu, Moyang Guo, Zhengyuan Jiang, Lun Wang, Neil Zhenqiang Gong: AudioMarkBench: Benchmarking Robustness of Audio Watermarking. CoRR abs/2406.06979 (2024)
- [i92]Minghong Fang, Zifan Zhang, Hairi, Prashant Khanduri, Jia Liu, Songtao Lu, Yuchen Liu, Neil Zhenqiang Gong: Byzantine-Robust Decentralized Federated Learning. CoRR abs/2406.10416 (2024)
- [i91]Roy Xie, Junlin Wang, Ruomin Huang, Minxing Zhang, Rong Ge, Jian Pei, Neil Zhenqiang Gong, Bhuwan Dhingra: ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods. CoRR abs/2406.15968 (2024)
- [i90]Dongping Chen, Jiawen Shi, Yao Wan, Pan Zhou, Neil Zhenqiang Gong, Lichao Sun: Self-Cognition in Large Language Models: An Exploratory Study. CoRR abs/2407.01505 (2024)
- [i89]Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Jinyuan Jia, Neil Zhenqiang Gong: Certifiably Robust Image Watermark. CoRR abs/2407.04086 (2024)
- [i88]Yuqi Jia, Minghong Fang, Hongbin Liu, Jinghuai Zhang, Neil Zhenqiang Gong: Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning. CoRR abs/2407.07221 (2024)
- [i87]Zedian Shao, Hongbin Liu, Yuepeng Hu, Neil Zhenqiang Gong: Refusing Safe Prompts for Multi-modal Large Language Models. CoRR abs/2407.09050 (2024)
- [i86]Mihai Christodorescu, Ryan Craven, Soheil Feizi, Neil Gong, Mia Hoffmann, Somesh Jha, Zhengyuan Jiang, Mehrdad Saberi Kamarposhti, John C. Mitchell, Jessica Newman, Emelia Probasco, Yanjun Qi, Khawaja Shams, Matthew Turek: Securing the Future of GenAI: Policy and Technology. CoRR abs/2407.12999 (2024)
- [i85]Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter: A General Framework for Data-Use Auditing of ML Models. CoRR abs/2407.15100 (2024)
- [i84]Yupei Liu, Yuqi Jia, Jinyuan Jia, Neil Zhenqiang Gong: Evaluating Large Language Model based Personal Information Extraction and Countermeasures. CoRR abs/2408.07291 (2024)
- [i83]Mihai Christodorescu, Ryan Craven, Soheil Feizi, Neil Zhenqiang Gong, Mia Hoffmann, Somesh Jha, Zhengyuan Jiang, Mehrdad Saberi Kamarposhti, John C. Mitchell, Jessica Newman, Emelia Probasco, Yanjun Qi, Khawaja Shams, Matthew Turek: Securing the Future of GenAI: Policy and Technology. IACR Cryptol. ePrint Arch. 2024: 855 (2024)
- 2023
- [i82]Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. CoRR abs/2301.02905 (2023)
- [i81]Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. CoRR abs/2303.01959 (2023)
- [i80]Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong: PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. CoRR abs/2303.14601 (2023)
- [i79]Zhengyuan Jiang, Jinghuai Zhang, Neil Zhenqiang Gong: Evading Watermark based Detection of AI-Generated Content. CoRR abs/2305.03807 (2023)
- [i78]Yuchen Yang, Bo Hui, Haolin Yuan, Neil Zhenqiang Gong, Yinzhi Cao: SneakyPrompt: Evaluating Robustness of Text-to-image Generative Models' Safety Filters. CoRR abs/2305.12082 (2023)
- [i77]Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, Xing Xie: PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. CoRR abs/2306.04528 (2023)
- [i76]Minglei Yin, Bin Liu, Neil Zhenqiang Gong, Xin Li: Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework. CoRR abs/2306.07992 (2023)
- [i75]Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie: DyVal: Graph-informed Dynamic Evaluation of Large Language Models. CoRR abs/2309.17167 (2023)
- [i74]Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun: MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use. CoRR abs/2310.03128 (2023)
- [i73]Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong: Prompt Injection Attacks and Defenses in LLM-Integrated Applications. CoRR abs/2310.12815 (2023)
- [i72]Yuqi Jia, Minghong Fang, Neil Zhenqiang Gong: Competitive Advantage Attacks to Decentralized Federated Learning. CoRR abs/2310.13862 (2023)
- [i71]Zonghao Huang, Neil Gong, Michael K. Reiter: Mendata: A Framework to Purify Manipulated Training Data. CoRR abs/2312.01281 (2023)
- [i70]Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen: Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents. CoRR abs/2312.01537 (2023)
- 2022
- [i69]Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: StolenEncoder: Stealing Pre-trained Encoders. CoRR abs/2201.05889 (2022)
- [i68]Binghui Wang, Tianchen Zhou, Song Li, Yinzhi Cao, Neil Zhenqiang Gong: GraphTrack: A Graph-based Cross-Device Tracking Framework. CoRR abs/2203.06833 (2022)
- [i67]Xiaoyu Cao, Neil Zhenqiang Gong: MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients. CoRR abs/2203.08669 (2022)
- [i66]Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. CoRR abs/2205.06401 (2022)
- [i65]Xiaoguang Li, Neil Zhenqiang Gong, Ninghui Li, Wenhai Sun, Hui Li: Fine-grained Poisoning Attacks to Local Differential Privacy Protocols for Mean and Variance Estimation. CoRR abs/2205.11782 (2022)
- [i64]Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. CoRR abs/2207.09209 (2022)
- [i63]Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang: Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning. CoRR abs/2207.12535 (2022)
- [i62]Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong: FLCert: Provably Secure Federated Learning against Poisoning Attacks. CoRR abs/2210.00584 (2022)
- [i61]Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. CoRR abs/2210.01111 (2022)
- [i60]Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong: FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. CoRR abs/2210.10936 (2022)
- [i59]Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao: Addressing Heterogeneity in Federated Learning via Distributional Transformation. CoRR abs/2210.15025 (2022)
- [i58]Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning. CoRR abs/2211.08229 (2022)
- [i57]Wei Sun, Tingjun Chen, Neil Gong: SoK: Inference Attacks and Defenses in Human-Centered Wireless Sensing. CoRR abs/2211.12087 (2022)
- [i56]Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. CoRR abs/2212.03334 (2022)
- [i55]Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley: AFLGuard: Byzantine-robust Asynchronous Federated Learning. CoRR abs/2212.06325 (2022)
- 2021
- [i54]Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao: Practical Blind Membership Inference Attack via Differential Comparisons. CoRR abs/2101.01341 (2021)
- [i53]Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu: Data Poisoning Attacks to Deep Learning Based Recommender Systems. CoRR abs/2101.02644 (2021)
- [i52]Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Provably Secure Federated Learning against Malicious Clients. CoRR abs/2102.01854 (2021)
- [i51]Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu: Data Poisoning Attacks and Defenses to Crowdsourcing Systems. CoRR abs/2102.09171 (2021)
- [i50]Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PointGuard: Provably Robust 3D Point Cloud Classification. CoRR abs/2103.03046 (2021)
- [i49]Yongji Wu, Lu Yin, Defu Lian, Mingyang Yin, Neil Zhenqiang Gong, Jingren Zhou, Hongxia Yang: Rethinking Lifelong Sequential Recommendation with Incremental Multi-Interest Attention. CoRR abs/2105.14060 (2021)
- [i48]Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang: Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation. CoRR abs/2105.14068 (2021)
- [i47]Xiaoyu Cao, Neil Zhenqiang Gong: Understanding the Security of Deepfake Detection. CoRR abs/2107.02045 (2021)
- [i46]Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong: BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. CoRR abs/2108.00352 (2021)
- [i45]Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning. CoRR abs/2108.11023 (2021)
- [i44]Yuankun Yang, Chenyue Liang, Hongyu He, Xiaoyu Cao, Neil Zhenqiang Gong: FaceGuard: Proactive Deepfake Detection. CoRR abs/2109.05673 (2021)
- [i43]Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: 10 Security and Privacy Problems in Self-Supervised Learning. CoRR abs/2110.15444 (2021)
- [i42]Yongji Wu, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. CoRR abs/2111.11534 (2021)
- [i41]Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen: HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance. CoRR abs/2111.11986 (2021)
- 2020
- [i40]Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing. CoRR abs/2002.03421 (2020)
- [i39]Minghong Fang, Neil Zhenqiang Gong, Jia Liu: Influence Function based Data Poisoning Attacks to Top-N Recommender Systems. CoRR abs/2002.08025 (2020)
- [i38]Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: On Certifying Robustness against Backdoor Attacks via Randomized Smoothing. CoRR abs/2002.11750 (2020)
- [i37]Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Stealing Links from Graph Neural Networks. CoRR abs/2005.02131 (2020)
- [i36]Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Backdoor Attacks to Graph Neural Networks. CoRR abs/2006.11165 (2020)
- [i35]Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks. CoRR abs/2008.04495 (2020)
- [i34]Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: On the Intrinsic Differential Privacy of Bagging. CoRR abs/2008.09845 (2020)
- [i33]Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation. CoRR abs/2008.10715 (2020)
- [i32]Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes. CoRR abs/2010.13751 (2020)
- [i31]Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong: Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations. CoRR abs/2011.07633 (2020)
- [i30]Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Nearest Neighbors against Data Poisoning Attacks. CoRR abs/2012.03765 (2020)
- [i29]Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks. CoRR abs/2012.13085 (2020)
- [i28]Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong: FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. CoRR abs/2012.13995 (2020)
- 2019
- [i27]Binghui Wang, Neil Zhenqiang Gong: Attacking Graph-based Classification via Manipulating the Graph Structure. CoRR abs/1903.00553 (2019)
- [i26]Jinyuan Jia, Neil Zhenqiang Gong: Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges. CoRR abs/1909.08526 (2019)
- [i25]Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong: MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples. CoRR abs/1909.10594 (2019)
- [i24]Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: IPGuard: Protecting the Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary. CoRR abs/1910.12903 (2019)
- [i23]Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Data Poisoning Attacks to Local Differential Privacy Protocols. CoRR abs/1911.02046 (2019)
- [i22]Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. CoRR abs/1911.11815 (2019)
- [i21]Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong: Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing. CoRR abs/1912.09899 (2019)
- 2018
- [i20]Binghui Wang, Neil Zhenqiang Gong: Stealing Hyperparameters in Machine Learning. CoRR abs/1802.05351 (2018)
- [i19]Binghui Wang, Jinyuan Jia, Le Zhang, Neil Zhenqiang Gong: Structure-based Sybil Detection in Social Networks via Local Rule-based Propagation. CoRR abs/1803.04321 (2018)
- [i18]Peng Gao, Binghui Wang, Neil Zhenqiang Gong, Sanjeev R. Kulkarni, Kurt Thomas, Prateek Mittal: SybilFuse: Combining Local Attributes with Global Structure to Perform Robust Sybil Detection. CoRR abs/1803.06772 (2018)
- [i17]Jinyuan Jia, Neil Zhenqiang Gong: AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning. CoRR abs/1805.04810 (2018)
- [i16]Binghui Wang, Le Zhang, Neil Zhenqiang Gong: SybilBlind: Detecting Fake Users in Online Social Networks without Manual Labels. CoRR abs/1806.04853 (2018)
- [i15]Chris Chao-Chun Cheng, Chen Shi, Neil Zhenqiang Gong, Yong Guan: EviHunter: Identifying Digital Evidence in the Permanent Storage of Android Devices via Static Analysis. CoRR abs/1808.06137 (2018)
- [i14]Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu: Poisoning Attacks to Graph-Based Recommender Systems. CoRR abs/1809.04127 (2018)
- [i13]Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation. CoRR abs/1812.01661 (2018)
- [i12]Jinyuan Jia, Neil Zhenqiang Gong: Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge. CoRR abs/1812.02055 (2018)
- 2017
- [i11]Neil Zhenqiang Gong, Altay Ozen, Yu Wu, Xiaoyu Cao, Eui Chul Richard Shin, Dawn Xiaodong Song, Hongxia Jin, Xuan Bao: PIANO: Proximity-based User Authentication on Voice-Powered Internet-of-Things Devices. CoRR abs/1704.03118 (2017)
- [i10]Xiaoyu Cao, Neil Zhenqiang Gong: Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification. CoRR abs/1709.05583 (2017)
- 2016
- [i9]Bin Liu, Yao Wu, Neil Zhenqiang Gong, Junjie Wu, Hui Xiong, Martin Ester: Structural Analysis of User Choices for Mobile App Recommendation. CoRR abs/1605.07980 (2016)
- [i8]Neil Zhenqiang Gong, Bin Liu: You are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors. CoRR abs/1606.05893 (2016)
- 2015
- [i7]Peng Gao, Neil Zhenqiang Gong, Sanjeev R. Kulkarni, Kurt Thomas, Prateek Mittal: SybilFrame: A Defense-in-Depth Framework for Structure-Based Sybil Detection. CoRR abs/1503.02985 (2015)
- [i6]Neil Zhenqiang Gong, Mathias Payer, Reza Moazzezi, Mario Frank: Towards Forgery-Resistant Touch-based Biometric Authentication on Mobile Devices. CoRR abs/1506.02294 (2015)
- 2014
- [i5]Neil Zhenqiang Gong, Di Wang: On the Security of Trustee-based Social Authentications. CoRR abs/1402.2699 (2014)
- 2013
- [i4]Neil Zhenqiang Gong, Wenchang Xu, Dawn Song: Reciprocity in Social Networks: Measurements, Predictions, and Implications. CoRR abs/1302.6309 (2013)
- [i3]Neil Zhenqiang Gong, Mario Frank, Prateek Mittal: SybilBelief: A Semi-supervised Learning Approach for Structure-based Sybil Detection. CoRR abs/1312.5035 (2013)
- 2012
- [i2]Neil Zhenqiang Gong, Wenchang Xu, Ling Huang, Prateek Mittal, Emil Stefanov, Vyas Sekar, Dawn Song: Evolution of Social-Attribute Networks: Measurements, Modeling, and Implications using Google+. CoRR abs/1209.0835 (2012)
- 2011
- [i1]Neil Zhenqiang Gong, Ameet Talwalkar, Lester W. Mackey, Ling Huang, Eui Chul Richard Shin, Emil Stefanov, Elaine Shi, Dawn Song: Predicting Links and Inferring Attributes using a Social-Attribute Network (SAN). CoRR abs/1112.3265 (2011)