Nicolas Papernot
Person information
- affiliation: University of Toronto, Canada
2020 – today
- 2024
- [j10] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Nicolas Papernot, Ross J. Anderson, Yarin Gal: AI models collapse when trained on recursively generated data. Nat. 631(8022): 755-759 (2024)
- [j9] Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot: From Differential Privacy to Bounds on Membership Inference: Less can be More. Trans. Mach. Learn. Res. 2024 (2024)
- [j8] Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot: Augment then Smooth: Reconciling Differential Privacy with Certified Robustness. Trans. Mach. Learn. Res. 2024 (2024)
- [c76] Sierra Calanda Wyllie, Ilia Shumailov, Nicolas Papernot: Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias. FAccT 2024: 2113-2147
- [c75] Ali Shahin Shamsabadi, Gefei Tan, Tudor Cebere, Aurélien Bellet, Hamed Haddadi, Nicolas Papernot, Xiao Wang, Adrian Weller: Confidential-DPproof: Confidential Proof of Differentially Private Training. ICLR 2024
- [c74] Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch: Memorization in Self-Supervised Learning Improves Downstream Generalization. ICLR 2024
- [c73] Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr: Auditing Private Prediction. ICML 2024
- [c72] David Glukhov, Ilia Shumailov, Yarin Gal, Nicolas Papernot, Vardan Papyan: Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches. ICML 2024
- [c71] Theresa Stadler, Bogdan Kulynych, Michael Gastpar, Nicolas Papernot, Carmela Troncoso: The Fundamental Limits of Least-Privilege Learning. ICML 2024
- [c70] Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot: Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD. USENIX Security Symposium 2024
- [c69] Mingyue Yang, David Lie, Nicolas Papernot: Exploring Strategies for Guiding Symbolic Analysis with Machine Learning Prediction. SANER 2024: 659-669
- [i108] Wenhao Wang, Muhammad Ahmad Kaleem, Adam Dziedzic, Michael Backes, Nicolas Papernot, Franziska Boenisch: Memorization in Self-Supervised Learning Improves Downstream Generalization. CoRR abs/2401.12233 (2024)
- [i107] Congyu Fang, Adam Dziedzic, Lin Zhang, Laura Oliva, Amol A. Verma, Fahad Razak, Nicolas Papernot, Bo Wang: Decentralised, Collaborative, and Privacy-preserving Machine Learning for Multi-Hospital Data. CoRR abs/2402.00205 (2024)
- [i106] Andrei Ioan Muresanu, Anvith Thudi, Michael R. Zhang, Nicolas Papernot: Unlearnable Algorithms for In-context Learning. CoRR abs/2402.00751 (2024)
- [i105] Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot: Regulation Games for Trustworthy Machine Learning. CoRR abs/2402.03540 (2024)
- [i104] Harry Langford, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot: Architectural Neural Backdoors from First Principles. CoRR abs/2402.06957 (2024)
- [i103] Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr: Auditing Private Prediction. CoRR abs/2402.09403 (2024)
- [i102] Theresa Stadler, Bogdan Kulynych, Nicolas Papernot, Michael Gastpar, Carmela Troncoso: The Fundamental Limits of Least-Privilege Learning. CoRR abs/2402.12235 (2024)
- [i101] Jamie Hayes, Ilia Shumailov, Eleni Triantafillou, Amr Khalifa, Nicolas Papernot: Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy. CoRR abs/2403.01218 (2024)
- [i100] Sierra Calanda Wyllie, Ilia Shumailov, Nicolas Papernot: Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias. CoRR abs/2403.07857 (2024)
- [i99] Tudor Cebere, Aurélien Bellet, Nicolas Papernot: Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model. CoRR abs/2405.14457 (2024)
- [i98] Pratyush Maini, Hengrui Jia, Nicolas Papernot, Adam Dziedzic: LLM Dataset Inference: Did you train on my dataset? CoRR abs/2406.06443 (2024)
- [i97] Ilia Shumailov, Jamie Hayes, Eleni Triantafillou, Guillermo Ortiz-Jiménez, Nicolas Papernot, Matthew Jagielski, Itay Yona, Heidi Howard, Eugene Bagdasaryan: UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI. CoRR abs/2407.00106 (2024)
- [i96] David Glukhov, Ziwen Han, Ilia Shumailov, Vardan Papyan, Nicolas Papernot: A False Sense of Safety: Unsafe Information Leakage in 'Safe' AI Responses. CoRR abs/2407.02551 (2024)
- 2023
- [j7] Ali Shahin Shamsabadi, Brij Mohan Lal Srivastava, Aurélien Bellet, Nathalie Vauquier, Emmanuel Vincent, Mohamed Maouche, Marc Tommasi, Nicolas Papernot: Differentially Private Speaker Anonymization. Proc. Priv. Enhancing Technol. 2023(1): 98-114 (2023)
- [j6] Adam Dziedzic, Christopher A. Choquette-Choo, Natalie Dullerud, Vinith M. Suriyakumar, Ali Shahin Shamsabadi, Muhammad Ahmad Kaleem, Somesh Jha, Nicolas Papernot, Xiao Wang: Private Multi-Winner Voting for Machine Learning. Proc. Priv. Enhancing Technol. 2023(1): 527-555 (2023)
- [j5] Ali Shahin Shamsabadi, Nicolas Papernot: Losing Less: A Loss for Differentially Private Deep Learning. Proc. Priv. Enhancing Technol. 2023(3): 307-320 (2023)
- [c68] Dudi Biton, Aditi Misra, Efrat Levy, Jaidip Kotak, Ron Bitton, Roei Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi: The Adversarial Implications of Variable-Time Inference. AISec@CCS 2023: 103-114
- [c67] Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot: Architectural Backdoors in Neural Networks. CVPR 2023: 24595-24604
- [c66] Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot: When the Curious Abandon Honesty: Federated Learning Is Not Private. EuroS&P 2023: 175-199
- [c65] Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot: Reconstructing Individual Data Points in Federated Learning Hardened with Differential Privacy and Secure Aggregation. EuroS&P 2023: 241-257
- [c64] Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot: Proof-of-Learning is Currently More Broken Than You Think. EuroS&P 2023: 797-816
- [c63] Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Guha Thakurta, Nicolas Papernot, Chiyuan Zhang: Measuring Forgetting of Memorized Training Examples. ICLR 2023
- [c62] Ali Shahin Shamsabadi, Sierra Calanda Wyllie, Nicholas Franzese, Natalie Dullerud, Sébastien Gambs, Nicolas Papernot, Xiao Wang, Adrian Weller: Confidential-PROFITT: Confidential PROof of FaIr Training of Trees. ICLR 2023
- [c61] Franziska Boenisch, Christopher Mühl, Adam Dziedzic, Roy Rinberg, Nicolas Papernot: Have it your way: Individualized Privacy Assignment for DP-SGD. NeurIPS 2023
- [c60] Haonan Duan, Adam Dziedzic, Nicolas Papernot, Franziska Boenisch: Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models. NeurIPS 2023
- [c59] Nicholas Franzese, Adam Dziedzic, Christopher A. Choquette-Choo, Mark R. Thomas, Muhammad Ahmad Kaleem, Stephan Rabanser, Congyu Fang, Somesh Jha, Nicolas Papernot, Xiao Wang: Robust and Actively Secure Serverless Collaborative Learning. NeurIPS 2023
- [c58] Stephan Rabanser, Anvith Thudi, Abhradeep Guha Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot: Training Private Models That Know What They Don't Know. NeurIPS 2023
- [c57] Shimaa Ahmed, Yash Wani, Ali Shahin Shamsabadi, Mohammad Yaghini, Ilia Shumailov, Nicolas Papernot, Kassem Fawaz: Tubes Among Us: Analog Attack on Automatic Speaker Identification. USENIX Security Symposium 2023: 265-282
- [i95] Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot: Is Federated Learning a Practical PET Yet? CoRR abs/2301.04017 (2023)
- [i94] Mohammad Yaghini, Patty Liu, Franziska Boenisch, Nicolas Papernot: Learning with Impartiality to Walk on the Pareto Frontier of Fairness, Privacy, and Utility. CoRR abs/2302.09183 (2023)
- [i93] Franziska Boenisch, Christopher Mühl, Adam Dziedzic, Roy Rinberg, Nicolas Papernot: Have it your way: Individualized Privacy Assignment for DP-SGD. CoRR abs/2303.17046 (2023)
- [i92] Rachel Cummings, Damien Desfontaines, David Evans, Roxana Geambasu, Matthew Jagielski, Yangsibo Huang, Peter Kairouz, Gautam Kamath, Sewoong Oh, Olga Ohrimenko, Nicolas Papernot, Ryan Rogers, Milan Shen, Shuang Song, Weijie J. Su, Andreas Terzis, Abhradeep Thakurta, Sergei Vassilvitskii, Yu-Xiang Wang, Li Xiong, Sergey Yekhanin, Da Yu, Huanyu Zhang, Wanrong Zhang: Challenges towards the Next Frontier in Privacy. CoRR abs/2304.06929 (2023)
- [i91] Haonan Duan, Adam Dziedzic, Nicolas Papernot, Franziska Boenisch: Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models. CoRR abs/2305.15594 (2023)
- [i90] Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross J. Anderson: The Curse of Recursion: Training on Generated Data Makes Models Forget. CoRR abs/2305.17493 (2023)
- [i89] Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot: Training Private Models That Know What They Don't Know. CoRR abs/2305.18393 (2023)
- [i88] Nicholas Boucher, Jenny Blessing, Ilia Shumailov, Ross Anderson, Nicolas Papernot: When Vision Fails: Text Attacks Against ViT and OCR. CoRR abs/2306.07033 (2023)
- [i87] Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot: Augment then Smooth: Reconciling Differential Privacy with Certified Robustness. CoRR abs/2306.08656 (2023)
- [i86] Anvith Thudi, Hengrui Jia, Casey Meehan, Ilia Shumailov, Nicolas Papernot: Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD. CoRR abs/2307.00310 (2023)
- [i85] David Glukhov, Ilia Shumailov, Yarin Gal, Nicolas Papernot, Vardan Papyan: LLM Censorship: A Machine Learning Challenge or a Computer Security Problem? CoRR abs/2307.10719 (2023)
- [i84] Dudi Biton, Aditi Misra, Efrat Levy, Jaidip Kotak, Ron Bitton, Roei Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi: The Adversarial Implications of Variable-Time Inference. CoRR abs/2309.02159 (2023)
- [i83] Avital Shafran, Ilia Shumailov, Murat A. Erdogdu, Nicolas Papernot: Beyond Labeling Oracles: What does it mean to steal ML models? CoRR abs/2310.01959 (2023)
- [i82] Olive Franzese, Adam Dziedzic, Christopher A. Choquette-Choo, Mark R. Thomas, Muhammad Ahmad Kaleem, Stephan Rabanser, Congyu Fang, Somesh Jha, Nicolas Papernot, Xiao Wang: Robust and Actively Secure Serverless Collaborative Learning. CoRR abs/2310.16678 (2023)
- 2022
- [j4] Ryan Sheatsley, Nicolas Papernot, Michael J. Weisman, Gunjan Verma, Patrick D. McDaniel: Adversarial examples for network intrusion detection systems. J. Comput. Secur. 30(5): 727-752 (2022)
- [c56] Nicolas Papernot: The Role of Randomization in Trustworthy Machine Learning. MTD@CCS 2022: 23-24
- [c55] Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot: Unrolling SGD: Understanding Factors Influencing Machine Unlearning. EuroS&P 2022: 303-319
- [c54] Natalie Dullerud, Karsten Roth, Kimia Hamidieh, Nicolas Papernot, Marzyeh Ghassemi: Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning. ICLR 2022
- [c53] Adam Dziedzic, Muhammad Ahmad Kaleem, Yu Shen Lu, Nicolas Papernot: Increasing the Cost of Model Extraction with Calibrated Proof of Work. ICLR 2022
- [c52] Hengrui Jia, Hongyu Chen, Jonas Guan, Ali Shahin Shamsabadi, Nicolas Papernot: A Zest of LIME: Towards Architecture-Independent Model Distances. ICLR 2022
- [c51] Nicolas Papernot, Thomas Steinke: Hyperparameter Tuning with Renyi Differential Privacy. ICLR 2022
- [c50] Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot: On the Difficulty of Defending Self-Supervised Learning against Model Extraction. ICML 2022: 5757-5776
- [c49] Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot: On the Limitations of Stochastic Pre-processing Defenses. NeurIPS 2022
- [c48] Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr: The Privacy Onion Effect: Memorization is Relative. NeurIPS 2022
- [c47] Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot: Dataset Inference for Self-Supervised Models. NeurIPS 2022
- [c46] Ali Shahin Shamsabadi, Mohammad Yaghini, Natalie Dullerud, Sierra Calanda Wyllie, Ulrich Aïvodji, Aisha Alaagib, Sébastien Gambs, Nicolas Papernot: Washing The Unwashable : On The (Im)possibility of Fairwashing Detection. NeurIPS 2022
- [c45] Jiaqi Wang, Roei Schuster, Ilia Shumailov, David Lie, Nicolas Papernot: In Differential Privacy, There is Truth: on Vote-Histogram Leakage in Ensemble Private Learning. NeurIPS 2022
- [c44] Nicholas Boucher, Ilia Shumailov, Ross Anderson, Nicolas Papernot: Bad Characters: Imperceptible NLP Attacks. SP 2022: 1987-2004
- [c43] Shimaa Ahmed, Ilia Shumailov, Nicolas Papernot, Kassem Fawaz: Towards More Robust Keyword Spotting for Voice Assistants. USENIX Security Symposium 2022: 2655-2672
- [c42] Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot: On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. USENIX Security Symposium 2022: 4007-4022
- [i81] Adam Dziedzic, Muhammad Ahmad Kaleem, Yu Shen Lu, Nicolas Papernot: Increasing the Cost of Model Extraction with Calibrated Proof of Work. CoRR abs/2201.09243 (2022)
- [i80] Shimaa Ahmed, Yash Wani, Ali Shahin Shamsabadi, Mohammad Yaghini, Ilia Shumailov, Nicolas Papernot, Kassem Fawaz: Pipe Overflow: Smashing Voice Authentication for Fun and Profit. CoRR abs/2202.02751 (2022)
- [i79] Ali Shahin Shamsabadi, Brij Mohan Lal Srivastava, Aurélien Bellet, Nathalie Vauquier, Emmanuel Vincent, Mohamed Maouche, Marc Tommasi, Nicolas Papernot: Differentially Private Speaker Anonymization. CoRR abs/2202.11823 (2022)
- [i78] Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot: Bounding Membership Inference. CoRR abs/2202.12232 (2022)
- [i77] Natalie Dullerud, Karsten Roth, Kimia Hamidieh, Nicolas Papernot, Marzyeh Ghassemi: Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning. CoRR abs/2203.12748 (2022)
- [i76] Adam Dziedzic, Nikita Dhawan, Muhammad Ahmad Kaleem, Jonas Guan, Nicolas Papernot: On the Difficulty of Defending Self-Supervised Learning against Model Extraction. CoRR abs/2205.07890 (2022)
- [i75] Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot: Selective Classification Via Neural Network Training Dynamics. CoRR abs/2205.13532 (2022)
- [i74] Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot: Architectural Backdoors in Neural Networks. CoRR abs/2206.07840 (2022)
- [i73] Yue Gao, Ilia Shumailov, Kassem Fawaz, Nicolas Papernot: On the Limitations of Stochastic Pre-processing Defenses. CoRR abs/2206.09491 (2022)
- [i72] Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr: The Privacy Onion Effect: Memorization is Relative. CoRR abs/2206.10469 (2022)
- [i71] Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert: Intrinsic Anomaly Detection for Multi-Variate Time Series. CoRR abs/2206.14342 (2022)
- [i70] Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang: Measuring Forgetting of Memorized Training Examples. CoRR abs/2207.00099 (2022)
- [i69] Maximilian Kaufmann, Yiren Zhao, Ilia Shumailov, Robert D. Mullins, Nicolas Papernot: Efficient Adversarial Training With Data Pruning. CoRR abs/2207.00694 (2022)
- [i68] Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot: p-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations. CoRR abs/2207.12545 (2022)
- [i67] Tejumade Afonja, Lucas Bourtoule, Varun Chandrasekaran, Sageev Oore, Nicolas Papernot: Generative Extraction of Audio Classifiers for Speaker Identification. CoRR abs/2207.12816 (2022)
- [i66] Congyu Fang, Hengrui Jia, Anvith Thudi, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Varun Chandrasekaran, Nicolas Papernot: On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning. CoRR abs/2208.03567 (2022)
- [i65] Adam Dziedzic, Haonan Duan, Muhammad Ahmad Kaleem, Nikita Dhawan, Jonas Guan, Yannis Cattan, Franziska Boenisch, Nicolas Papernot: Dataset Inference for Self-Supervised Models. CoRR abs/2209.09024 (2022)
- [i64] Jiaqi Wang, Roei Schuster, Ilia Shumailov, David Lie, Nicolas Papernot: In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning. CoRR abs/2209.10732 (2022)
- [i63] Yannis Cattan, Christopher A. Choquette-Choo, Nicolas Papernot, Abhradeep Thakurta: Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search. CoRR abs/2210.02156 (2022)
- [i62] Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, Nicolas Papernot: Verifiable and Provably Secure Machine Unlearning. CoRR abs/2210.09126 (2022)
- [i61] Adam Dziedzic, Christopher A. Choquette-Choo, Natalie Dullerud, Vinith Menon Suriyakumar, Ali Shahin Shamsabadi, Muhammad Ahmad Kaleem, Somesh Jha, Nicolas Papernot, Xiao Wang: Private Multi-Winner Voting for Machine Learning. CoRR abs/2211.15410 (2022)
- [i60] Roei Schuster, Jin Peng Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot: Learned Systems Security. CoRR abs/2212.10318 (2022)
- 2021
- [c41] Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Úlfar Erlingsson: Tempered Sigmoid Activations for Deep Learning with Differential Privacy. AAAI 2021: 9312-9321
- [c40] Jean-Baptiste Truong, Pratyush Maini, Robert J. Walls, Nicolas Papernot: Data-Free Model Extraction. CVPR 2021: 4771-4780
- [c39] Hui Xu, Guanpeng Li, Homa Alemzadeh, Rakesh Bobba, Varun Chandrasekaran, David E. Evans, Nicolas Papernot, Karthik Pattabiraman, Florian Tramèr: Fourth International Workshop on Dependable and Secure Machine Learning - DSML 2021. DSN Workshops 2021: xvi
- [c38] Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert D. Mullins, Ross Anderson: Sponge Examples: Energy-Latency Attacks on Neural Networks. EuroS&P 2021: 212-231
- [c37] Vinith M. Suriyakumar, Nicolas Papernot, Anna Goldenberg, Marzyeh Ghassemi: Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings. FAccT 2021: 723-734
- [c36] Christopher A. Choquette-Choo, Natalie Dullerud, Adam Dziedzic, Yunxiang Zhang, Somesh Jha, Nicolas Papernot, Xiao Wang: CaPC Learning: Confidential and Private Collaborative Learning. ICLR 2021
- [c35] Pratyush Maini, Mohammad Yaghini, Nicolas Papernot: Dataset Inference: Ownership Resolution in Machine Learning. ICLR 2021
- [c34] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot: Label-Only Membership Inference Attacks. ICML 2021: 1964-1974
- [c33] David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross J. Anderson: Markpainting: Adversarial Machine Learning meets Inpainting. ICML 2021: 5409-5419
- [c32] Mingyue Yang, David Lie, Nicolas Papernot: Accelerating Symbolic Analysis for Android Apps. ASE Workshops 2021: 47-52
- [c31] Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson: Manipulating SGD with Data Ordering Attacks. NeurIPS 2021: 18021-18032
- [c30] Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot: Machine Unlearning. SP 2021: 141-159
- [c29] Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor: SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems. SP 2021: 730-747
- [c28] Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini: Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. SP 2021: 866-882
- [c27] Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot: Proof-of-Learning: Definitions and Practice. SP 2021: 1039-1056
- [c26] Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot: Entangled Watermarks as a Defense against Model Extraction. USENIX Security Symposium 2021: 1937-1954
- [i59] Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini: Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. CoRR abs/2101.04535 (2021)
- [i58] Christopher A. Choquette-Choo, Natalie Dullerud, Adam Dziedzic, Yunxiang Zhang, Somesh Jha, Nicolas Papernot, Xiao Wang: CaPC Learning: Confidential and Private Collaborative Learning. CoRR abs/2102.05188 (2021)
- [i57] Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot: Proof-of-Learning: Definitions and Practice. CoRR abs/2103.05633 (2021)
- [i56] Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A. Erdogdu, Ross J. Anderson: Manipulating SGD with Data Ordering Attacks. CoRR abs/2104.09667 (2021)
- [i55] Pratyush Maini, Mohammad Yaghini, Nicolas Papernot: Dataset Inference: Ownership Resolution in Machine Learning. CoRR abs/2104.10706 (2021)
- [i54] David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross J. Anderson: Markpainting: Adversarial Machine Learning meets Inpainting. CoRR abs/2106.00660 (2021)
- [i53] Nicholas Boucher, Ilia Shumailov, Ross J. Anderson, Nicolas Papernot: Bad Characters: Imperceptible NLP Attacks. CoRR abs/2106.09898 (2021)
- [i52] Adelin Travers, Lorna Licollari, Guanghan Wang, Varun Chandrasekaran, Adam Dziedzic, David Lie, Nicolas Papernot: On the Exploitability of Audio Machine Learning Pipelines to Surreptitious Adversarial Examples. CoRR abs/2108.02010 (2021)
- [i51] Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot: SoK: Machine Learning Governance. CoRR abs/2109.10870 (2021)
- [i50] Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot: Unrolling SGD: Understanding Factors Influencing Machine Unlearning. CoRR abs/2109.13398 (2021)
- [i49] Gabriel Deza, Adelin Travers, Colin Rowat, Nicolas Papernot: Interpretability in Safety-Critical Financial Trading Systems. CoRR abs/2109.15112 (2021)
- [i48] Nicolas Papernot, Thomas Steinke: Hyperparameter Tuning with Renyi Differential Privacy. CoRR abs/2110.03620 (2021)
- [i47] Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot: On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. CoRR abs/2110.11891 (2021)
- [i46] Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot: When the Curious Abandon Honesty: Federated Learning Is Not Private. CoRR abs/2112.02918 (2021)
- 2020
- [c25] Homa Alemzadeh, Rakesh Bobba, Varun Chandrasekaran, David E. Evans, Nicolas Papernot, Karthik Pattabiraman, Florian Tramèr: Third International Workshop on Dependable and Secure Machine Learning - DSML 2020. DSN Workshops 2020: x
- [c24] Andrew Boutros, Mathew Hall, Nicolas Papernot, Vaughn Betz: Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs. FPT 2020: 103-111
- [c23] Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer: Thieves on Sesame Street! Model Extraction of BERT-based APIs. ICLR 2020
- [c22] Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. ICML 2020: 9561-9571
- [c21] Jieyu Lin, Kristina Dzeparoska, Sai Qian Zhang, Alberto Leon-Garcia, Nicolas Papernot: On the Robustness of Cooperative Multi-Agent Reinforcement Learning. SP (Workshops) 2020: 62-68
- [c20] Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot: High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security Symposium 2020: 1345-1362
- [i45] Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations. CoRR abs/2002.04599 (2020)
- [i44] Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot: On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping. CoRR abs/2002.11497 (2020)
- [i43] Hengrui Jia, Christopher A. Choquette-Choo, Nicolas Papernot: Entangled Watermarks as a Defense against Model Extraction. CoRR abs/2002.12200 (2020)
- [i42] Jieyu Lin, Kristina Dzeparoska, Sai Qian Zhang, Alberto Leon-Garcia, Nicolas Papernot: On the Robustness of Cooperative Multi-Agent Reinforcement Learning. CoRR abs/2003.03722 (2020)
- [i41] Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert D. Mullins, Ross J. Anderson: Sponge Examples: Energy-Latency Attacks on Neural Networks. CoRR abs/2006.03463 (2020)
- [i40] Hadi Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor: SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems. CoRR abs/2007.06622 (2020)
- [i39] Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Úlfar Erlingsson: Tempered Sigmoid Activations for Deep Learning with Differential Privacy. CoRR abs/2007.14191 (2020)
- [i38] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot: Label-Only Membership Inference Attacks. CoRR abs/2007.14321 (2020)
- [i37] Baiwu Zhang, Jin Peng Zhou, Ilia Shumailov, Nicolas Papernot: Not My Deepfake: Towards Plausible Deniability for Machine-Generated Media. CoRR abs/2008.09194 (2020)
- [i36] Vinith M. Suriyakumar, Nicolas Papernot, Anna Goldenberg, Marzyeh Ghassemi: Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings. CoRR abs/2010.06667 (2020)
- [i35] Ryan Sheatsley, Nicolas Papernot, Michael J. Weisman, Gunjan Verma, Patrick D. McDaniel: Adversarial Examples in Constrained Domains. CoRR abs/2011.01183 (2020)
- [i34] Jean-Baptiste Truong, Pratyush Maini, Robert J. Walls, Nicolas Papernot: Data-Free Model Extraction. CoRR abs/2011.14779 (2020)
- [i33] Andrew Boutros, Mathew Hall, Nicolas Papernot, Vaughn Betz: Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs. CoRR abs/2012.07242 (2020)
2010 – 2019
- 2019
- [j3] Dan Boneh, Andrew J. Grotto, Patrick D. McDaniel, Nicolas Papernot: How Relevant Is the Turing Test in the Age of Sophisbots? IEEE Secur. Priv. 17(6): 64-71 (2019)
- [c19] Nicholas Frosst, Nicolas Papernot, Geoffrey E. Hinton: Analyzing and Improving Representations with the Soft Nearest Neighbor Loss. ICML 2019: 2012-2020
- [c18] David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel: MixMatch: A Holistic Approach to Semi-Supervised Learning. NeurIPS 2019: 5050-5060
- [i32] Nicholas Frosst, Nicolas Papernot, Geoffrey E. Hinton: Analyzing and Improving Representations with the Soft Nearest Neighbor Loss. CoRR abs/1902.01889 (2019)
- [i31] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian J. Goodfellow, Aleksander Madry, Alexey Kurakin: On Evaluating Adversarial Robustness. CoRR abs/1902.06705 (2019)
- [i30] Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot: Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. CoRR abs/1903.10484 (2019)
- [i29] David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel: MixMatch: A Holistic Approach to Semi-Supervised Learning. CoRR abs/1905.02249 (2019)
- [i28] Dan Boneh, Andrew J. Grotto, Patrick D. McDaniel, Nicolas Papernot: How Relevant is the Turing Test in the Age of Sophisbots? CoRR abs/1909.00056 (2019)
- [i27] Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot: High-Fidelity Extraction of Neural Network Models. CoRR abs/1909.01838 (2019)
- [i26] Zhengli Zhao, Nicolas Papernot, Sameer Singh, Neoklis Polyzotis, Augustus Odena: Improving Differentially Private Models with Active Learning. CoRR abs/1910.01177 (2019)
- [i25] Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer: Thieves on Sesame Street! Model Extraction of BERT-based APIs. CoRR abs/1910.12366 (2019)
- [i24] Nicholas Carlini, Úlfar Erlingsson, Nicolas Papernot: Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications. CoRR abs/1910.13427 (2019)
- [i23] Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot: Machine Unlearning. CoRR abs/1912.03817 (2019)
- 2018
- [j2] Ian J. Goodfellow, Patrick D. McDaniel, Nicolas Papernot: Making machine learning robust against adversarial inputs. Commun. ACM 61(7): 56-66 (2018)
- [c17] Nicolas Papernot: A Marauder's Map of Security and Privacy in Machine Learning: An overview of current and future research directions for making machine learning secure and private. AISec@CCS 2018: 1
- [c16] Z. Berkay Celik, Patrick D. McDaniel, Rauf Izmailov, Nicolas Papernot, Ryan Sheatsley, Raquel Alvarez, Ananthram Swami: Detection under Privileged Information. AsiaCCS 2018: 199-206
- [c15] Nicolas Papernot, Patrick D. McDaniel, Arunesh Sinha, Michael P. Wellman: SoK: Security and Privacy in Machine Learning. EuroS&P 2018: 399-414
- [c14] Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson: Scalable Private Learning with PATE. ICLR 2018
- [c13] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, Patrick D. McDaniel: Ensemble Adversarial Training: Attacks and Defenses. ICLR (Poster) 2018
- [c12] Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alexey Kurakin, Ian J. Goodfellow, Jascha Sohl-Dickstein: Adversarial Examples that Fool both Computer Vision and Time-Limited Humans. NeurIPS 2018: 3914-3924
- [i22] Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian J. Goodfellow, Jascha Sohl-Dickstein: Adversarial Examples that Fool both Human and Computer Vision. CoRR abs/1802.08195 (2018)
- [i21] Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson: Scalable Private Learning with PATE. CoRR abs/1802.08908 (2018)
- [i20] Nicolas Papernot, Patrick D. McDaniel: Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. CoRR abs/1803.04765 (2018)
- [i19] Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge: Adversarial Vision Challenge. CoRR abs/1808.01976 (2018)
- [i18] Nicolas Papernot: A Marauder's Map of Security and Privacy in Machine Learning. CoRR abs/1811.01134 (2018)
- 2017
- [c11] Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami: Practical Black-Box Attacks against Machine Learning. AsiaCCS 2017: 506-519
- [c10] Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang: On the Protection of Private Information in Machine Learning Systems: Two Recent Approches. CSF 2017: 1-6
- [c9] Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick D. McDaniel: Adversarial Examples for Malware Detection. ESORICS (2) 2017: 62-79
- [c8] Sandy H. Huang, Nicolas Papernot, Ian J. Goodfellow, Yan Duan, Pieter Abbeel: Adversarial Attacks on Neural Network Policies. ICLR (Workshop) 2017
- [c7] Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, Kunal Talwar: Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. ICLR 2017
- [i17] Sandy H. Huang, Nicolas Papernot, Ian J. Goodfellow, Yan Duan, Pieter Abbeel: Adversarial Attacks on Neural Network Policies. CoRR abs/1702.02284 (2017)
- [i16] Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick D. McDaniel: On the (Statistical) Detection of Adversarial Examples. CoRR abs/1702.06280 (2017)
- [i15] Florian Tramèr, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, Patrick D. McDaniel: The Space of Transferable Adversarial Examples. CoRR abs/1704.03453 (2017)
- [i14] Nicolas Papernot, Patrick D. McDaniel: Extending Defensive Distillation. CoRR abs/1705.05264 (2017)
- [i13] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Dan Boneh, Patrick D. McDaniel: Ensemble Adversarial Training: Attacks and Defenses. CoRR abs/1705.07204 (2017)
- [i12] Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang: On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches. CoRR abs/1708.08022 (2017)
- 2016
- [j1]Patrick D. McDaniel, Nicolas Papernot, Z. Berkay Celik:
Machine Learning in Adversarial Settings. IEEE Secur. Priv. 14(3): 68-72 (2016) - [c6]Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami:
The Limitations of Deep Learning in Adversarial Settings. EuroS&P 2016: 372-387 - [c5]Z. Berkay Celik, Nan Hu, Yun Li, Nicolas Papernot, Patrick D. McDaniel, Robert J. Walls, Jeff Rowe, Karl N. Levitt, Novella Bartolini, Thomas F. La Porta, Ritu Chadha:
Mapping sample scenarios to operational models. MILCOM 2016: 7-12 - [c4]Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, Richard E. Harang:
Crafting adversarial input sequences for recurrent neural networks. MILCOM 2016: 49-54 - [c3]Nicolas Papernot, Patrick D. McDaniel, Xi Wu, Somesh Jha, Ananthram Swami:
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. IEEE Symposium on Security and Privacy 2016: 582-597 - [i11]Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami:
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples. CoRR abs/1602.02697 (2016) - [i10]Z. Berkay Celik, Patrick D. McDaniel, Rauf Izmailov, Nicolas Papernot, Ananthram Swami:
Building Better Detection with Privileged Information. CoRR abs/1603.09638 (2016) - [i9]Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, Richard E. Harang:
Crafting Adversarial Input Sequences for Recurrent Neural Networks. CoRR abs/1604.08275 (2016) - [i8]Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow:
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. CoRR abs/1605.07277 (2016) - [i7]Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick D. McDaniel:
Adversarial Perturbations Against Deep Neural Networks for Malware Classification. CoRR abs/1606.04435 (2016) - [i6]Nicolas Papernot, Patrick D. McDaniel:
On the Effectiveness of Defensive Distillation. CoRR abs/1607.05113 (2016) - [i5]Ian J. Goodfellow, Nicolas Papernot, Patrick D. McDaniel:
cleverhans v0.1: an adversarial machine learning library. CoRR abs/1610.00768 (2016) - [i4]Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian J. Goodfellow, Kunal Talwar:
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. CoRR abs/1610.05755 (2016) - [i3]Nicolas Papernot, Patrick D. McDaniel, Arunesh Sinha, Michael P. Wellman:
Towards the Science of Security and Privacy in Machine Learning. CoRR abs/1611.03814 (2016) - 2015
- [c2]Nicolas Papernot, Patrick D. McDaniel, Robert J. Walls:
Enforcing agile access control policies in relational databases using views. MILCOM 2015: 7-12 - [i2]Nicolas Papernot, Patrick D. McDaniel, Xi Wu, Somesh Jha, Ananthram Swami:
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. CoRR abs/1511.04508 (2015) - [i1]Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami:
The Limitations of Deep Learning in Adversarial Settings. CoRR abs/1511.07528 (2015) - 2014
- [c1]Patrick D. McDaniel, Trent Jaeger, Thomas F. La Porta, Nicolas Papernot, Robert J. Walls, Alexander Kott, Lisa M. Marvel, Ananthram Swami, Prasant Mohapatra, Srikanth V. Krishnamurthy, Iulian Neamtiu:
Security and Science of Agility. MTD@CCS 2014: 13-19
last updated on 2024-12-10 20:47 CET by the dblp team
all metadata released as open data under CC0 1.0 license