Niloofar Mireshghallah
2020 – today
- 2024
- [c23] Niloofar Mireshghallah, Justus Mattern, Sicun Gao, Reza Shokri, Taylor Berg-Kirkpatrick: Smaller Language Models are Better Zero-shot Machine-Generated Text Detectors. EACL (2) 2024: 278-293
- [c22] Tong Chen, Akari Asai, Niloofar Mireshghallah, Sewon Min, James Grimmelmann, Yejin Choi, Hannaneh Hajishirzi, Luke Zettlemoyer, Pang Wei Koh: CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation. EMNLP 2024: 15134-15158
- [c21] Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi: Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory. ICLR 2024
- [c20] Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, Fatemehsadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim: Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation. ICLR 2024
- [c19] Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L. Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi: Position: A Roadmap to Pluralistic Alignment. ICML 2024
- [c18] Mengke Zhang, Tianxing He, Tianle Wang, Lu Mi, Niloofar Mireshghallah, Binyi Chen, Hao Wang, Yulia Tsvetkov: LatticeGen: Hiding Generated Text in a Lattice for Privacy-Aware Large Language Model Generation on Cloud. NAACL-HLT (Findings) 2024: 2674-2690
- [i39] Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L. Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi: A Roadmap to Pluralistic Alignment. CoRR abs/2402.05070 (2024)
- [i38] Michael Duan, Anshuman Suri, Niloofar Mireshghallah, Sewon Min, Weijia Shi, Luke Zettlemoyer, Yulia Tsvetkov, Yejin Choi, David Evans, Hannaneh Hajishirzi: Do Membership Inference Attacks Work on Large Language Models? CoRR abs/2402.07841 (2024)
- [i37] Aly M. Kassem, Omar Mahmoud, Niloofar Mireshghallah, Hyunwoo Kim, Yulia Tsvetkov, Yejin Choi, Sherif Saad, Santu Rana: Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs. CoRR abs/2403.04801 (2024)
- [i36] Natalie Grace Brigham, Chongjiu Gao, Tadayoshi Kohno, Franziska Roesner, Niloofar Mireshghallah: Breaking News: Case Studies of Generative AI's Use in Journalism. CoRR abs/2406.13706 (2024)
- [i35] Liwei Jiang, Kavel Rao, Seungju Han, Allyson Ettinger, Faeze Brahman, Sachin Kumar, Niloofar Mireshghallah, Ximing Lu, Maarten Sap, Yejin Choi, Nouha Dziri: WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models. CoRR abs/2406.18510 (2024)
- [i34] Tong Chen, Akari Asai, Niloofar Mireshghallah, Sewon Min, James Grimmelmann, Yejin Choi, Hannaneh Hajishirzi, Luke Zettlemoyer, Pang Wei Koh: CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation. CoRR abs/2407.07087 (2024)
- [i33] Niloofar Mireshghallah, Maria Antoniak, Yash More, Yejin Choi, Golnoosh Farnadi: Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild. CoRR abs/2407.11438 (2024)
- [i32] Xuhui Zhou, Hyunwoo Kim, Faeze Brahman, Liwei Jiang, Hao Zhu, Ximing Lu, Frank Xu, Bill Yuchen Lin, Yejin Choi, Niloofar Mireshghallah, Ronan Le Bras, Maarten Sap: HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions. CoRR abs/2409.16427 (2024)
- [i31] Ximing Lu, Melanie Sclar, Skyler Hallinan, Niloofar Mireshghallah, Jiacheng Liu, Seungju Han, Allyson Ettinger, Liwei Jiang, Khyathi Raghavi Chandu, Nouha Dziri, Yejin Choi: AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text. CoRR abs/2410.04265 (2024)
- [i30] Ivoline C. Ngong, Joseph P. Near, Niloofar Mireshghallah: Differentially Private Learning Needs Better Model Initialization and Self-Distillation. CoRR abs/2410.17566 (2024)
- 2023
- [b1] Niloofar Mireshghallah: Auditing and Mitigating Safety Risks in Large Language Models. University of California, San Diego, USA, 2023
- [c17] Fatemehsadat Mireshghallah, Yu Su, Tatsunori Hashimoto, Jason Eisner, Richard Shin: Privacy-Preserving Domain Adaptation of Semantic Parsers. ACL (1) 2023: 4950-4970
- [c16] Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick: Membership Inference Attacks against Language Models via Neighbourhood Comparison. ACL (Findings) 2023: 11330-11343
- [c15] Jarad Forristal, Fatemehsadat Mireshghallah, Greg Durrett, Taylor Berg-Kirkpatrick: A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation. CoNLL 2023: 403-413
- [c14] Niloofar Mireshghallah, Nikolai Vogler, Junxian He, Omar Florez, Ahmed El-Kishky, Taylor Berg-Kirkpatrick: Simple Temporal Adaptation to Changing Label Sets: Hashtag Prediction via Dense KNN. EMNLP 2023: 7302-7311
- [i29] Fatemehsadat Mireshghallah, Justus Mattern, Sicun Gao, Reza Shokri, Taylor Berg-Kirkpatrick: Smaller Language Models are Better Black-box Machine-Generated Text Detectors. CoRR abs/2305.09859 (2023)
- [i28] Aman Priyanshu, Supriti Vijay, Ayush Kumar, Rakshit Naidu, Fatemehsadat Mireshghallah: Are Chatbots Ready for Privacy-Sensitive Applications? An Investigation into Input Regurgitation and Prompt-Induced Sanitization. CoRR abs/2305.15008 (2023)
- [i27] Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick: Membership Inference Attacks against Language Models via Neighbourhood Comparison. CoRR abs/2305.18462 (2023)
- [i26] Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, Fatemehsadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim: Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation. CoRR abs/2309.11765 (2023)
- [i25] Mengke Zhang, Tianxing He, Tianle Wang, Lu Mi, Fatemehsadat Mireshghallah, Binyi Chen, Hao Wang, Yulia Tsvetkov: LatticeGen: A Cooperative Framework which Hides Generated Text in a Lattice for Privacy-Aware Generation on Cloud. CoRR abs/2309.17157 (2023)
- [i24] Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes: Misusing Tools in Large Language Models With Visual Adversarial Examples. CoRR abs/2310.03185 (2023)
- [i23] Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi: Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory. CoRR abs/2310.17884 (2023)
- [i22] A. Feder Cooper, Katherine Lee, James Grimmelmann, Daphne Ippolito, Christopher Callison-Burch, Christopher A. Choquette-Choo, Niloofar Mireshghallah, Miles Brundage, David Mimno, Madiha Zahrah Choksi, Jack M. Balkin, Nicholas Carlini, Christopher De Sa, Jonathan Frankle, Deep Ganguli, Bryant Gipson, Andres Guadamuz, Swee Leng Harris, Abigail Z. Jacobs, Elizabeth Joh, Gautam Kamath, Mark Lemley, Cass Matthews, Christine McLeavey, Corynne McSherry, Milad Nasr, Paul Ohm, Adam Roberts, Tom Rubin, Pamela Samuelson, Ludwig Schubert, Kristen Vaccaro, Luis Villa, Felix Wu, Elana Zeide: Report of the 1st Workshop on Generative AI and Law. CoRR abs/2311.06477 (2023)
- [i21] Jarad Forristal, Niloofar Mireshghallah, Greg Durrett, Taylor Berg-Kirkpatrick: A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation. CoRR abs/2312.04510 (2023)
- 2022
- [c13] Fatemehsadat Mireshghallah, Kartik Goyal, Taylor Berg-Kirkpatrick: Mix and Match: Learning-free Controllable Text Generation using Energy Language Models. ACL (1) 2022: 401-415
- [c12] Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David E. Evans, Taylor Berg-Kirkpatrick: An Empirical Analysis of Memorization in Fine-tuned Autoregressive Language Models. EMNLP 2022: 1816-1826
- [c11] Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri: Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks. EMNLP 2022: 8332-8347
- [c10] Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr: What Does it Mean for a Language Model to Preserve Privacy? FAccT 2022: 2280-2292
- [c9] Fatemehsadat Mireshghallah, Vaishnavi Shrivastava, Milad Shokouhi, Taylor Berg-Kirkpatrick, Robert Sim, Dimitrios Dimitriadis: UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis. NAACL-HLT 2022: 3449-3456
- [c8] Fatemehsadat Mireshghallah, Arturs Backurs, Huseyin A. Inan, Lukas Wutschitz, Janardhan Kulkarni: Differentially Private Model Compression. NeurIPS 2022
- [i20] Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr: What Does it Mean for a Language Model to Preserve Privacy? CoRR abs/2202.05520 (2022)
- [i19] Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri: Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks. CoRR abs/2203.03929 (2022)
- [i18] Fatemehsadat Mireshghallah, Kartik Goyal, Taylor Berg-Kirkpatrick: Mix and Match: Learning-free Controllable Text Generation using Energy Language Models. CoRR abs/2203.13299 (2022)
- [i17] Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick: Memorization in NLP Fine-tuning Methods. CoRR abs/2205.12506 (2022)
- [i16] Fatemehsadat Mireshghallah, Arturs Backurs, Huseyin A. Inan, Lukas Wutschitz, Janardhan Kulkarni: Differentially Private Model Compression. CoRR abs/2206.01838 (2022)
- [i15] Fatemehsadat Mireshghallah, Nikolai Vogler, Junxian He, Omar Florez, Ahmed El-Kishky, Taylor Berg-Kirkpatrick: Non-Parametric Temporal Adaptation for Social Media Topic Classification. CoRR abs/2209.05706 (2022)
- [i14] Fatemehsadat Mireshghallah, Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner: Privacy-Preserving Domain Adaptation of Semantic Parsers. CoRR abs/2212.10520 (2022)
- 2021
- [c7] Fatemehsadat Mireshghallah, Taylor Berg-Kirkpatrick: Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness. EMNLP (1) 2021: 2009-2022
- [c6] Teddy Koker, Fatemehsadat Mireshghallah, Tom Titcombe, Georgios Kaissis: U-Noise: Learnable Noise Masks for Interpretable Image Segmentation. ICIP 2021: 394-398
- [c5] Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim: Privacy Regularization: Joint Privacy-Utility Optimization in Language Models. NAACL-HLT 2021: 3799-3807
- [c4] Fatemehsadat Mireshghallah, Mohammadkazem Taram, Ali Jalali, Ahmed Taha Elthakeb, Dean M. Tullsen, Hadi Esmaeilzadeh: Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy. WWW 2021: 669-680
- [i13] Teddy Koker, Fatemehsadat Mireshghallah, Tom Titcombe, Georgios Kaissis: U-Noise: Learnable Noise Masks for Interpretable Image Segmentation. CoRR abs/2101.05791 (2021)
- [i12] Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim: Privacy Regularization: Joint Privacy-Utility Optimization in Language Models. CoRR abs/2103.07567 (2021)
- [i11] Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, Fatemehsadat Mireshghallah, Andrew Trask: DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy? CoRR abs/2106.12576 (2021)
- [i10] Rakshit Naidu, Aman Priyanshu, Aadith Kumar, Sasikanth Kotti, Haofan Wang, Fatemehsadat Mireshghallah: When Differential Privacy Meets Interpretability: A Case Study. CoRR abs/2106.13203 (2021)
- [i9] Priyam Basu, Tiasa Singha Roy, Rakshit Naidu, Zümrüt Müftüoglu, Sahib Singh, Fatemehsadat Mireshghallah: Benchmarking Differential Privacy and Federated Learning for BERT Models. CoRR abs/2106.13973 (2021)
- [i8] Aman Priyanshu, Rakshit Naidu, Fatemehsadat Mireshghallah, Mohammad Malekzadeh: Efficient Hyperparameter Optimization for Differentially Private Deep Learning. CoRR abs/2108.03888 (2021)
- [i7] Fatemehsadat Mireshghallah, Taylor Berg-Kirkpatrick: Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness. CoRR abs/2109.04624 (2021)
- [i6] Fatemehsadat Mireshghallah, Vaishnavi Shrivastava, Milad Shokouhi, Taylor Berg-Kirkpatrick, Robert Sim, Dimitrios Dimitriadis: UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis. CoRR abs/2110.00135 (2021)
- 2020
- [j2] Ahmed T. Elthakeb, Prannoy Pilligundla, Fatemehsadat Mireshghallah, Amir Yazdanbakhsh, Hadi Esmaeilzadeh: ReLeQ: A Reinforcement Learning Approach for Automatic Deep Quantization of Neural Networks. IEEE Micro 40(5): 37-45 (2020)
- [c3] Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Ali Jalali, Dean M. Tullsen, Hadi Esmaeilzadeh: Shredder: Learning Noise Distributions to Protect Inference Privacy. ASPLOS 2020: 3-18
- [c2] Tom Farrand, Fatemehsadat Mireshghallah, Sahib Singh, Andrew Trask: Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy. PPMLP@CCS 2020: 15-19
- [c1] Ahmed Taha Elthakeb, Prannoy Pilligundla, Fatemeh Mireshghallah, Alexander Cloninger, Hadi Esmaeilzadeh: Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks. ICML 2020: 2880-2891
- [i5] Ahmed T. Elthakeb, Prannoy Pilligundla, Fatemehsadat Mireshghallah, Tarek Elgindi, Charles-Alban Deledalle, Hadi Esmaeilzadeh: Gradient-Based Deep Quantization of Neural Networks through Sinusoidal Adaptive Regularization. CoRR abs/2003.00146 (2020)
- [i4] Fatemehsadat Mireshghallah, Mohammadkazem Taram, Ali Jalali, Ahmed Taha Elthakeb, Dean M. Tullsen, Hadi Esmaeilzadeh: A Principled Approach to Learning Stochastic Representations for Privacy in Deep Neural Inference. CoRR abs/2003.12154 (2020)
- [i3] Fatemehsadat Mireshghallah, Mohammadkazem Taram, Praneeth Vepakomma, Abhishek Singh, Ramesh Raskar, Hadi Esmaeilzadeh: Privacy in Deep Learning: A Survey. CoRR abs/2004.12254 (2020)
- [i2] Tom Farrand, Fatemehsadat Mireshghallah, Sahib Singh, Andrew Trask: Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy. CoRR abs/2009.06389 (2020)
2010 – 2019
- 2019
- [j1] Fatemehsadat Mireshghallah, Mohammad Bakhshalipour, Mohammad Sadrosadati, Hamid Sarbazi-Azad: Energy-Efficient Permanent Fault Tolerance in Hard Real-Time Systems. IEEE Trans. Computers 68(10): 1539-1545 (2019)
- [i1] Fatemehsadat Mireshghallah, Mohammadkazem Taram, Prakash Ramrakhyani, Dean M. Tullsen, Hadi Esmaeilzadeh: Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge. CoRR abs/1905.11814 (2019)