{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,7]],"date-time":"2026-05-07T15:11:18Z","timestamp":1778166678599,"version":"3.51.4"},"reference-count":167,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2024,12,10]],"date-time":"2024-12-10T00:00:00Z","timestamp":1733788800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2022YFB3103100"],"award-info":[{"award-number":["2022YFB3103100"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62122032, and U23B2023"],"award-info":[{"award-number":["62122032, and U23B2023"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2025,4,30]]},"abstract":"<jats:p>The advent of artificial intelligence-generated content (AIGC) represents a pivotal moment in the evolution of information technology. With AIGC, it can be effortless to generate high-quality data that is challenging for the public to distinguish. Nevertheless, the proliferation of generative data across cyberspace brings security and privacy issues, including privacy leakages of individuals and media forgery for fraudulent purposes. Consequently, both academia and industry begin to emphasize the trustworthiness of generative data, successively providing a series of countermeasures for security and privacy. 
In this survey, we systematically review security and privacy on generative data in AIGC, analyzing them, for the first time, from the perspective of information security properties. Specifically, we distill the successful experiences of state-of-the-art countermeasures with respect to the foundational properties of privacy, controllability, authenticity, and compliance, respectively. Finally, we present representative benchmarks, provide a statistical analysis, and summarize potential exploration directions for each of these properties.<\/jats:p>","DOI":"10.1145\/3703626","type":"journal-article","created":{"date-parts":[[2024,11,7]],"date-time":"2024-11-07T10:59:21Z","timestamp":1730977161000},"page":"1-34","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":48,"title":["Security and Privacy on Generative Data in AIGC: A Survey"],"prefix":"10.1145","volume":"57","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5532-3999","authenticated-orcid":false,"given":"Tao","family":"Wang","sequence":"first","affiliation":[{"name":"Nanjing University of Aeronautics and Astronautics, Nanjing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8183-8435","authenticated-orcid":false,"given":"Yushu","family":"Zhang","sequence":"additional","affiliation":[{"name":"Nanjing University of Aeronautics and Astronautics, Nanjing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0574-2313","authenticated-orcid":false,"given":"Shuren","family":"Qi","sequence":"additional","affiliation":[{"name":"Nanjing University of Aeronautics and Astronautics, Nanjing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3631-1890","authenticated-orcid":false,"given":"Ruoyu","family":"Zhao","sequence":"additional","affiliation":[{"name":"Nanjing University of Aeronautics and Astronautics, Nanjing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6860-647X","authenticated-orcid":false,"given":"Zhihua","family":"Xia","sequence":"additional","affiliation":[{"name":"Jinan University, Guangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4067-8230","authenticated-orcid":false,"given":"Jian","family":"Weng","sequence":"additional","affiliation":[{"name":"Jinan University, Guangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2024,12,10]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"Romain Beaumont. 2022. Clip Retrieval: Easily compute clip embeddings and build a clip retrieval system with them. Retrieved from https:\/\/rom1504:github:io\/clip-retrieval\/"},{"key":"e_1_3_2_3_2","unstructured":"2022. DALL \\(\\cdot\\) E 2 Pre-training Mitigations. OpenAI. Retrieved from https:\/\/openai.com\/research\/dall-e-2-pre-training-mitigations"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW60793.2023.00051"},{"key":"e_1_3_2_5_2","unstructured":"2023. Disinformation Researchers Raise Alarms About A.I. Chatbots. New York Times. Retrieved from https:\/\/www.nytimes.com\/2023\/02\/08\/technology\/ai-chatbots-disinformation.html"},{"key":"e_1_3_2_6_2","unstructured":"2023. Fact Check: Was There an Explosion at the Pentagon? Newsweek. Retrieved from https:\/\/www.newsweek.com"},{"key":"e_1_3_2_7_2","unstructured":"2023. Interim Regulation on the Management of Generative Artificial intelligence (AI) Services. Cyberspace Administration of China. Retrieved from https:\/\/www.gov.cn\/zhengce\/zhengceku\/202307\/content_6891752.htm"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/2976749.2978318"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/3461702.3462624"},{"key":"e_1_3_2_10_2","first-page":"290","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201922)","author":"Alaa Ahmed","year":"2022","unstructured":"Ahmed Alaa, Boris Van Breugel, Evgeny S. 
Saveliev, and Mihaela van der Schaar. 2022. How faithful is your synthetic data? Sample-level metrics for evaluating and auditing generative models. In Proceedings of the International Conference on Machine Learning (ICML\u201922). PMLR, 290\u2013306."},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/AIVR56993.2022.00019"},{"key":"e_1_3_2_12_2","first-page":"1456","volume-title":"Proceedings of the 41st International Conference on Machine Learning (ICML\u201924)","volume":"235","author":"An Bang","year":"2024","unstructured":"Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, and Furong Huang. 2024. WAVES: Benchmarking the robustness of image watermarks. In Proceedings of the 41st International Conference on Machine Learning (ICML\u201924), Vol. 235. 1456\u20131492."},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01027"},{"key":"e_1_3_2_14_2","article-title":"The internal state of an LLM knows when it's lying","author":"Azaria Amos","year":"2023","unstructured":"Amos Azaria and Tom Mitchell. 2023. The internal state of an LLM knows when it's lying. arXiv preprint arXiv:2304.13734 (2023).","journal-title":"arXiv preprint arXiv:2304.13734"},{"key":"e_1_3_2_15_2","article-title":"Reducing training sample memorization in GANs by training with memorization rejection","author":"Bai Andrew","year":"2022","unstructured":"Andrew Bai, Cho-Jui Hsieh, Wendy Kan, and Hsuan-Tien Lin. 2022. Reducing training sample memorization in GANs by training with memorization rejection. 
arXiv preprint arXiv:2210.12231 (2022).","journal-title":"arXiv preprint arXiv:2210.12231"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445922"},{"key":"e_1_3_2_17_2","article-title":"Detecting generated images by real images only","author":"Bi Xiuli","year":"2023","unstructured":"Xiuli Bi, Bo Liu, Fan Yang, Bin Xiao, Weisheng Li, Gao Huang, and Pamela C. Cosman. 2023. Detecting generated images by real images only. arXiv preprint arXiv:2311.00962 (2023).","journal-title":"arXiv preprint arXiv:2311.00962"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2024.3356122"},{"key":"e_1_3_2_19_2","unstructured":"Abeba Birhane Vinay Prabhu Sang Han Vishnu Naresh Boddeti and Alexandra Sasha Luccioni. 2024. Into the LAION\u2019s Den: Investigating hate in multimodal datasets. Advan. Neural Inf. Process. Syst. 36 (2024)."},{"key":"e_1_3_2_20_2","article-title":"Multimodal datasets: Misogyny, pornography, and malignant stereotypes","author":"Birhane Abeba","year":"2021","unstructured":"Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. Multimodal datasets: Misogyny, pornography, and malignant stereotypes. arXiv preprint arXiv:2110.01963 (2021).","journal-title":"arXiv preprint arXiv:2110.01963"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP40001.2021.00019"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.01800"},{"key":"e_1_3_2_23_2","article-title":"Mitigating inappropriateness in image generation: Can there be value in reflecting the world\u2019s ugliness?","author":"Brack Manuel","year":"2023","unstructured":"Manuel Brack, Felix Friedrich, Patrick Schramowski, and Kristian Kersting. 2023. Mitigating inappropriateness in image generation: Can there be value in reflecting the world\u2019s ugliness? 
arXiv preprint arXiv:2305.18398 (2023).","journal-title":"arXiv preprint arXiv:2305.18398"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-19781-9_9"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1126\/science.aal4230"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467158"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.5555\/3620237.3620531"},{"key":"e_1_3_2_28_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201923)","author":"Carlini Nicholas","year":"2023","unstructured":"Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In Proceedings of the International Conference on Learning Representations (ICLR\u201923)."},{"key":"e_1_3_2_29_2","first-page":"2633","volume-title":"Proceedings of the 30th USENIX Security Symposium (USENIX Security\u201921)","author":"Carlini Nicholas","year":"2021","unstructured":"Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson et\u00a0al. 2021. Extracting training data from large language models. In Proceedings of the 30th USENIX Security Symposium (USENIX Security\u201921). 2633\u20132650."},{"key":"e_1_3_2_30_2","article-title":"Challenges and remedies to privacy and security in AIGC: Exploring the potential of privacy computing, blockchain, and beyond","author":"Chen Chuan","year":"2023","unstructured":"Chuan Chen, Zhenpeng Wu, Yanyi Lai, Wenlin Ou, Tianchi Liao, and Zibin Zheng. 2023. Challenges and remedies to privacy and security in AIGC: Exploring the potential of privacy computing, blockchain, and beyond. 
arXiv preprint arXiv:2306.00419 (2023).","journal-title":"arXiv preprint arXiv:2306.00419"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/MNET.2024.3366560"},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00641"},{"key":"e_1_3_2_33_2","volume-title":"Proceedings of the 12th International Conference on Learning Representations (ICLR\u201924)","author":"Chen Kai","year":"2024","unstructured":"Kai Chen, Chunwei Wang, Kuo Yang, Jianhua Han, Lanqing HONG, Fei Mi, Hang Xu, Zhengying Liu, Wenyong Huang, Zhenguo Li, Dit-Yan Yeung, and Lifeng Shang. 2024. Gaining wisdom from setbacks: Aligning large language models via mistake analysis. In Proceedings of the 12th International Conference on Learning Representations (ICLR\u201924). Retrieved from https:\/\/openreview.net\/forum?id=aA33A70IO6"},{"key":"e_1_3_2_34_2","first-page":"44502","volume-title":"Advances in Neural Information Processing Systems","author":"Chen Shiqi","year":"2023","unstructured":"Shiqi Chen, Yiran Zhao, Jinghan Zhang, I-Chun Chern, Siyang Gao, Pengfei Liu, and Junxian He. 2023. FELM: Benchmarking factuality evaluation of large language models. In Advances in Neural Information Processing Systems, Vol. 36. Curran Associates Inc., 44502\u201344523."},{"key":"e_1_3_2_35_2","article-title":"GPT-Sentinel: Distinguishing human and ChatGPT generated content","author":"Chen Yutian","year":"2023","unstructured":"Yutian Chen, Hao Kang, Vivian Zhai, Liangze Li, Rita Singh, and Bhiksha Ramakrishnan. 2023. GPT-Sentinel: Distinguishing human and ChatGPT generated content. arXiv preprint arXiv:2305.07969 (2023).","journal-title":"arXiv preprint arXiv:2305.07969"},{"issue":"3","key":"e_1_3_2_36_2","first-page":"4","article-title":"Efficient video generation on complex datasets","volume":"2","author":"Clark Aidan","year":"2019","unstructured":"Aidan Clark, Jeff Donahue, and Karen Simonyan. 2019. Efficient video generation on complex datasets. 
arXiv preprint arXiv:1907.06571 2, 3 (2019), 4.","journal-title":"arXiv preprint arXiv:1907.06571"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP49357.2023.10095167"},{"key":"e_1_3_2_38_2","article-title":"DiffusionShield: A watermark for copyright protection against generative diffusion models","author":"Cui Yingqian","year":"2023","unstructured":"Yingqian Cui, Jie Ren, Han Xu, Pengfei He, Hui Liu, Lichao Sun, and Jiliang Tang. 2023. DiffusionShield: A watermark for copyright protection against generative diffusion models. arXiv preprint arXiv:2306.04642 (2023).","journal-title":"arXiv preprint arXiv:2306.04642"},{"key":"e_1_3_2_39_2","article-title":"Differentially private diffusion models","author":"Dockhorn Tim","year":"2022","unstructured":"Tim Dockhorn, Tianshi Cao, Arash Vahdat, and Karsten Kreis. 2022. Differentially private diffusion models. arXiv preprint arXiv:2210.09929 (2022).","journal-title":"arXiv preprint arXiv:2210.09929"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3592572.3592846"},{"key":"e_1_3_2_41_2","article-title":"Improving factuality and reasoning in language models through multiagent debate","author":"Du Yilun","year":"2023","unstructured":"Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325 (2023).","journal-title":"arXiv preprint arXiv:2305.14325"},{"key":"e_1_3_2_42_2","article-title":"Truthful AI: Developing and governing AI that does not lie","author":"Evans Owain","year":"2021","unstructured":"Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, and William Saunders. 2021. Truthful AI: Developing and governing AI that does not lie. 
arXiv preprint arXiv:2110.06674 (2021).","journal-title":"arXiv preprint arXiv:2110.06674"},{"key":"e_1_3_2_43_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0251415"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00663"},{"key":"e_1_3_2_45_2","article-title":"Catch you everything everywhere: Guarding textual inversion via concept watermarking","author":"Feng Weitao","year":"2023","unstructured":"Weitao Feng, Jiyan He, Jie Zhang, Tianwei Zhang, Wenbo Zhou, Weiming Zhang, and Nenghai Yu. 2023. Catch you everything everywhere: Guarding textual inversion via concept watermarking. arXiv preprint arXiv:2309.05940 (2023).","journal-title":"arXiv preprint arXiv:2309.05940"},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.02053"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00230"},{"key":"e_1_3_2_48_2","article-title":"Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned","author":"Ganguli Deep","year":"2022","unstructured":"Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse et\u00a0al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858 (2022).","journal-title":"arXiv preprint arXiv:2209.07858"},{"key":"e_1_3_2_49_2","article-title":"GLTR: Statistical detection and visualization of generated text","author":"Gehrmann Sebastian","year":"2019","unstructured":"Sebastian Gehrmann, Hendrik Strobelt, and Alexander M. Rush. 2019. GLTR: Statistical detection and visualization of generated text. 
arXiv preprint arXiv:1906.04043 (2019).","journal-title":"arXiv preprint arXiv:1906.04043"},{"key":"e_1_3_2_50_2","article-title":"Differentially private diffusion models generate useful synthetic images","author":"Ghalebikesabi Sahra","year":"2023","unstructured":"Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Samuel L. Smith, Olivia Wiles, and Borja Balle. 2023. Differentially private diffusion models generate useful synthetic images. arXiv preprint arXiv:2302.13861 (2023).","journal-title":"arXiv preprint arXiv:2302.13861"},{"key":"e_1_3_2_51_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01383"},{"key":"e_1_3_2_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2020.3027617"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/3422622"},{"key":"e_1_3_2_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330955"},{"key":"e_1_3_2_55_2","volume-title":"Proceedings of the 12th International Conference on Learning Representations (ICLR\u201924)","author":"Gou Zhibin","year":"2024","unstructured":"Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. 2024. CRITIC: Large language models can self-correct with tool-interactive critiquing. In Proceedings of the 12th International Conference on Learning Representations (ICLR\u201924). Retrieved from https:\/\/openreview.net\/forum?id=Sx038qxjek"},{"key":"e_1_3_2_56_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-031-66431-1_43"},{"key":"e_1_3_2_57_2","article-title":"How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection","author":"Guo Biyang","year":"2023","unstructured":"Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. 2023. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection. 
arXiv preprint arXiv:2301.07597 (2023).","journal-title":"arXiv preprint arXiv:2301.07597"},{"key":"e_1_3_2_58_2","doi-asserted-by":"publisher","DOI":"10.1145\/3658644.3670344"},{"key":"e_1_3_2_59_2","first-page":"29217","article-title":"Pile of Law: Learning responsible data filtering from the law and a 256GB open-source legal dataset","volume":"35","author":"Henderson Peter","year":"2022","unstructured":"Peter Henderson, Mark Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, and Daniel Ho. 2022. Pile of Law: Learning responsible data filtering from the law and a 256GB open-source legal dataset. Advan. Neural Inf. Process. Syst. 35 (2022), 29217\u201329234.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_2_60_2","article-title":"Selective amnesia: A continual learning approach to forgetting in deep generative models","volume":"36","author":"Heng Alvin","year":"2024","unstructured":"Alvin Heng and Harold Soh. 2024. Selective amnesia: A continual learning approach to forgetting in deep generative models. Advan. Neural Inf. Process. Syst. 36 (2024), 17170\u201317194.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_2_61_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2023.3235969"},{"key":"e_1_3_2_62_2","first-page":"6840","article-title":"Denoising diffusion probabilistic models","volume":"33","author":"Ho Jonathan","year":"2020","unstructured":"Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advan. Neural Inf. Process. Syst. 33 (2020), 6840\u20136851.","journal-title":"Advan. Neural Inf. Process. 
Syst."},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3487890"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV56688.2023.00138"},{"key":"e_1_3_2_65_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV48630.2021.00339"},{"key":"e_1_3_2_66_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW59228.2023.00096"},{"key":"e_1_3_2_67_2","article-title":"AIGC-Chain: A blockchain-enabled full lifecycle recording system for AIGC product copyright management","author":"Jiang Jiajia","year":"2024","unstructured":"Jiajia Jiang, Moting Su, Xiangli Xiao, Yushu Zhang, and Yuming Fang. 2024. AIGC-Chain: A blockchain-enabled full lifecycle recording system for AIGC product copyright management. arXiv preprint arXiv:2406.14966 (2024).","journal-title":"arXiv preprint arXiv:2406.14966"},{"key":"e_1_3_2_68_2","first-page":"1009","volume-title":"Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924)","author":"Joslin Matthew","year":"2024","unstructured":"Matthew Joslin, Xian Wang, and Shuang Hao. 2024. Double face: Leveraging user intelligence to characterize and recognize AI-synthesized faces. In Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924). 1009\u20131026."},{"key":"e_1_3_2_69_2","first-page":"10697","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201922)","author":"Kandpal Nikhil","year":"2022","unstructured":"Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating training data mitigates privacy risks in language models. In Proceedings of the International Conference on Machine Learning (ICML\u201922). PMLR, 10697\u201310707."},{"key":"e_1_3_2_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_3_2_71_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01223"},{"key":"e_1_3_2_72_2","article-title":"DeepFakes: A new threat to face recognition? 
Assessment and detection","author":"Korshunov Pavel","year":"2018","unstructured":"Pavel Korshunov and S\u00e9bastien Marcel. 2018. DeepFakes: A new threat to face recognition? Assessment and detection. arXiv preprint arXiv:1812.08685 (2018).","journal-title":"arXiv preprint arXiv:1812.08685"},{"key":"e_1_3_2_73_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.02074"},{"key":"e_1_3_2_74_2","first-page":"34586","article-title":"Factuality enhanced language models for open-ended text generation","volume":"35","author":"Lee Nayeon","year":"2022","unstructured":"Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale N. Fung, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Factuality enhanced language models for open-ended text generation. Advan. Neural Inf. Process. Syst. 35 (2022), 34586\u201334599.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_2_75_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.4"},{"key":"e_1_3_2_76_2","first-page":"4837","volume-title":"Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924)","author":"Li Kecen","year":"2024","unstructured":"Kecen Li, Chen Gong, Zhixiang Li, Yuzhong Zhao, Xinwen Hou, and Tianhao Wang. 2024. PrivImage: Differentially private synthetic image generation using diffusion models with semantic-aware pretraining. In Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924). 4837\u20134854."},{"key":"e_1_3_2_77_2","volume-title":"Proceedings of the USENIX Security Symposium (USENIX Security\u201923)","author":"Li Zheng","year":"2023","unstructured":"Zheng Li, Ning Yu, Ahmed Salem, Michael Backes, Mario Fritz, and Yang Zhang. 2023. UnGANable: Defending against GAN-based face manipulation. In Proceedings of the USENIX Security Symposium (USENIX Security\u201923). 
USENIX."},{"key":"e_1_3_2_78_2","first-page":"20763","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201923)","author":"Liang Chumeng","year":"2023","unstructured":"Chumeng Liang, Xiaoyu Wu, Yang Hua, Jiaru Zhang, Yiming Xue, Tao Song, Zhengui Xue, Ruhui Ma, and Haibing Guan. 2023. Adversarial example does good: Preventing painting imitation from diffusion models via adversarial examples. In Proceedings of the International Conference on Machine Learning (ICML\u201923). PMLR, 20763\u201320786."},{"key":"e_1_3_2_79_2","article-title":"Detecting multimedia generated by large AI models: A survey","author":"Lin Li","year":"2024","unstructured":"Li Lin, Neeraj Gupta, Yue Zhang, Hainan Ren, Chun-Hao Liu, Feng Ding, Xin Wang, Xin Li, Luisa Verdoliva, and Shu Hu. 2024. Detecting multimedia generated by large AI models: A survey. arXiv preprint arXiv:2402.00045 (2024).","journal-title":"arXiv preprint arXiv:2402.00045"},{"key":"e_1_3_2_80_2","article-title":"A survey of text watermarking in the era of large language models","author":"Liu Aiwei","year":"2023","unstructured":"Aiwei Liu, Leyi Pan, Yijian Lu, Jingjing Li, Xuming Hu, Lijie Wen, Irwin King, and Philip S. Yu. 2023. A survey of text watermarking in the era of large language models. 
arXiv preprint arXiv:2312.07913 (2023).","journal-title":"arXiv preprint arXiv:2312.07913"},{"key":"e_1_3_2_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/3436755"},{"key":"e_1_3_2_82_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV56688.2023.00467"},{"key":"e_1_3_2_83_2","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2024.24200"},{"key":"e_1_3_2_84_2","doi-asserted-by":"publisher","DOI":"10.1145\/3477495.3532044"},{"key":"e_1_3_2_85_2","doi-asserted-by":"publisher","DOI":"10.1109\/MNET.2024.3352917"},{"key":"e_1_3_2_86_2","article-title":"DiffProtect: Generate adversarial examples with diffusion models for facial privacy protection","author":"Liu Jiang","year":"2023","unstructured":"Jiang Liu, Chun Pong Lau, and Rama Chellappa. 2023. DiffProtect: Generate adversarial examples with diffusion models for facial privacy protection. arXiv preprint arXiv:2305.13625 (2023).","journal-title":"arXiv preprint arXiv:2305.13625"},{"key":"e_1_3_2_87_2","doi-asserted-by":"publisher","DOI":"10.1109\/MWC.003.2300053"},{"key":"e_1_3_2_88_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.02286"},{"key":"e_1_3_2_89_2","article-title":"Watermarking diffusion model","author":"Liu Yugeng","year":"2023","unstructured":"Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, and Yang Zhang. 2023. Watermarking diffusion model. arXiv preprint arXiv:2305.12502 (2023).","journal-title":"arXiv preprint arXiv:2305.12502"},{"key":"e_1_3_2_90_2","article-title":"Check me if you can: Detecting ChatGPT-generated academic writing using CheckGPT","author":"Liu Zeyan","year":"2023","unstructured":"Zeyan Liu, Zijun Yao, Fengjun Li, and Bo Luo. 2023. Check me if you can: Detecting ChatGPT-generated academic writing using CheckGPT. 
arXiv preprint arXiv:2306.05524 (2023).","journal-title":"arXiv preprint arXiv:2306.05524"},{"key":"e_1_3_2_91_2","doi-asserted-by":"publisher","DOI":"10.1109\/TASLP.2023.3294715"},{"key":"e_1_3_2_92_2","article-title":"Machine learning for synthetic data generation: A review","author":"Lu Yingzhou","year":"2023","unstructured":"Yingzhou Lu, Huazheng Wang, and Wenqi Wei. 2023. Machine learning for synthetic data generation: A review. arXiv preprint arXiv:2302.04062 (2023).","journal-title":"arXiv preprint arXiv:2302.04062"},{"key":"e_1_3_2_93_2","article-title":"Seeing is not always believing: A quantitative study on human perception of AI-generated images","author":"Lu Zeyu","year":"2023","unstructured":"Zeyu Lu, Di Huang, Lei Bai, Xihui Liu, Jingjing Qu, and Wanli Ouyang. 2023. Seeing is not always believing: A quantitative study on human perception of AI-generated images. arXiv preprint arXiv:2304.13023 (2023).","journal-title":"arXiv preprint arXiv:2304.13023"},{"key":"e_1_3_2_94_2","first-page":"25435","volume-title":"Advances in Neural Information Processing Systems","author":"Lu Zeyu","year":"2023","unstructured":"Zeyu Lu, Di Huang, Lei Bai, Jingjing Qu, Chengyue Wu, Xihui Liu, and Wanli Ouyang. 2023. Seeing is not always believing: Benchmarking human and model perception of AI-generated images. In Advances in Neural Information Processing Systems, Vol. 36. Curran Associates Inc., 25435\u201325447."},{"key":"e_1_3_2_95_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2023\/803"},{"key":"e_1_3_2_96_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2023.3290175"},{"key":"e_1_3_2_97_2","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2022.3233580"},{"key":"e_1_3_2_98_2","volume-title":"Proceedings of the 2nd Workshop on New Frontiers in Adversarial Machine Learning (ICML\u201923)","author":"Ma RuiPeng","year":"2023","unstructured":"RuiPeng Ma, Jinhao Duan, Fei Kong, Xiaoshuang Shi, and Kaidi Xu. 2023. 
Exposing the fake: Effective diffusion-generated images detection. In Proceedings of the 2nd Workshop on New Frontiers in Adversarial Machine Learning (ICML\u201923). Retrieved from https:\/\/openreview.net\/forum?id=7R62e4Wgim"},{"key":"e_1_3_2_99_2","article-title":"Generative watermarking against unauthorized subject-driven image synthesis","author":"Ma Yihan","year":"2023","unstructured":"Yihan Ma, Zhengyu Zhao, Xinlei He, Zheng Li, Michael Backes, and Yang Zhang. 2023. Generative watermarking against unauthorized subject-driven image synthesis. arXiv preprint arXiv:2306.07754 (2023).","journal-title":"arXiv preprint arXiv:2306.07754"},{"key":"e_1_3_2_100_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i12.26752"},{"key":"e_1_3_2_101_2","volume-title":"Proceedings of the International Conference on Artificial Intelligence and Statistics","author":"Meehan Casey","year":"2020","unstructured":"Casey Meehan, Kamalika Chaudhuri, and Sanjoy Dasgupta. 2020. A non-parametric test to detect data-copying in generative models. In Proceedings of the International Conference on Artificial Intelligence and Statistics."},{"key":"e_1_3_2_102_2","first-page":"3086","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV\u201923)","author":"Melzi Pietro","year":"2023","unstructured":"Pietro Melzi, Christian Rathgeb, Ruben Tolosana, Ruben Vera-Rodriguez, Dominik Lawatsch, Florian Domin, and Maxim Schaubert. 2023. GANDiffFace: Controllable generation of synthetic datasets for face recognition with realistic variations. In Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV\u201923). 
3086\u20133095."},{"key":"e_1_3_2_103_2","doi-asserted-by":"publisher","DOI":"10.1145\/3425780"},{"key":"e_1_3_2_104_2","first-page":"24950","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201923)","author":"Mitchell Eric","year":"2023","unstructured":"Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn. 2023. DetectGPT: Zero-shot machine-generated text detection using probability curvature. In Proceedings of the International Conference on Machine Learning (ICML\u201923). PMLR, 24950\u201324962."},{"key":"e_1_3_2_105_2","article-title":"GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models","author":"Nichol Alex","year":"2021","unstructured":"Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. 2021. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 (2021).","journal-title":"arXiv preprint arXiv:2112.10741"},{"key":"e_1_3_2_106_2","article-title":"To ChatGPT, or not to ChatGPT: That is the question!","author":"Pegoraro Alessandro","year":"2023","unstructured":"Alessandro Pegoraro, Kavita Kumari, Hossein Fereidooni, and Ahmad-Reza Sadeghi. 2023. To ChatGPT, or not to ChatGPT: That is the question! arXiv preprint arXiv:2304.01487 (2023).","journal-title":"arXiv preprint arXiv:2304.01487"},{"key":"e_1_3_2_107_2","article-title":"You are what you write: Preserving privacy in the era of large language models","author":"Plant Richard","year":"2022","unstructured":"Richard Plant, Valerio Giuffrida, and Dimitra Gkatzia. 2022. You are what you write: Preserving privacy in the era of large language models. 
arXiv preprint arXiv:2204.09391 (2022).","journal-title":"arXiv preprint arXiv:2204.09391"},{"key":"e_1_3_2_108_2","article-title":"Hierarchical invariance for robust and interpretable vision tasks at larger scales","author":"Qi Shuren","year":"2024","unstructured":"Shuren Qi, Yushu Zhang, Chao Wang, Zhihua Xia, Jian Weng, and Xiaochun Cao. 2024. Hierarchical invariance for robust and interpretable vision tasks at larger scales. arXiv preprint arXiv:2402.15430 (2024).","journal-title":"arXiv preprint arXiv:2402.15430"},{"key":"e_1_3_2_109_2","doi-asserted-by":"publisher","DOI":"10.1145\/3576915.3616679"},{"key":"e_1_3_2_110_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP49359.2023.10222083"},{"key":"e_1_3_2_111_2","article-title":"Red-teaming the stable diffusion safety filter","author":"Rando Javier","year":"2022","unstructured":"Javier Rando, Daniel Paleka, David Lindner, Lennard Heim, and Florian Tram\u00e8r. 2022. Red-teaming the stable diffusion safety filter. arXiv preprint arXiv:2210.04610 (2022).","journal-title":"arXiv preprint arXiv:2210.04610"},{"key":"e_1_3_2_112_2","article-title":"SynSciPass: Detecting appropriate uses of scientific text generation","author":"Rosati Domenic","year":"2022","unstructured":"Domenic Rosati. 2022. SynSciPass: Detecting appropriate uses of scientific text generation. arXiv preprint arXiv:2209.03742 (2022).","journal-title":"arXiv preprint arXiv:2209.03742"},{"key":"e_1_3_2_113_2","unstructured":"Alaa Saleh Roberto Morabito Sasu Tarkoma Susanna Pirttikangas and Lauri Lov\u00e9n. 2024. Towards Message Brokers for Generative AI: Survey Challenges and Opportunities. 
arxiv:2312.14647 [cs.DC]"},{"key":"e_1_3_2_114_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.02157"},{"key":"e_1_3_2_115_2","first-page":"25278","article-title":"LAION-5B: An open large-scale dataset for training next generation image-text models","volume":"35","author":"Schuhmann Christoph","year":"2022","unstructured":"Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et\u00a0al. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. Advan. Neural Inf. Process. Syst. 35 (2022), 25278\u201325294.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_2_116_2","doi-asserted-by":"publisher","DOI":"10.1145\/3576915.3616588"},{"key":"e_1_3_2_117_2","first-page":"2187","volume-title":"Proceedings of the 32nd USENIX Security Symposium (USENIX Security\u201923)","author":"Shan Shawn","year":"2023","unstructured":"Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y. Zhao. 2023. Glaze: Protecting artists from style mimicry by Text-to-Image models. In Proceedings of the 32nd USENIX Security Symposium (USENIX Security\u201923). 2187\u20132204."},{"key":"e_1_3_2_118_2","article-title":"The woman worked as a babysitter: On biases in language generation","author":"Sheng Emily","year":"2019","unstructured":"Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. 
arXiv preprint arXiv:1909.01326 (2019).","journal-title":"arXiv preprint arXiv:1909.01326"},{"key":"e_1_3_2_119_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV57701.2024.00402"},{"key":"e_1_3_2_120_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00586"},{"key":"e_1_3_2_121_2","doi-asserted-by":"publisher","DOI":"10.1145\/3658644.3670284"},{"key":"e_1_3_2_122_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0267976"},{"key":"e_1_3_2_123_2","first-page":"38274","article-title":"Memorization without overfitting: Analyzing the training dynamics of large language models","volume":"35","author":"Tirumala Kushal","year":"2022","unstructured":"Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. 2022. Memorization without overfitting: Analyzing the training dynamics of large language models. Advan. Neural Inf. Process. Syst. 35 (2022), 38274\u201338290.","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_2_124_2","article-title":"WaterBench: Towards holistic evaluation of watermarks for large language models","author":"Tu Shangqing","year":"2023","unstructured":"Shangqing Tu, Yuliang Sun, Yushi Bai, Jifan Yu, Lei Hou, and Juanzi Li. 2023. WaterBench: Towards holistic evaluation of watermarks for large language models. arXiv preprint arXiv:2311.07138 (2023).","journal-title":"arXiv preprint arXiv:2311.07138"},{"key":"e_1_3_2_125_2","article-title":"Intrinsic dimension estimation for robust detection of AI-generated texts","volume":"36","author":"Tulchinskii Eduard","year":"2024","unstructured":"Eduard Tulchinskii, Kristian Kuznetsov, Laida Kushnareva, Daniil Cherniavskii, Sergey Nikolenko, Evgeny Burnaev, Serguei Barannikov, and Irina Piontkovskaya. 2024. Intrinsic dimension estimation for robust detection of AI-generated texts. Advan. Neural Inf. Process. Syst. 36 (2024), 39257\u201339276.","journal-title":"Advan. Neural Inf. Process. 
Syst."},{"key":"e_1_3_2_126_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-emnlp.172"},{"key":"e_1_3_2_127_2","first-page":"2116","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV\u201923)","author":"Le Thanh Van","year":"2023","unstructured":"Thanh Van Le, Hao Phung, Thuan Hoang Nguyen, Quan Dao, Ngoc N. Tran, and Anh Tran. 2023. Anti-DreamBooth: Protecting users from personalized text-to-image synthesis. In Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV\u201923). 2116\u20132127."},{"key":"e_1_3_2_128_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2020.3002101"},{"key":"e_1_3_2_129_2","article-title":"Ghostbuster: Detecting text ghostwritten by large language models","author":"Verma Vivek","year":"2023","unstructured":"Vivek Verma, Eve Fleisig, Nicholas Tomlin, and Dan Klein. 2023. Ghostbuster: Detecting text ghostwritten by large language models. arXiv preprint arXiv:2305.15047 (2023).","journal-title":"arXiv preprint arXiv:2305.15047"},{"key":"e_1_3_2_130_2","article-title":"Survey on factuality in large language models: Knowledge, retrieval and domain-specificity","author":"Wang Cunxiang","year":"2023","unstructured":"Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, et\u00a0al. 2023. Survey on factuality in large language models: Knowledge, retrieval and domain-specificity. arXiv preprint arXiv:2310.07521 (2023).","journal-title":"arXiv preprint arXiv:2310.07521"},{"key":"e_1_3_2_131_2","first-page":"1063","volume-title":"Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924)","author":"Wang Haichen","year":"2024","unstructured":"Haichen Wang, Shuchao Pang, Zhigang Lu, Yihang Rao, Yongbin Zhou, and Minhui Xue. 2024. dp-promise: Differentially private diffusion probabilistic models for image synthesis. 
In Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924). 1063\u20131080."},{"key":"e_1_3_2_132_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00661"},{"key":"e_1_3_2_133_2","doi-asserted-by":"publisher","DOI":"10.1109\/TBIOM.2024.3449849"},{"key":"e_1_3_2_134_2","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2023.3289392"},{"key":"e_1_3_2_135_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2022.3202047"},{"key":"e_1_3_2_136_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.02051"},{"key":"e_1_3_2_137_2","article-title":"DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models","author":"Wang Zijie J.","year":"2022","unstructured":"Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. 2022. DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models. arXiv preprint arXiv:2210.14896 (2022).","journal-title":"arXiv preprint arXiv:2210.14896"},{"key":"e_1_3_2_138_2","article-title":"A reproducible extraction of training images from diffusion models","author":"Webster Ryan","year":"2023","unstructured":"Ryan Webster. 2023. A reproducible extraction of training images from diffusion models. arXiv preprint arXiv:2305.08694 (2023).","journal-title":"arXiv preprint arXiv:2305.08694"},{"key":"e_1_3_2_139_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.01153"},{"key":"e_1_3_2_140_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00475"},{"key":"e_1_3_2_141_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2022.06.039"},{"key":"e_1_3_2_142_2","unstructured":"Ruijia Wu Yuhang Wang Huafeng Shi Zhipeng Yu Yichao Wu and Ding Liang. 2023. Towards Prompt-robust Face Privacy Protection via Adversarial Decoupling Augmentation Framework. 
arxiv:2305.03980 [cs.CV]"},{"key":"e_1_3_2_143_2","first-page":"5841","volume-title":"Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924)","author":"Wu Yixin","year":"2024","unstructured":"Yixin Wu, Rui Wen, Michael Backes, Pascal Berrang, Mathias Humbert, Yun Shen, and Yang Zhang. 2024. Quantifying privacy risks of prompts in visual prompt learning. In Proceedings of the 33rd USENIX Security Symposium (USENIX Security\u201924). 5841\u20135858."},{"key":"e_1_3_2_144_2","doi-asserted-by":"publisher","DOI":"10.1109\/APSIPAASC58517.2023.10317126"},{"key":"e_1_3_2_145_2","doi-asserted-by":"publisher","DOI":"10.1145\/3581783.3612448"},{"key":"e_1_3_2_146_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2024.3353265"},{"key":"e_1_3_2_147_2","article-title":"Walking in others\u2019 shoes: How perspective-taking guides large language models in reducing toxicity and bias","author":"Xu Rongwu","year":"2024","unstructured":"Rongwu Xu, Zi\u2019an Zhou, Tianwei Zhang, Zehan Qi, Su Yao, Ke Xu, Wei Xu, and Han Qiu. 2024. Walking in others\u2019 shoes: How perspective-taking guides large language models in reducing toxicity and bias. arXiv preprint arXiv:2407.15366 (2024).","journal-title":"arXiv preprint arXiv:2407.15366"},{"key":"e_1_3_2_148_2","first-page":"4534","volume-title":"Advances in Neural Information Processing Systems","author":"Yan Zhiyuan","year":"2023","unstructured":"Zhiyuan Yan, Yong Zhang, Xinhang Yuan, Siwei Lyu, and Baoyuan Wu. 2023. DeepfakeBench: A comprehensive benchmark of deepfake detection. In Advances in Neural Information Processing Systems, Vol. 36. Curran Associates, Inc., 4534\u20134565."},{"key":"e_1_3_2_149_2","article-title":"LeanDojo: Theorem proving with retrieval-augmented language models","volume":"36","author":"Yang Kaiyu","year":"2024","unstructured":"Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan J. Prenger, and Animashree Anandkumar. 2024. 
LeanDojo: Theorem proving with retrieval-augmented language models. Advan. Neural Inf. Process. Syst. 36 (2024).","journal-title":"Advan. Neural Inf. Process. Syst."},{"key":"e_1_3_2_150_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v36i4.20391"},{"key":"e_1_3_2_151_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01522"},{"key":"e_1_3_2_152_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01156"},{"key":"e_1_3_2_153_2","doi-asserted-by":"publisher","DOI":"10.1109\/SP54263.2024.00209"},{"key":"e_1_3_2_154_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2023.03.031"},{"key":"e_1_3_2_155_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01588"},{"key":"e_1_3_2_156_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00765"},{"key":"e_1_3_2_157_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01418"},{"key":"e_1_3_2_158_2","doi-asserted-by":"publisher","DOI":"10.1145\/3503161.3548110"},{"key":"e_1_3_2_159_2","article-title":"Securing deep generative models with universal adversarial signature","author":"Zeng Yu","year":"2023","unstructured":"Yu Zeng, Mo Zhou, Yuan Xue, and Vishal M. Patel. 2023. Securing deep generative models with universal adversarial signature. arXiv preprint arXiv:2305.16310 (2023).","journal-title":"arXiv preprint arXiv:2305.16310"},{"key":"e_1_3_2_160_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10462-021-10123-y"},{"key":"e_1_3_2_161_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW63382.2024.00182"},{"key":"e_1_3_2_162_2","first-page":"7113","volume-title":"Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)","author":"Zhang Hanning","year":"2024","unstructured":"Hanning Zhang, Shizhe Diao, Yong Lin, Yi Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, and Tong Zhang. 2024. 
R-Tuning: Instructing large language models to say \u201cI Don\u2019t Know.\u2019\u2019 In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 7113\u20137139."},{"key":"e_1_3_2_163_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.findings-naacl.29"},{"key":"e_1_3_2_164_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2024.127593"},{"key":"e_1_3_2_165_2","article-title":"A recipe for watermarking diffusion models","author":"Zhao Yunqing","year":"2023","unstructured":"Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Ngai-Man Cheung, and Min Lin. 2023. A recipe for watermarking diffusion models. arXiv preprint arXiv:2303.10137 (2023).","journal-title":"arXiv preprint arXiv:2303.10137"},{"key":"e_1_3_2_166_2","article-title":"Rich and poor texture contrast: A simple yet effective approach for AI-generated image detection","author":"Zhong Nan","year":"2023","unstructured":"Nan Zhong, Yiran Xu, Zhenxing Qian, and Xinpeng Zhang. 2023. Rich and poor texture contrast: A simple yet effective approach for AI-generated image detection. arXiv preprint arXiv:2311.12397 (2023).","journal-title":"arXiv preprint arXiv:2311.12397"},{"key":"e_1_3_2_167_2","article-title":"GenImage: A million-scale benchmark for detecting AI-generated image","volume":"36","author":"Zhu Mingjian","year":"2024","unstructured":"Mingjian Zhu, Hanting Chen, Qiangyu Yan, Xudong Huang, Guanyu Lin, Wei Li, Zhijun Tu, Hailin Hu, Jie Hu, and Yunhe Wang. 2024. GenImage: A million-scale benchmark for detecting AI-generated image. Advan. Neural Inf. Process. Syst. 36 (2024).","journal-title":"Advan. Neural Inf. Process. 
Syst."},{"key":"e_1_3_2_168_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2023.3262156"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3703626","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3703626","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:09:42Z","timestamp":1750295382000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3703626"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,10]]},"references-count":167,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2025,4,30]]}},"alternative-id":["10.1145\/3703626"],"URL":"https:\/\/doi.org\/10.1145\/3703626","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,12,10]]},"assertion":[{"value":"2024-04-26","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-11-03","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-12-10","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}