{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,27]],"date-time":"2026-03-27T16:02:40Z","timestamp":1774627360050,"version":"3.50.1"},"reference-count":54,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2024,6,3]],"date-time":"2024-06-03T00:00:00Z","timestamp":1717372800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62302021, 62177003, and 62192731"],"award-info":[{"award-number":["62302021, 62177003, and 62192731"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Self-determined Research Funds of State Key Laboratory of Complex & Critical Software Environment","award":["SKLSDE-2023ZX-15"],"award-info":[{"award-number":["SKLSDE-2023ZX-15"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2024,6,30]]},"abstract":"<jats:p>Software developers frequently use code completion tools to accelerate software development by suggesting the following code elements. Researchers usually employ AutoRegressive (AR) decoders to complete code sequences in a left-to-right, token-by-token fashion. To improve the accuracy and efficiency of code completion, we argue that tokens within a code statement have the potential to be predicted concurrently. In this article, we first conduct an empirical study to analyze the dependency among the target tokens in line-level code completion. The results suggest that it is potentially practical to generate all statement tokens in parallel. To this end, we introduce SANAR, a simple and effective syntax-aware non-autoregressive model for line-level code completion. To further improve the quality of the generated code, we propose an adaptive and syntax-aware sampling strategy to boost the model\u2019s performance. The experimental results obtained from two widely used datasets indicate that our model outperforms state-of-the-art code completion approaches of similar model size by a considerable margin, and is faster than these models with up to 9\u00d7 speed-up. 
Moreover, the extensive results additionally demonstrate that the enhancements achieved by SANAR become even more pronounced with larger model sizes, highlighting their significance.<\/jats:p>","DOI":"10.1145\/3649594","type":"journal-article","created":{"date-parts":[[2024,2,26]],"date-time":"2024-02-26T12:35:42Z","timestamp":1708950942000},"page":"1-34","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":19,"title":["Non-Autoregressive Line-Level Code Completion"],"prefix":"10.1145","volume":"33","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3905-8133","authenticated-orcid":false,"given":"Fang","family":"Liu","sequence":"first","affiliation":[{"name":"School of Computer Science and Engineering, State Key Laboratory of Complex &amp; Critical Software Environment, Beihang University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0260-6404","authenticated-orcid":false,"given":"Zhiyi","family":"Fu","sequence":"additional","affiliation":[{"name":"School of Computer Science, Peking University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5828-0186","authenticated-orcid":false,"given":"Ge","family":"Li","sequence":"additional","affiliation":[{"name":"School of Computer Science, Peking University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1087-226X","authenticated-orcid":false,"given":"Zhi","family":"Jin","sequence":"additional","affiliation":[{"name":"School of Computer Science, Peking University, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3267-6801","authenticated-orcid":false,"given":"Hui","family":"Liu","sequence":"additional","affiliation":[{"name":"Beijing Institute of Technology, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0346-9555","authenticated-orcid":false,"given":"Yiyang","family":"Hao","sequence":"additional","affiliation":[{"name":"Silicon Heart Tech Co., Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2258-5893","authenticated-orcid":false,"given":"Li","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, State Key Laboratory of Complex &amp; Critical Software Environment, Beihang University, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2024,6,3]]},"reference":[{"key":"e_1_3_2_2_2","unstructured":"aiXcoder. 2018. aiXcoder. https:\/\/www.aixcoder.com\/"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSR.2013.6624029"},{"key":"e_1_3_2_4_2","unstructured":"Mark Chen Jerry Tworek Heewoo Jun Qiming Yuan Henrique Pond\u00e9 de Oliveira Pinto Jared Kaplan Harrison Edwards Yuri Burda Nicholas Joseph Greg Brockman Alex Ray Raul Puri Gretchen Krueger Michael Petrov Heidy Khlaaf Girish Sastry Pamela Mishkin Brooke Chan Scott Gray Nick Ryder Mikhail Pavlov Alethea Power Lukasz Kaiser Mohammad Bavarian Clemens Winter Philippe Tillet Felipe Petroski Such Dave Cummings Matthias Plappert Fotios Chantzis Elizabeth Barnes Ariel Herbert-Voss William Hebgen Guss Alex Nichol Alex Paino Nikolas Tezak Jie Tang Igor Babuschkin Suchir Balaji Shantanu Jain William Saunders Christopher Hesse Andrew N. Carr Jan Leike Joshua Achiam Vedant Misra Evan Morikawa Alec Radford Matthew Knight Miles Brundage Mira Murati Katie Mayer Peter Welinder Bob McGrew Dario Amodei Sam McCandlish Ilya Sutskever and Wojciech Zaremba. 2021. Evaluating large language models trained on code. 
arXiv preprint arXiv:2107.03374 (2021)."},{"issue":"12","key":"e_1_3_2_5_2","first-page":"4818","article-title":"An empirical study on the usage of transformer models for code completion","volume":"48","author":"Ciniselli Matteo","year":"2021","unstructured":"Matteo Ciniselli, Nathan Cooper, Luca Pascarella, Antonio Mastropaolo, Emad Aghajani, Denys Poshyvanyk, Massimiliano Di Penta, and Gabriele Bavota. 2021. An empirical study on the usage of transformer models for code completion. IEEE Transactions on Software Engineering 48, 12 (2021), 4818\u20134837.","journal-title":"IEEE Transactions on Software Engineering"},{"key":"e_1_3_2_6_2","article-title":"Source code recommender systems: The practitioners\u2019 perspective","author":"Ciniselli Matteo","year":"2023","unstructured":"Matteo Ciniselli, Luca Pascarella, Emad Aghajani, Simone Scalabrino, Rocco Oliveto, and Gabriele Bavota. 2023. Source code recommender systems: The practitioners\u2019 perspective. arXiv preprint arXiv:2302.04098 (2023).","journal-title":"arXiv preprint arXiv:2302.04098"},{"key":"e_1_3_2_7_2","unstructured":"Copilot. 2021. Copilot. https:\/\/copilot.github.com\/"},{"key":"e_1_3_2_8_2","first-page":"1475","volume-title":"International Conference on Machine Learning","author":"Cvitkovic Milan","year":"2019","unstructured":"Milan Cvitkovic, Badal Singh, and Animashree Anandkumar. 2019. Open vocabulary learning on source code with a graph-structured cache. In International Conference on Machine Learning. PMLR, 1475\u20131485."},{"key":"e_1_3_2_9_2","article-title":"Knowledge neurons in pretrained transformers","author":"Dai Damai","year":"2021","unstructured":"Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2021. Knowledge neurons in pretrained transformers. arXiv preprint arXiv:2104.08696 (2021).","journal-title":"arXiv preprint arXiv:2104.08696"},{"key":"e_1_3_2_10_2","article-title":"CoCoMIC: Code completion by jointly modeling in-file and cross-file context","author":"Ding Yangruibo","year":"2022","unstructured":"Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, and Bing Xiang. 2022. CoCoMIC: Code completion by jointly modeling in-file and cross-file context. arXiv preprint arXiv:2212.10007 (2022).","journal-title":"arXiv preprint arXiv:2212.10007"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.139"},{"key":"e_1_3_2_12_2","article-title":"InCoder: A generative model for code infilling and synthesis","volume":"2204","author":"Fried Daniel","year":"2022","unstructured":"Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida I. Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. InCoder: A generative model for code infilling and synthesis. ArXiv abs\/2204.05999 (2022).","journal-title":"ArXiv"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1633"},{"key":"e_1_3_2_14_2","article-title":"MultiCoder: Multi-programming-lingual pre-training for low-resource code completion","author":"Gong Zi","year":"2022","unstructured":"Zi Gong, Yinpeng Guo, Pingyi Zhou, Cuiyun Gao, Yasheng Wang, and Zenglin Xu. 2022. MultiCoder: Multi-programming-lingual pre-training for low-resource code completion. 
arXiv preprint arXiv:2212.09666 (2022).","journal-title":"arXiv preprint arXiv:2212.09666"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/1143844.1143891"},{"key":"e_1_3_2_16_2","volume-title":"6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30\u2013May 3, 2018, Conference Track Proceedings","author":"Gu Jiatao","year":"2018","unstructured":"Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30\u2013May 3, 2018, Conference Track Proceedings. OpenReview.net."},{"key":"e_1_3_2_17_2","unstructured":"Jiatao Gu Changhan Wang and Junbo Zhao. 2019. Levenshtein transformer. In NeurIPS. 11179\u201311189."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","unstructured":"Daya Guo Shuai Lu Nan Duan Yanlin Wang Ming Zhou and Jian Yin. 2022. UniXcoder: Unified cross-modal pre-training for code representation. (2022) 7212\u20137225. 10.18653\/v1\/2022.acl-long.499","DOI":"10.18653\/v1\/2022.acl-long.499"},{"key":"e_1_3_2_19_2","first-page":"7955","volume-title":"Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3\u20138, 2018, Montr\u00e9al, Canada","author":"He Tianyu","year":"2018","unstructured":"Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3\u20138, 2018, Montr\u00e9al, Canada, Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicol\u00f2 Cesa-Bianchi, and Roman Garnett (Eds.). 7955\u20137965."},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3106237.3106290"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","unstructured":"Abram Hindle Earl T. Barr Zhendong Su Mark Gabel and Premkumar T. Devanbu. 2012. On the naturalness of software. (2012) 837\u2013847. 10.1109\/ICSE.2012.6227135","DOI":"10.1109\/ICSE.2012.6227135"},{"key":"e_1_3_2_22_2","article-title":"CodeFill: Multi-token code completion by jointly learning from structure and naming sequences","author":"Izadi Maliheh","year":"2022","unstructured":"Maliheh Izadi, Roberta Gismondi, and Georgios Gousios. 2022. CodeFill: Multi-token code completion by jointly learning from structure and naming sequences. arXiv preprint arXiv:2202.06689 (2022).","journal-title":"arXiv preprint arXiv:2202.06689"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380342"},{"key":"e_1_3_2_24_2","series-title":"Proceedings of Machine Learning Research","first-page":"5144","volume-title":"Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13\u201318 July 2020, Virtual Event","volume":"119","author":"Kasai Jungo","year":"2020","unstructured":"Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020. Non-autoregressive machine translation with disentangled context transformer. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13\u201318 July 2020, Virtual Event(Proceedings of Machine Learning Research, Vol. 119). 
PMLR, 5144\u20135155."},{"key":"e_1_3_2_25_2","volume-title":"9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3\u20137, 2021","author":"Kasai Jungo","year":"2021","unstructured":"Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah A. Smith. 2021. Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3\u20137, 2021. OpenReview.net."},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00026"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2018\/578"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1145\/3387904.3389261"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416591"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3568364.3568373"},{"key":"e_1_3_2_31_2","unstructured":"Shuai Lu Daya Guo Shuo Ren Junjie Huang Alexey Svyatkovskiy Ambrosio Blanco Colin B. Clement Dawn Drain Daxin Jiang Duyu Tang Ge Li Lidong Zhou Linjun Shou Long Zhou Michele Tufano Ming Gong Ming Zhou Nan Duan Neel Sundaresan Shao Kun Deng Shengyu Fu and Shujie Liu. 2021. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664 (2021)."},{"key":"e_1_3_2_32_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1437"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.119477"},{"key":"e_1_3_2_34_2","article-title":"CodeGen: An open large language model for code with multi-turn program synthesis","author":"Nijkamp Erik","year":"2022","unstructured":"Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2022. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474 (2022).","journal-title":"arXiv preprint arXiv:2203.13474"},{"key":"e_1_3_2_35_2","unstructured":"OpenAI. 2022. ChatGPT: Optimizing Language Models for Dialogue. https:\/\/openai.com\/blog\/chatgpt"},{"key":"e_1_3_2_36_2","first-page":"311","volume-title":"Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics","author":"Papineni Kishore","year":"2002","unstructured":"Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 311\u2013318."},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.155"},{"key":"e_1_3_2_38_2","article-title":"Sequence level training with recurrent neural networks","author":"Ranzato Marc\u2019Aurelio","year":"2015","unstructured":"Marc\u2019Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. 
arXiv preprint arXiv:1511.06732 (2015).","journal-title":"arXiv preprint arXiv:1511.06732"},{"key":"e_1_3_2_39_2","doi-asserted-by":"publisher","DOI":"10.1145\/2983990.2984041"},{"key":"e_1_3_2_40_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.acl-main.15"},{"key":"e_1_3_2_41_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10515-010-0064-x"},{"key":"e_1_3_2_42_2","first-page":"3011","volume-title":"Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8\u201314, 2019, Vancouver, BC, Canada","author":"Sun Zhiqing","year":"2019","unstructured":"Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhi-Hong Deng. 2019. Fast structured decoding for sequence models. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8\u201314, 2019, Vancouver, BC, Canada, Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d\u2019Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett (Eds.). 3011\u20133020."},{"key":"e_1_3_2_43_2","article-title":"Sequence to sequence learning with neural networks","volume":"27","author":"Sutskever Ilya","year":"2014","unstructured":"Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems 27 (2014).","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3417058"},{"key":"e_1_3_2_45_2","doi-asserted-by":"crossref","first-page":"329","DOI":"10.1109\/MSR52588.2021.00045","volume-title":"2021 IEEE\/ACM 18th International Conference on Mining Software Repositories (MSR\u201921)","author":"Svyatkovskiy Alexey","year":"2021","unstructured":"Alexey Svyatkovskiy, Sebastian Lee, Anna Hadjitofi, Maik Riechert, Juliana Vicente Franco, and Miltiadis Allamanis. 2021. Fast and memory-efficient neural code completion. In 2021 IEEE\/ACM 18th International Conference on Mining Software Repositories (MSR\u201921). IEEE, 329\u2013340."},{"key":"e_1_3_2_46_2","unstructured":"Tabnine. 2018. Tabnine. https:\/\/www.tabnine.com\/"},{"key":"e_1_3_2_47_2","doi-asserted-by":"publisher","DOI":"10.1145\/2635868.2635875"},{"key":"e_1_3_2_48_2","first-page":"5998","volume-title":"Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4\u20139, 2017, Long Beach, CA, USA","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4\u20139, 2017, Long Beach, CA, USA, Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (Eds.). 5998\u20136008."},{"key":"e_1_3_2_49_2","article-title":"Practitioners\u2019 expectations on code completion","author":"Wang Chaozheng","year":"2023","unstructured":"Chaozheng Wang, Junhao Hu, Cuiyun Gao, Yu Jin, Tao Xie, Hailiang Huang, Zhenyu Lei, and Yuetang Deng. 2023. Practitioners\u2019 expectations on code completion. 
arXiv preprint arXiv:2301.03846 (2023).","journal-title":"arXiv preprint arXiv:2301.03846"},{"key":"e_1_3_2_50_2","article-title":"Towards full-line code completion with neural language models","author":"Wang Wenhan","year":"2020","unstructured":"Wenhan Wang, Sijie Shen, Ge Li, and Zhi Jin. 2020. Towards full-line code completion with neural language models. arXiv preprint arXiv:2009.08603 (2020).","journal-title":"arXiv preprint arXiv:2009.08603"},{"key":"e_1_3_2_51_2","article-title":"Tree transformer: Integrating tree structures into self-attention","author":"Wang Yau-Shian","year":"2019","unstructured":"Yau-Shian Wang, Hung-Yi Lee, and Yun-Nung Chen. 2019. Tree transformer: Integrating tree structures into self-attention. arXiv preprint arXiv:1909.06639 (2019).","journal-title":"arXiv preprint arXiv:1909.06639"},{"issue":"1","key":"e_1_3_2_52_2","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1007\/s10515-023-00376-y","article-title":"AdaComplete: Improve DL-based code completion method\u2019s domain adaptability","volume":"30","author":"Wang Zejun","year":"2023","unstructured":"Zejun Wang, Fang Liu, Yiyang Hao, and Zhi Jin. 2023. AdaComplete: Improve DL-based code completion method\u2019s domain adaptability. Automated Software Engineering 30, 1 (2023), 11.","journal-title":"Automated Software Engineering"},{"key":"e_1_3_2_53_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/p19-1125"},{"key":"e_1_3_2_54_2","article-title":"RepoCoder: Repository-level code completion through iterative retrieval and generation","author":"Zhang Fengji","year":"2023","unstructured":"Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. 2023. RepoCoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570 (2023).","journal-title":"arXiv preprint arXiv:2303.12570"},{"key":"e_1_3_2_55_2","article-title":"What does transformer learn about source code?","author":"Zhang Kechi","year":"2022","unstructured":"Kechi Zhang, Ge Li, and Zhi Jin. 2022. What does transformer learn about source code? 
arXiv preprint arXiv:2207.08466 (2022).","journal-title":"arXiv preprint arXiv:2207.08466"}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3649594","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3649594","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T23:56:54Z","timestamp":1750291014000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3649594"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,3]]},"references-count":54,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2024,6,30]]}},"alternative-id":["10.1145\/3649594"],"URL":"https:\/\/doi.org\/10.1145\/3649594","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,6,3]]},"assertion":[{"value":"2023-03-27","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-02-08","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-06-03","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
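The record above is a Crossref work response ("message-type":"work"). As a minimal sketch of how such a record can be retrieved and read programmatically (assuming the public Crossref REST API endpoint https://api.crossref.org/works/{DOI} and the third-party requests package, neither of which is part of the record itself):

# Minimal sketch: fetch and read a Crossref work record like the one above.
# Assumes the public Crossref REST API and the `requests` package; the field
# names used below are taken directly from the record shown here.
import requests

DOI = "10.1145/3649594"  # DOI of the work record above

resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]  # the payload mirrors the "message" object above

print(work["title"][0])            # "Non-Autoregressive Line-Level Code Completion"
print(work["container-title"][0])  # journal name
print(f'vol. {work["volume"]}, no. {work["issue"]}, pp. {work["page"]}')
print("cited by:", work.get("is-referenced-by-count"))
print("references:", work.get("references-count"))
for author in work.get("author", []):  # given/family names as in the "author" array
    print(author.get("given", ""), author.get("family", ""))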