
A Bibliometric Review of Large Language Models Research from 2017 to 2023

Published: 21 October 2024

Abstract

Large language models (LLMs), such as OpenAI's Generative Pre-trained Transformer (GPT), are a class of language models that have demonstrated outstanding performance across a range of natural language processing (NLP) tasks. LLMs have become a highly sought-after research topic because of their ability to generate human-like language and their potential to revolutionize science and technology. In this study, we conduct bibliometric and discourse analyses of the scholarly literature on LLMs. Synthesizing over 5,000 publications, this article serves as a roadmap for researchers, practitioners, and policymakers navigating the current landscape of LLMs research. We present research trends from 2017 to early 2023, identifying patterns in research paradigms and collaborations. We begin by analyzing the core algorithmic developments and NLP tasks that are fundamental to LLMs research. We then investigate the applications of LLMs in various fields and domains, including medicine, engineering, social science, and humanities. Our review also reveals the dynamic, fast-paced evolution of LLMs research. Overall, this article offers valuable insights into the current state, impact, and potential of LLMs research and its applications.
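To make the bibliometric workflow concrete, here is a minimal sketch of two building blocks the abstract alludes to: tracing publication volume over time (research trends) and counting co-authorship ties (collaboration patterns). The records, field names, and author names below are hypothetical stand-ins for Web of Science-style metadata; this illustrates the general technique, not the article's actual pipeline.

```python
# Minimal illustrative sketch (hypothetical data, not the article's code):
# count publications per year and tally co-authorship edges, two basic
# building blocks of a bibliometric analysis.
from collections import Counter
from itertools import combinations

# Stand-ins for Web of Science-style bibliographic records.
records = [
    {"year": 2019, "authors": ["Devlin", "Chang", "Lee", "Toutanova"]},
    {"year": 2020, "authors": ["Brown", "Mann", "Ryder"]},
    {"year": 2020, "authors": ["Brown", "Kaplan"]},
]

# Research trend: number of publications per year.
per_year = Counter(rec["year"] for rec in records)

# Collaboration pattern: every pair of co-authors on a paper
# strengthens one edge in the co-authorship network.
coauthor_edges = Counter()
for rec in records:
    for a, b in combinations(sorted(set(rec["authors"])), 2):
        coauthor_edges[(a, b)] += 1

print(sorted(per_year.items()))       # e.g., [(2019, 1), (2020, 2)]
print(coauthor_edges.most_common(3))  # strongest collaboration ties
```

At scale, edge counts like these are typically exported to network-analysis tools to map collaboration clusters across the corpus.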

Published In

ACM Transactions on Intelligent Systems and Technology, Volume 15, Issue 5
October 2024, 719 pages
EISSN: 2157-6912
DOI: 10.1145/3613688
Editor: Huan Liu

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 21 October 2024
Online AM: 13 May 2024
Accepted: 15 April 2024
Revised: 15 April 2024
Received: 10 August 2023
Published in TIST Volume 15, Issue 5


Author Tags

  1. Bibliometric analysis
  2. large language models
  3. discourse analysis
  4. scholarly collaboration networks
  5. topic modeling

Qualifiers

  • Research-article


Cited By

  • (2025) Human vs. Machine: A Comparative Study on the Detection of AI-Generated Content. ACM Transactions on Asian and Low-Resource Language Information Processing 24, 2, 1–26. DOI: 10.1145/3708889. Online publication date: 10-Feb-2025.
  • (2025) KNowNEt: Guided Health Information Seeking from LLMs via Knowledge Graph Integration. IEEE Transactions on Visualization and Computer Graphics 31, 1, 547–557. DOI: 10.1109/TVCG.2024.3456364. Online publication date: Jan-2025.
  • (2025) A survey of emerging applications of large language models for problems in mechanics, product design, and manufacturing. Advanced Engineering Informatics 64, 103066. DOI: 10.1016/j.aei.2024.103066. Online publication date: Mar-2025.
  • (2025) Integration of biomedical concepts for enhanced medical literature retrieval. International Journal of Data Science and Analytics. DOI: 10.1007/s41060-025-00724-z. Online publication date: 3-Feb-2025.
  • (2025) Balancing Efficiency and Quality in LLM-Based Entity Resolution on Structured Data. Social Networks Analysis and Mining, 278–293. DOI: 10.1007/978-3-031-78548-1_21. Online publication date: 24-Jan-2025.
  • (2025) Introduction to the Generative Artificial Intelligence Systems. Generative Artificial Intelligence, 59–82. DOI: 10.1007/978-3-031-65514-2_4. Online publication date: 8-Jan-2025.
  • (2024) Fake News Detection Revisited: An Extensive Review of Theoretical Frameworks, Dataset Assessments, Model Constraints, and Forward-Looking Research Agendas. Technologies 12, 11, 222. DOI: 10.3390/technologies12110222. Online publication date: 6-Nov-2024.
  • (2024) Blockchain Consensus Mechanisms: A Bibliometric Analysis (2014–2024) Using VOSviewer and R Bibliometrix. Information 15, 10, 644. DOI: 10.3390/info15100644. Online publication date: 16-Oct-2024.
  • (2024) Hardware Design and Verification with Large Language Models: A Scoping Review, Challenges, and Open Issues. Electronics 14, 1, 120. DOI: 10.3390/electronics14010120. Online publication date: 30-Dec-2024.
  • (2024) EXACT-Net: Framework for EHR-Guided Lung Tumor Auto-Segmentation for Non-Small Cell Lung Cancer Radiotherapy. Cancers 16, 23, 4097. DOI: 10.3390/cancers16234097. Online publication date: 6-Dec-2024.
