LLM@AIED 2023: Tokyo, Japan
- Steven Moore, John C. Stamper, Richard Jiarui Tong, Chen Cao, Zitao Liu, Xiangen Hu, Yu Lu, Joleen Liang, Hassan Khosravi, Paul Denny, Anjali Singh, Christopher Brooks:
Proceedings of the Workshop on Empowering Education with LLMs - the Next-Gen Interface and Content Generation 2023, co-located with the 24th International Conference on Artificial Intelligence in Education (AIED 2023), Tokyo, Japan, July 7, 2023. CEUR Workshop Proceedings 3487, CEUR-WS.org 2023 - Daniel Leiker, Sara Finnigan, Ashley Ricker Gyllen, Mutlu Cukurova:
Prototyping the use of Large Language Models (LLMs) for Adult Learning Content Creation at Scale. 3-7 - Shouvik Ahmed Antu, Haiyan Chen, Cindy K. Richards:
Using LLM (Large Language Model) to Improve Efficiency in Literature Review for Undergraduate Research. 8-16 - Gautam Yadav, Ying-Jui Tseng, Xiaolin Ni:
Contextualizing Problems to Student Interests at Scale in Intelligent Tutoring System Using Large Language Models. 17-25 - Shashank Sonkar, Richard G. Baraniuk:
Deduction under Perturbed Evidence: Probing Student Simulation (Knowledge Tracing) Capabilities of Large Language Models. 26-33 - Katie Bainbridge, Candace A. Walkington, Armon Ibrahim, Iris Zhong, Debshila Basu Mallick, Julianna Washington, Richard G. Baraniuk:
A Case Study using Large Language Models to Generate Metadata for Math Questions. 34-42 - Md Rayhan Kabir, Fuhua Oscar Lin:
An LLM-Powered Adaptive Practicing System. 43-52 - Muntasir Hoq, Yang Shi, Juho Leinonen, Damilola Babalola, Collin F. Lynch, Bita Akram:
Detecting ChatGPT-Generated Code in a CS1 Course. 53-63 - Qianou Christina Ma, Sherry Tongshuang Wu, Ken Koedinger:
Is AI the better programming partner? Human-Human Pair Programming vs. Human-AI pAIr Programming. 64-77 - Benjamin D. Nye, Dillon Mee, Mark G. Core:
Generative Large Language Models for Dialog-Based Tutoring: An Early Consideration of Opportunities and Concerns. 78-88 - Matyás Bohácek:
The Unseen A+ Student: Evaluating the Performance and Detectability of Large Language Models in High School Assignments. 89-100 - Bor-Chen Kuo, Frederic T. Y. Chang, Zong-En Bai:
Leveraging LLMs for Adaptive Testing and Learning in Taiwan Adaptive Learning Platform (TALP). 101-110 - Andrew M. Olney:
Generating Multiple Choice Questions from a Textbook: LLMs Match Human Performance on Most Metrics. 111-128 - Alex Goslen, Yeo Jin Kim, Jonathan P. Rowe, James C. Lester:
Language Modeling for Plan Generation in Game-Based Learning Environments. 129-138 - Pragnya Sridhar, Aidan Doyle, Arav Agarwal, Christopher Bogart, Jaromír Savelka, Majd Sakr:
Harnessing LLMs in Curricular Design: Using GPT-4 to Support Authoring of Learning Objectives. 139-150 - Sai Satish Gattupalli, Will Lee, Danielle Allessio, Danielle Crabtree, Ivon Arroyo, Beverly P. Woolf:
Exploring Pre-Service Teachers' Perceptions of Large Language Models-Generated Hints in Online Mathematics Learning. 151-162 - Kole Norberg, Husni Almoubayyed, Stephen E. Fancsali, Logan De Ley, Kyle Weldon, April Murphy, Steven Ritter:
Rewriting Math Word Problems with Large Language Models. 163-172 - Andrew Caines, Luca Benedetto, Shiva Taslimipoor, Christopher Davis, Yuan Gao, Øistein E. Andersen, Zheng Yuan, Mark Elliott, Russell Moore, Christopher Bryant, Marek Rei, Helen Yannakoudakis, Andrew Mullooly, Diane Nicholls, Paula Buttery:
On the Application of Large Language Models for Language Teaching and Assessment Technology. 173-197