PHAS Colloquia

Trying out Large Language Models in Teaching Physics: The Amazingly Powerful, the Understandably Mediocre, and the Hilariously Bad

by Prof. ZhongZhou Chen

Tuesday, 5 November 2024 (America/Chicago)
Description
The rapid development of generative AI, in particular large language models (LLMs), promises huge potential for transforming all aspects of education. In this talk I will share my experience trying out LLMs on various tasks related to teaching introductory-level physics. I will start by introducing the basic concepts of LLMs and techniques such as prompt engineering and few-shot learning, followed by three different applications of LLMs. First, LLMs can efficiently and reliably create large numbers of isomorphic versions of physics problems from simple prompts, enabling new forms of flexible assessment. Second, LLMs can write constructive feedback on students' responses to conceptual questions, after learning from physics education research and several examples. Third, LLMs can achieve human-level accuracy in grading students' problem-solving processes according to binary rubric items, but only when the rubric items are accompanied by additional explanations. In addition, LLMs can be used to identify potentially problematic grading and to write personalized feedback on each response. Last but not least, I will share my latest attempts at "tricking" LLMs into committing logical fallacies in introductory-level physics, and discuss the possibility of developing "ChatGPT-proof" assessment questions.
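To make the few-shot prompting idea concrete, here is a minimal sketch of how a prompt for generating isomorphic problem variants might be assembled: an instruction, one worked example pair, and the new problem to transform. The function name, instruction wording, and example problems are all illustrative assumptions, not the speaker's actual prompts or results.

```python
def build_isomorphic_prompt(new_problem: str) -> str:
    """Assemble a few-shot prompt asking an LLM for an isomorphic problem.

    An isomorphic version changes the numbers and surface story of a
    problem while keeping the underlying solution method identical.
    """
    # One hand-written example pair serves as the "few-shot" demonstration.
    examples = [
        (
            "A 2 kg block slides down a frictionless 30-degree incline. "
            "Find its acceleration.",
            "A 5 kg crate slides down a frictionless 45-degree ramp. "
            "Find its acceleration.",
        ),
    ]
    lines = [
        "Rewrite each physics problem as an isomorphic version: change the "
        "numbers and surface details, but keep the solution method identical.",
        "",
    ]
    for original, variant in examples:
        lines.append(f"Problem: {original}")
        lines.append(f"Isomorphic version: {variant}")
        lines.append("")
    # The model is expected to continue the pattern for the new problem.
    lines.append(f"Problem: {new_problem}")
    lines.append("Isomorphic version:")
    return "\n".join(lines)

prompt = build_isomorphic_prompt(
    "A ball is thrown straight up at 10 m/s. How high does it rise?"
)
print(prompt)
```

The assembled string would then be sent to a chat-completion endpoint; the same pattern extends to the feedback and rubric-grading applications by swapping in graded example responses instead of problem pairs.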