Statistics Colloquium: Kylie Anglin, Assistant Professor, University of Connecticut
Title: Improving the Alignment between Human and Large Language Model Classifications: An Empirical Assessment of Prompt Engineering for Construct Identification
Presented by Kylie Anglin, Assistant Professor, Research Methods, Measurement, and Evaluation, Department of Educational Psychology, University of Connecticut
Date: Friday, November 7, 2025, 1:00 PM, AUST 344
Link: WebEx Link
Coffee will be available at 12:30 PM in the Noether Lounge (AUST 326)
Abstract: Due to their vast pre-training data, large language models (LLMs) demonstrate strong text classification performance. However, LLM output (here, the category assigned to a text) depends heavily on the wording of the prompt. While the literature on prompt engineering is expanding, few studies focus on classification tasks, and even fewer address domains such as psychology, where constructs have precise, theory-driven definitions that may not be well represented in pre-training data. This presentation introduces an empirical framework for optimizing LLM performance in classifying psychological constructs using prompt engineering. Five strategies (codebook-guided empirical prompt selection, automatic prompt engineering, persona prompting, chain-of-thought reasoning, and explanatory prompting) are systematically evaluated under zero- and few-shot settings across three constructs, four experimental samples, and two LLMs (GPT-4 and LLaMA 3.3). Analyses encompass thousands of performance evaluations, yielding insights into robust patterns of performance and informing evidence-based recommendations for applied text classification with LLMs.
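To give a concrete sense of the prompting strategies named in the abstract, the Python sketch below shows how zero-shot prompt variants for construct classification might be assembled. This is purely illustrative and not the speaker's code: the construct definition, the build_prompt function, and the call_llm stand-in are all hypothetical placeholders.

```python
# Illustrative sketch only: zero-shot prompt variants for classifying a text
# against a psychological construct. Names and definitions are hypothetical.

CODEBOOK_DEFINITION = (
    "Growth mindset: the belief that abilities can be developed "
    "through effort and effective strategies."
)  # hypothetical codebook entry, for illustration only


def build_prompt(text: str, strategy: str = "baseline") -> str:
    """Return a zero-shot classification prompt under the chosen strategy."""
    task = (
        f"Construct definition: {CODEBOOK_DEFINITION}\n"
        f'Text: "{text}"\n'
        "Does the text express this construct? Answer 'yes' or 'no'."
    )
    if strategy == "persona":
        # Persona prompting: frame the model as a domain expert.
        return "You are an educational psychologist coding survey responses.\n" + task
    if strategy == "chain_of_thought":
        # Chain-of-thought: ask the model to reason before giving the label.
        return task + "\nThink step by step, then give the final answer."
    if strategy == "explanatory":
        # Explanatory prompting: require a brief justification with the label.
        return task + "\nBriefly explain your answer, then state 'yes' or 'no'."
    return task  # baseline codebook-guided prompt


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client is actually used."""
    raise NotImplementedError


if __name__ == "__main__":
    example = "I got better at math because I kept practicing new strategies."
    for strategy in ("baseline", "persona", "chain_of_thought", "explanatory"):
        print(f"--- {strategy} ---")
        print(build_prompt(example, strategy))
```

In practice, each prompt variant would be sent to a model (e.g., GPT-4 or LLaMA 3.3) and the returned labels compared against human codes to estimate classification performance, which is the kind of systematic comparison the talk describes.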
Bio: KYLIE ANGLIN is an assistant professor in research methods, measurement, and evaluation at the University of Connecticut. Her research improves the quality and depth of educational program evaluations, most often through the rigorous application of artificial intelligence (AI).