

Workshop: Large Language Models (LLMs) for Social Science Research
Large language models (LLMs) are fundamentally transforming social science research by enabling high-speed processing of both qualitative and quantitative data. These models serve as analytical assistants, handling labor-intensive tasks such as transcription, survey evaluation, and social media analysis. By acting as "auto-raters" or "LLM-as-a-judge" evaluators, they allow researchers to scale human judgment across datasets, such as interview transcripts or political discourse, that were previously too large for manual coding, significantly reducing the time and cost of traditional data analysis.
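To make the "auto-rater" idea concrete, here is a minimal sketch of using an LLM to assign a codebook label to a text excerpt. It assumes access to the OpenAI Python client (v1+) with an API key set; the codebook, prompt, and model name are illustrative placeholders rather than the workshop's actual materials.

```python
# Minimal sketch of an LLM "auto-rater" for qualitative text coding.
# Assumptions: the `openai` Python package (v1+) is installed, OPENAI_API_KEY
# is set, and the codebook, prompt, and model name are placeholders.
from openai import OpenAI

client = OpenAI()

CODEBOOK = ["supportive", "critical", "neutral"]  # hypothetical coding scheme

PROMPT_TEMPLATE = (
    "You are coding interview excerpts for a social science study.\n"
    "Assign exactly one label from: {labels}.\n"
    "Respond with the label only.\n\n"
    "Excerpt: {text}"
)


def call_llm(prompt: str) -> str:
    """Send a single-turn prompt to a chat model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep outputs as stable as possible for coding tasks
    )
    return response.choices[0].message.content or ""


def code_excerpt(text: str) -> str:
    """Ask the model for one codebook label; flag anything off-codebook."""
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(CODEBOOK), text=text)
    label = call_llm(prompt).strip().lower()
    return label if label in CODEBOOK else "uncodable"


if __name__ == "__main__":
    excerpt = "I think the new policy finally gives renters a fair deal."
    print(code_excerpt(excerpt))
```

Constraining the model to a fixed label set and flagging off-codebook replies keeps the output machine-checkable, which matters when the same prompt is applied to thousands of documents.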
However, the integration of LLMs into academic workflows requires a critical focus on reliability and transparency. This workshop examines the practicalities of using LLMs as text coders and evaluators while addressing the inherent challenges of bias and hallucination. The session explores how to design robust frameworks to ensure that "LLM-as-a-judge" ratings remain consistent with human expertise and theoretical standards. Through a hands-on component, participants will learn to build a transparent pipeline that moves from raw data to validated, research-ready outputs, ensuring that AI-assisted methodologies enhance the integrity of research findings.
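One common way to check that "LLM-as-a-judge" ratings remain consistent with human expertise is to double-code a subsample by hand and compute inter-rater agreement between human and model labels. The sketch below illustrates this with Cohen's kappa and a confusion matrix from scikit-learn; the labels are invented for illustration, not drawn from any real study.

```python
# Sketch: validating LLM-assigned codes against a human-coded subsample.
# The labels below are invented for illustration; in practice they would come
# from trained human coders and from the LLM coding pipeline's output.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

human_labels = ["supportive", "critical", "neutral", "critical", "supportive", "neutral"]
llm_labels   = ["supportive", "critical", "neutral", "neutral",  "supportive", "neutral"]

# Cohen's kappa measures agreement beyond chance; values near 1 indicate
# the LLM "judge" closely reproduces human coding decisions.
kappa = cohen_kappa_score(human_labels, llm_labels)
print(f"Cohen's kappa (human vs. LLM): {kappa:.2f}")

# The confusion matrix shows which categories the model tends to confuse,
# which can guide prompt or codebook revisions before scaling up.
labels = ["supportive", "critical", "neutral"]
print(confusion_matrix(human_labels, llm_labels, labels=labels))
```

Reporting agreement statistics alongside the coded data is one way to keep an AI-assisted pipeline transparent and auditable from raw text to research-ready output.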
Jan. 25, 3:05 PM - 4:25 PM