Technical paper
Systematic Bias in LLM-Generated Relationship Advice Across User Gender and Religious Identity
Currently in development: my forthcoming research paper investigates gender and religious-identity bias in AI-generated content, using NLP classifiers and supervised machine learning to detect systematic linguistic differences in LLM outputs. Upon completion, it will be submitted to peer-reviewed journals and conferences.
Abstract: Large language models (LLMs) are increasingly used as informal sources of advice by people in crisis, including those seeking guidance on relationship dilemmas. This study examines whether leading publicly available LLMs produce advice that differs in style and quality when identical prompts are adapted to reflect the user's gender or religion. ChatGPT and Gemini were tested across systematically varied identity conditions, with a zero-shot classification framework used to quantify the presence of eight theoretically grounded advice dimensions, including emotional support, practical problem-solving, moral evaluation, alarmism, de-escalation, and personal agency. Significant differences in advice style were found across both gender and religious identity groups, with the nature and magnitude of these differences varying by advice type and by model. These results suggest that LLMs do not dispense advice neutrally, and that users may receive systematically different guidance based on their perceived identity. The implications of these disparities for equity, user safety, and the responsible deployment of LLMs in emotionally sensitive contexts are discussed.
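For readers curious about the zero-shot scoring step the abstract describes, below is a minimal sketch, assuming a Hugging Face NLI model. The model choice (facebook/bart-large-mnli), the score_advice helper, and the example response text are illustrative assumptions, not the study's actual configuration, and the label list covers only the six of the eight dimensions that the abstract names.

```python
# Minimal sketch of zero-shot scoring of advice dimensions.
# Assumptions: Hugging Face transformers with an NLI model; the study's
# actual model, prompts, and full eight-dimension label set may differ.
from transformers import pipeline

# Six of the eight advice dimensions named in the abstract
# (the remaining two are not named there).
ADVICE_DIMENSIONS = [
    "emotional support",
    "practical problem-solving",
    "moral evaluation",
    "alarmism",
    "de-escalation",
    "personal agency",
]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def score_advice(text: str) -> dict[str, float]:
    """Return an independent 0-1 score per dimension.

    multi_label=True scores each label on its own, so the scores do
    not compete with one another in a softmax.
    """
    result = classifier(text,
                        candidate_labels=ADVICE_DIMENSIONS,
                        multi_label=True)
    return dict(zip(result["labels"], result["scores"]))

# Example: score one LLM response to an identity-varied prompt
# (the response text here is a made-up illustration).
response = ("I'm so sorry you're going through this. "
            "Have you considered couples counseling?")
print(score_advice(response))
```

Scores like these can then be compared across identity conditions (e.g., the same prompt framed for different genders or religions) to test for the systematic stylistic differences the abstract reports.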
April 2026 - I have been invited to present my research as a speaker at Fordham University's 4th annual research event, From Data to Discovery: Interdisciplinary Advances in AI and Data Science Workshop.
May 2026 - I had the privilege of presenting my research at Fordham University, alongside PhD candidates and faculty including the Vice Dean of the Graduate School of Arts and Sciences.