Presentation Information

[4Yin-A-39]Fairness Evaluation in Recruitment using LLMs: Analysis of Job Selection Bias through Simulated Personalities

〇Kosuke Kitahara1 (1. Recruit Co., Ltd.)

Keywords:

Bias and Fairness,LLMs,Fairness

The rapid proliferation of Large Language Models (LLMs) offers significant potential for automating recruitment, yet the requirement for fairness necessitates cautious implementation. This study investigates the challenges of LLM fairness by analyzing 40 simulated job descriptions containing gender-stereotypical vocabulary (20 sets each of agentic and communal wording). I examined the selection behavior of LLM-assigned personas: career agents and job seekers. The results revealed that simulated female job seekers showed a significant preference for communal vocabulary over agentic wording, while no such significant difference was observed for male personas. Conversely, the career agent persona recommended job descriptions fairly, regardless of the candidate's gender. These findings suggest that while LLMs can maintain neutrality in professional roles, they may mirror real-world societal biases when performing self-selection tasks. Consequently, replacing human self-selection with LLMs requires careful oversight to prevent underlying biases from skewing recruitment outcomes.