Presentation Information
[4Yin-A-46]Analysis of the Neglect-Zero Effect in Large Language Models
〇Jin Tanaka1, Daiki Matsuoka1,2, Ryoma Kumon1,2, Hitomi Yanaka1,2,3 (1. The University of Tokyo, 2. RIKEN, 3. Tohoku University)
Keywords:
large language models, inference, the neglect-zero effect, structural priming
LLMs process natural language in a way that superficially resembles human language use, and there is significant interest in whether the underlying mechanism resembles human cognitive processes. Against this backdrop, we focus on a human cognitive bias called the neglect-zero effect. It has been hypothesized that this effect underlies certain types of linguistic inference that conventional theories cannot explain, and this hypothesis has been experimentally verified with human subjects. This raises the question of whether the cognitive processing of LLMs also involves the neglect-zero effect. To address this question, we use structural priming to compare how LLMs process inferences believed to be driven by the neglect-zero effect with how they process an inference not driven by it. The results suggest that the neglect-zero effect is absent in the LLMs used in this study.
