Presentation Information

[4Yin-A-01] The Effect of LLM Response Styles on Users' Misinformation Detection Sensitivity

Gakuei SUGIURA1, Yuya TAKADA1, Kira FURUKAWA1, Tatsunosuke YOSHIME1, 〇ZIXIN CUI1 (1. University of Tsukuba)

Keywords:

LLM, Response Styles, Trust, Misinformation Sensitivity

As large language models (LLMs) become increasingly integrated into society, concerns have emerged about users accepting misinformation because of overreliance on these systems. This study experimentally examined how LLM response styles affect users' sensitivity to misinformation. The trust-inducing response style did not produce a significant difference in subjective trust ratings; however, compared with the neutral response style, it led participants to rate misinformation as significantly more accurate and significantly lowered their misinformation detection ability. A mediation analysis further revealed that the trust-inducing response style increased the perceived persuasiveness of misinformation, which in turn biased users' decision criteria and lowered their threshold for detecting misinformation. Independently of the trust changes induced by the response-style manipulation, higher cognitive trust itself was also found to relax decision criteria, increasing users' tolerance of misinformation.
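The mediation pathway described above (response style → perceived persuasiveness → accuracy judgment) can be sketched with a standard product-of-coefficients mediation estimate and a percentile bootstrap. This is a minimal illustration, not the authors' actual analysis: the variable names, simulated data, and effect sizes are hypothetical.

```python
import numpy as np

def mediation_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Estimate the indirect effect x -> m -> y as a*b, where
    a is the slope of m on x and b is the slope of y on m
    controlling for x, with a percentile bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))

    def product_of_coefs(xi, mi, yi):
        a = np.polyfit(xi, mi, 1)[0]               # path a: x -> m
        X = np.column_stack([np.ones_like(xi), xi, mi])
        b = np.linalg.lstsq(X, yi, rcond=None)[0][2]  # path b: m -> y | x
        return a * b

    ab = product_of_coefs(x, m, y)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                # resample cases
        boots[i] = product_of_coefs(x[idx], m[idx], y[idx])
    ci_low, ci_high = np.percentile(boots, [2.5, 97.5])
    return ab, (ci_low, ci_high)

# Hypothetical data: x = condition (0 = neutral, 1 = trust-inducing),
# m = perceived persuasiveness, y = accuracy rating of misinformation.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 300).astype(float)
m = 0.8 * x + rng.normal(0, 1, 300)
y = 0.7 * m + rng.normal(0, 1, 300)
ab, (lo, hi) = mediation_indirect_effect(x, m, y)
```

If the bootstrap confidence interval for `ab` excludes zero, the indirect effect through perceived persuasiveness is considered significant, which is the form of evidence the abstract reports.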