Presentation Information

[4H5-OS-4c-05]Examining the Effects of Background Color on Trust Judgments in Vision-Language Models

〇Takumi Sugiura1, Bojian DU2, Shohei Matsugu2, Ryuji Watanabe3 (1. GMO Internet, Inc., 2. GMO Internet Group, Inc., 3. GMO Pepabo, Inc.)

Keywords:

Vision-Language Models, Bias and Fairness, Multimodal AI

In recent years, Vision-Language Models (VLMs) have been increasingly applied to diverse domains, including decision support. However, the influence of non-linguistic, visual context on model judgments has not been sufficiently investigated. In this study, we examined the effect of background color on trust judgments by presenting trust-related and distrust-related words on either red or blue backgrounds and asking VLMs to assign trustworthiness scores. Experimental results showed that, across all tested models—GPT-4.1-mini, Claude Sonnet 4, and Gemini 2.0 Flash—trust-related words received significantly higher scores on blue backgrounds than on red backgrounds, indicating a cross-model color bias. In contrast, a significant difference for distrust-related words was observed only in GPT-4.1-mini, suggesting that color bias is more likely to manifest in trust judgments than in distrust judgments. Furthermore, we evaluated bias mitigation through prompt-based control and found that its effectiveness varied by model, indicating that prompt control is not a universally reliable countermeasure. These findings highlight the necessity of carefully designing and validating visual representations when deploying VLMs for decision-support applications.
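As a rough illustration of the comparison described above, the following stdlib-only Python sketch aggregates trustworthiness scores by word category and background color. The scores, the 0–100 scale, and the `color_gap` helper are placeholder assumptions for illustration, not the authors' actual data or pipeline; in the real experiment, each score would come from querying a VLM with a word-on-color stimulus image.

```python
from statistics import mean

# Hypothetical trustworthiness scores (assumed 0-100 scale) for each
# (word category, background color) condition. In the actual study these
# would be collected by repeatedly querying a VLM with images of
# trust-related or distrust-related words on red or blue backgrounds.
scores = {
    ("trust", "blue"): [82, 79, 85, 80],
    ("trust", "red"): [70, 68, 74, 71],
    ("distrust", "blue"): [25, 28, 24, 27],
    ("distrust", "red"): [26, 27, 23, 28],
}

def color_gap(category: str) -> float:
    """Mean score on blue minus mean score on red for one word category."""
    return mean(scores[(category, "blue")]) - mean(scores[(category, "red")])

# A clearly positive gap for "trust" and a near-zero gap for "distrust"
# would mirror the reported pattern of blue-background bias.
print(round(color_gap("trust"), 2))     # gap for trust-related words
print(round(color_gap("distrust"), 2))  # gap for distrust-related words
```

In the actual study, the per-condition score difference would be tested for statistical significance rather than just compared as means, but the aggregation structure is the same.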