Presentation Information
[3O2-IS-3-02] Narrative Identity in LLM-Based Multi-Agent Research Collaboration: Simulation Modeling and Numerical Study
〇Chao Li1, Tianxiang Yang2, Masayuki Goto1 (1. Waseda University, 2. Keio University)
regular
Keywords:
LLM, multi-agent system, narrative identity, co-research, simulation analysis
Recently, large language models (LLMs) have become a core backbone of multi-agent systems for complex tasks, particularly in scientific research. However, most existing frameworks emphasize short-horizon objectives, leaving long-term intention, coherence, and recovery from setbacks under-specified. In this paper, we present a controlled numerical study of narrative identity in LLM-based multi-agent research collaboration. We construct a minimal simulation environment in which two agents jointly develop a research project under limited resources and stochastic external shocks. Narrative identity is explicitly modeled as a shared, mutable narrative state and incorporated into agent policies via prompt-level conditioning, with a scalar parameter controlling the strength of narrative bias toward long-term coherence and repair-oriented behavior. This framework enables systematic analysis of collaboration dynamics, including trust evolution, affect accumulation, and post-shock recovery. Ablation results show that narrative-conditioned agents recover research quality and social trust more rapidly than narrative-agnostic baselines, without relying on explicit reward optimization. These findings suggest that narrative identity serves as an effective coordination signal for stabilizing long-horizon collaboration in LLM-based multi-agent systems.
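The abstract describes narrative identity as a shared, mutable narrative state injected into agent policies via prompt-level conditioning, with a scalar parameter setting the strength of the narrative bias. A minimal sketch of how such conditioning might look is given below; all names (`NarrativeState`, `condition_prompt`, the `bias` field) are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class NarrativeState:
    """Shared, mutable narrative state jointly held by the two agents (hypothetical)."""
    story: str = "We are a two-agent team pursuing a long-term research project."
    bias: float = 0.5  # scalar narrative-bias strength in [0, 1]; 0.0 = narrative-agnostic

    def update(self, event: str) -> None:
        """Fold an external shock or milestone into the shared narrative."""
        self.story += " " + event

def condition_prompt(task: str, state: NarrativeState) -> str:
    """Prepend the narrative state to an agent's task prompt, scaled by the bias weight."""
    if state.bias == 0.0:
        return task  # narrative-agnostic baseline: the raw task prompt
    emphasis = "strongly" if state.bias > 0.5 else "lightly"
    return (
        f"[Shared narrative, weight {state.bias:.1f}] {state.story}\n"
        f"Stay {emphasis} consistent with this narrative; prioritize "
        f"long-term coherence and repair-oriented behavior after setbacks.\n"
        f"Task: {task}"
    )
```

Under this reading, the ablation compares agents prompted via `condition_prompt` with a positive bias against baselines receiving only the raw task prompt.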
