Presentation Information
[4Yin-B-19]Surface and Internal Pathways by Which Typos Break Chain-of-Thought Reasoning
〇Shion Fukuhata1, Yoshinobu Kano1 (1. Shizuoka University)
Keywords:
LLM, Interpretability, Chain-of-Thought Reasoning
Large language models (LLMs) are sensitive to typos, yet how such perturbations propagate through Chain-of-Thought (CoT) reasoning to cause errors remains unclear. We identify reasoning-relevant tokens via Attention-aware Layer-wise Relevance Propagation (AttnLRP) and apply targeted perturbations to analyze their effects on CoT reasoning. Experiments on five models across three benchmarks reveal two independent pathways from typos to errors: perturbations can alter answers by changing either the surface text or the key concepts in CoT reasoning.
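The targeted-perturbation step described above can be illustrated in isolation. The actual method scores tokens with AttnLRP over a real model; the sketch below instead uses hypothetical relevance scores and a simple adjacent-character-swap typo model, so the function names, scores, and example sentence are all illustrative assumptions, not the authors' implementation.

```python
import random

def inject_typo(word: str, rng: random.Random) -> str:
    # Swap two adjacent characters (a common typo model);
    # words shorter than 2 characters are returned unchanged.
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb_relevant_tokens(tokens, relevance, k, seed=0):
    # Perturb only the k tokens with the highest (hypothetical)
    # relevance scores, leaving all other tokens intact.
    rng = random.Random(seed)
    top = set(sorted(range(len(tokens)),
                     key=lambda i: relevance[i], reverse=True)[:k])
    return [inject_typo(t, rng) if i in top else t
            for i, t in enumerate(tokens)]

tokens = ["If", "Alice", "has", "3", "apples", "and", "buys", "2", "more"]
relevance = [0.1, 0.9, 0.1, 0.8, 0.7, 0.1, 0.3, 0.8, 0.2]
print(perturb_relevant_tokens(tokens, relevance, k=2))
```

Comparing the model's CoT output on the clean and perturbed prompts would then reveal whether an answer change is driven by the surface text or by a shift in the key concepts of the reasoning chain.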
