Presentation Information

[1Yin-A-04] Effects of Symbolic Syntactic Manipulation on Content Filtering in Large Language Models: An Experimental Observation of the Effects of Symbol Insertion Patterns on Semantic Chaining

〇Masato Suzuki1 (1. West Los Angeles College)

Keywords:

Large Language Models, Tokenization, Attention Mechanisms, Syntactic Perturbation, Content Control

This study presents an observational analysis of vulnerabilities in content control and semantic understanding in large language models (LLMs), approached from the perspective of syntactic manipulation. Specifically, we report cases in which a simple syntactic rule—periodic insertion of full-width symbols—affects tokenization and attention mechanisms, resulting in outputs that include topics typically subject to suppression. A key characteristic of this method is that it is reproducible with plain-text input alone, requiring no adversarial training or external tools. The observations suggest that periodic symbol insertion disrupts lexical units and interferes with the tracking of semantic chains, thereby influencing the behavior of content-control algorithms. This work provides foundational insights for re-examining the relationship between semantic understanding and content control in LLMs, and contributes to future discussions on safety-oriented design and evaluation methodologies.