Presentation Information
[3Yin-A-04] Four-Layer External Structuring for LLM Interactions: A Zero-Infrastructure Dialogue Control Architecture
〇Nobuaki Hori¹ (1. Independent Researcher)
Keywords:
Large Language Models, Dialogue Structuring, Cognitive Load, Human–AI Interaction, Dialogue Control Architecture
In free-form interactions with Large Language Models (LLMs), users must manage not only task-related content but also dialogue structure, including information ordering, ambiguity resolution, and consistency maintenance. This dual-task burden introduces extraneous cognitive load, often resulting in rework, instability, and reduced reproducibility. This paper proposes a zero-infrastructure dialogue control architecture that enables structured and stable LLM interactions without additional computational resources or model retraining. The framework externalizes structural management through four layers—State, Filter, Guard, and Evaluation—relocating dialogue control outside the model while preserving generative capability. To operationalize this approach across task characteristics, we define a two-axis design space based on inference autonomy and structural fixity, deriving three structural presets: Static Stage-Based (SD1), Dynamic Exploratory (DX1), and Thought-Design (TD1). A small-scale exploratory user study (N=5) comparing these presets with free-form dialogue indicates reduced rework, lower perceived cognitive load, improved expectation alignment, and increased structural stability.
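The four-layer separation can be pictured as a thin control loop wrapped around an ordinary chat-completion call, with State, Filter, Guard, and Evaluation each handled outside the model. The sketch below is a minimal conceptual illustration only, not the paper's implementation; all names (DialogueState, apply_filter, guard_check, evaluate, call_llm) are hypothetical stand-ins, and the placeholder checks would be replaced by preset-specific rules.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """State layer: tracks task stage and accumulated context outside the model."""
    stage: str = "intake"
    history: list[str] = field(default_factory=list)

def apply_filter(state: DialogueState, user_input: str) -> str:
    """Filter layer: constrain the prompt to the current stage's scope."""
    return f"[stage: {state.stage}]\n{user_input}"

def guard_check(response: str) -> bool:
    """Guard layer: reject outputs that violate structural constraints.
    (Placeholder check; a real preset would encode its own constraints.)"""
    return len(response.strip()) > 0

def evaluate(response: str) -> str:
    """Evaluation layer: decide whether to accept the turn or retry."""
    return "accept" if guard_check(response) else "retry"

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion API; the architecture is model-agnostic."""
    return f"(model output for: {prompt!r})"

def turn(state: DialogueState, user_input: str) -> str:
    """One externally structured dialogue turn: filter, generate, guard, evaluate."""
    prompt = apply_filter(state, user_input)
    response = call_llm(prompt)
    if evaluate(response) == "accept":
        state.history.append(response)
    return response

state = DialogueState()
print(turn(state, "Summarize the requirements."))
```

Because every layer lives outside the model, this loop requires no retraining or added inference infrastructure; swapping presets such as SD1, DX1, or TD1 would amount to changing the filter, guard, and evaluation rules around the same generative call.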
