Presentation Information
[1K3-GS-3a-05]Towards Abstraction of User Edits as Reusable Prompts
〇Michiaki Tatsubori1, Makoto Kogo1 (1. IBM Japan)
Keywords:
Prompt Engineering, Human-in-the-Loop, Content Generation Workflow
In GenAI-guided content production, users refine LLM outputs through direct editing, but these refinements are typically discarded, forcing users to repeat the same corrections. We propose a system that automatically abstracts phrase-level edits into reusable prompt policies via hierarchical clustering and appropriate-abstraction ranking. Our approach represents each edit as a difference vector between phrase embeddings and constructs an edge-weighted DAG over these vectors. We introduce an Appropriate Abstraction (AA) scoring mechanism that evaluates clusters on inter-cluster distance, conciseness, uniqueness, and accuracy, and then ranks leaf-covering forests to suggest optimal abstraction levels. Unlike Human-in-the-Loop methods limited to simple feedback signals, our system captures complex modification patterns. Compared with Automated Prompt Engineering, which requires extensive training data, our method works with partial modifications and fewer API calls, reducing computational cost. Experiments demonstrate successful extraction of rewriting policies such as "increase formality and conciseness," applicable as enhanced prompts or post-generation filters.
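The core idea of representing edits as embedding-difference vectors and grouping them hierarchically can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it uses a toy character-bigram embedding in place of a real sentence-embedding model, single-linkage agglomerative clustering with a fixed similarity cutoff in place of the paper's DAG construction and AA-ranked forest selection, and invented example edits.

```python
# Hypothetical sketch: represent each phrase-level edit as a difference
# vector between embeddings of the original and edited phrase, then
# hierarchically cluster those vectors so edits sharing a rewriting
# direction (e.g. "make it more formal") end up in the same group.
import math
from collections import Counter

def embed(phrase: str) -> Counter:
    """Toy embedding: character-bigram counts (stand-in for a real model)."""
    s = phrase.lower()
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def diff_vector(before: str, after: str) -> Counter:
    """Edit representation: embedding(after) - embedding(before)."""
    d = Counter(embed(after))
    d.subtract(embed(before))
    return d

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_edits(edits, threshold=0.3):
    """Single-linkage agglomerative clustering of edit difference vectors.

    Repeatedly merges the two most similar clusters until the best
    pairwise similarity drops below `threshold`; returns lists of edit
    indices (a flat cut through the hierarchy).
    """
    vecs = [diff_vector(b, a) for b, a in edits]
    clusters = [[i] for i in range(len(vecs))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = max(cosine(vecs[a], vecs[b])
                          for a in clusters[i] for b in clusters[j])
                if sim > best:
                    best, pair = sim, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] += clusters.pop(j)
    return clusters

# Invented example edits: two share an informal->formal direction,
# the third shortens a phrase, so it stays in its own cluster.
edits = [
    ("gonna check it", "will verify it"),
    ("gonna fix this", "will repair this"),
    ("the quick brown fox", "fox"),
]
groups = cluster_edits(edits)
```

In the full method, each cluster would then receive an AA score and a natural-language abstraction (e.g. "increase formality"), and a ranked leaf-covering forest would pick the granularity at which to emit reusable prompt policies.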
