Presentation Information

[5L3-OS-6b-02]Enhancing Interpretability of Concept Drift Analysis with Large Language Models

〇Keita Sakuma1, Ryuta Matsuno1, Tatsuaki Nemoto1, Masakazu Hirokawa1 (1. NEC Corporation)

Keywords:

Concept Drift, Visualization, Large Language Model, Decision Tree, Exploratory Data Analysis

Concept drift, in which the relationship between features and the target changes over time, is a critical challenge that requires understanding where and why the change occurs. To address this, our prior work, CDST-Viz, uses a decision tree to segment the feature space and visualizes drift patterns as a treemap; however, complex trees are hard to grasp, and interpreting them requires both statistical and domain expertise. We propose CDST-Narrator, an exploratory analysis tool that integrates large language model (LLM)-based natural language explanations into CDST-Viz. It provides factual summarization and hypothesis generation, drawing on the LLM's world knowledge, through three tasks: overall summary, cause analysis, and node-level explanation. Case studies on real-world datasets confirmed that CDST-Narrator extracts findings consistent with prior work while also providing novel insights.
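As a hedged illustration of the segmentation idea (not the authors' implementation: CDST-Viz learns a decision tree over the feature space, which is approximated below by a single fixed split), drift can be localized by comparing the feature-target relationship within each segment across two time windows:

```python
# Hypothetical sketch: localize concept drift by partitioning the feature
# space into segments and measuring how P(y=1 | segment) changes between
# an old and a new time window. Data and the split point are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Old window: y = 1 exactly when x > 0.5.
x_old = rng.uniform(0, 1, 1000)
y_old = (x_old > 0.5).astype(int)

# New window: the relationship flips in the upper segment (x > 0.5),
# while the lower segment is unchanged -- a localized concept drift.
x_new = rng.uniform(0, 1, 1000)
y_new = np.zeros_like(x_new, dtype=int)

def segment_rates(x, y, threshold=0.5):
    """Mean of y in each feature-space segment (stand-in for tree leaves)."""
    lo = y[x <= threshold].mean()
    hi = y[x > threshold].mean()
    return lo, hi

lo_old, hi_old = segment_rates(x_old, y_old)
lo_new, hi_new = segment_rates(x_new, y_new)

# Per-segment drift magnitude: change in the target rate between windows.
drift_lo = abs(lo_new - lo_old)
drift_hi = abs(hi_new - hi_old)
print(f"segment x <= 0.5 drift: {drift_lo:.2f}")  # prints 0.00 (stable)
print(f"segment x >  0.5 drift: {drift_hi:.2f}")  # prints 1.00 (drifted)
```

In CDST-Narrator, per-segment statistics like these would be rendered as a treemap and passed to the LLM as context for the summary, cause-analysis, and node-level explanation tasks.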