Presentation Information
[4Yin-A-05]Toward Explainable Unsupervised Learning: Interpreting Cluster Semantics with Local Large Language Models
〇Caio César Pinheiro de Moura1, Keisuke Niimi1, Yusuke Yamashina1, Kazuma Shiomi1, Yoshihiko Ichikawa1, Tsukasa Kamo2, Atsushi Kuboya2, Yuji Ayusawa2, Tatsuya Yamamoto2, Keigo Yoshida2 (1. Insight Edge, Inc., 2. SCSK Corporation)
Keywords:
XAI, LLM, Clustering
Interpreting clustering results requires not only statistical insight but also domain-specific knowledge, and such interpretation is often dependent on the analyst’s experience and subjective judgment. This reliance increases interpretation cost and limits objectivity, particularly in unsupervised learning settings. In this paper, we propose a framework for enhancing the interpretability of clustering results using a locally deployed large language model (LLM), enabling interpretation in environments where sensitive data cannot be shared externally. The proposed method inputs cluster-level statistical features together with domain-specific contextual information into the local LLM, which then generates natural language descriptions characterizing the meaning and properties of each cluster. Through experiments on electroencephalography (EEG) data, we demonstrate that a local LLM can effectively integrate numerical patterns with expert knowledge and provide coherent, informative interpretations of unsupervised clustering outcomes. The results indicate that the proposed approach offers a practical and privacy-preserving solution for explainable analysis in unsupervised learning tasks.
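The pipeline described in the abstract — feeding cluster-level statistics together with domain context into a locally deployed LLM — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the feature names (`alpha_power`, `beta_power`), the toy data, and the choice of mean/std as the cluster-level statistics are assumptions, and the LLM call itself is left as a stub since it depends on the local deployment.

```python
import statistics

def summarize_clusters(features, labels, feature_names):
    """Compute per-cluster mean and std for each feature (cluster-level statistics)."""
    summary = {}
    for c in sorted(set(labels)):
        rows = [f for f, l in zip(features, labels) if l == c]
        stats = {}
        for i, name in enumerate(feature_names):
            col = [r[i] for r in rows]
            stats[name] = (statistics.mean(col), statistics.pstdev(col))
        summary[c] = {"size": len(rows), "stats": stats}
    return summary

def build_prompt(summary, domain_context):
    """Combine cluster statistics with domain-specific context into one prompt."""
    lines = [domain_context, "", "Cluster statistics (mean, std):"]
    for c, info in summary.items():
        lines.append(f"Cluster {c} (n={info['size']}):")
        for name, (mu, sd) in info["stats"].items():
            lines.append(f"  {name}: mean={mu:.2f}, std={sd:.2f}")
    lines.append("")
    lines.append("Describe the likely meaning of each cluster in plain language.")
    return "\n".join(lines)

# Toy EEG-like band-power features; labels would come from any clustering
# algorithm (the abstract does not specify which one was used).
features = [[0.8, 0.1], [0.9, 0.2], [0.1, 0.7], [0.2, 0.9]]
labels = [0, 0, 1, 1]
prompt = build_prompt(
    summarize_clusters(features, labels, ["alpha_power", "beta_power"]),
    "Context: EEG band powers; elevated alpha often indicates a relaxed state.",
)
# `prompt` would then be sent to a locally hosted LLM (e.g. an
# OpenAI-compatible endpoint served by llama.cpp or Ollama -- an assumption),
# which returns a natural-language description of each cluster.
```

Because only aggregate statistics and the prompt leave the analysis code, and the model runs locally, no raw subject-level EEG data needs to be shared externally, which is the privacy-preserving property the abstract emphasizes.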
