Presentation Information
[2Yin-B-39] Proposal for Rule Explainability Evaluation Metrics and Methods for Enhancing Explainability Using Generative AI
〇Atsushi Machida1, Motonobu Uchikoshi1 (1. The Japan Research Institute, Limited)
Keywords:
Generative AI, LLM, Rule Generation, Explainability, Industrial Application
AI technology is widely utilized in rule-based classification tasks to reduce costs and improve generalization performance, either by combining rules with AI models or by using AI to generate and update the rules themselves. While rule generation via generative AI has gained attention in recent years, the explainability of the generated rules remains a challenge in fields demanding high transparency and accountability, such as financial screening operations and medical diagnosis support. To address this challenge, this research proposes new evaluation metrics focused on the explainability of rules, together with a method that utilizes generative AI to improve these metrics. We conducted comparative experiments against existing methods, and the results confirmed that our approach can improve the explainability metrics while maintaining high classification performance.
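The abstract does not specify the proposed metrics, but one common family of explainability proxies scores a rule set by its structural simplicity. The sketch below is purely illustrative, assuming rules represented as lists of conditions and a hypothetical length-based score; it is not the metric defined in the paper.

```python
# Illustrative sketch of a rule-explainability proxy metric.
# Assumption: shorter rules (fewer conditions) are easier to explain.
# The function name, formula, and max_conditions cap are hypothetical,
# not taken from the paper described in this abstract.

def explainability_score(rules, max_conditions=10):
    """Score a rule set in [0, 1] by average rule length.

    Each rule is a list of atomic conditions; a score of 1.0 means
    rules average zero conditions, 0.0 means they are at or beyond
    the max_conditions cap.
    """
    if not rules:
        return 0.0
    avg_len = sum(len(conditions) for conditions in rules) / len(rules)
    return max(0.0, 1.0 - avg_len / max_conditions)

# Example: two rules from a hypothetical screening task, where each
# condition is a (feature, operator, threshold) triple.
rules = [
    [("income", "<", 3_000_000), ("age", "<", 25)],
    [("debt_ratio", ">", 0.6)],
]
print(round(explainability_score(rules), 2))  # average length 1.5 -> 0.85
```

In practice such a simplicity score would be reported alongside classification accuracy, since the paper's central claim is that both can be kept high simultaneously.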
