Presentation Information
[2Yin-A-21]Exact Algorithms for Learning Minimal Consistent Boolean Formulas from Examples
〇Kawata Yusuke1, HIROKI ARIMURA1 (1. Hokkaido University)
Keywords:
eXplainable AI, Learning Boolean Functions, Combinatorial Optimization, Enumeration, Parameterized Complexity
This research studies the learning of size-t Boolean formulas from the viewpoint of eXplainable AI (XAI). Specifically, we consider the "minimum consistent hypothesis finding" problem: finding a formula of size at most t that perfectly fits the given labeled examples. This task corresponds to proper learning from an arbitrary distribution in the realizable setting of learning theory, where the goal is to output a model from the same hypothesis class as the target concept.
To overcome the computational intractability of this problem, we propose two exact exponential-time algorithms. The first method is based on enumeration of all formulas within the size constraint. The second method is a dynamic programming approach that recursively verifies the existence of a consistent formula by adapting Razborov's "formula size game" (originally used for proving formula-size lower bounds) into a machine learning framework.
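To illustrate the first, enumeration-based method, the following is a minimal Python sketch (our own illustrative code, not the authors' implementation): it builds all formulas over AND, OR, and literals up to leaf size t, bottom-up by size, and represents each formula by its value vector on the m examples so consistency can be checked by a single comparison.

```python
def find_consistent_formula(examples, labels, t):
    """Brute-force search for a Boolean formula of size at most t
    (size = number of literal occurrences, connectives are AND/OR,
    negations only at the leaves) that is consistent with the labeled
    examples. Each candidate is stored as (description, value vector),
    where the value vector is the formula's output on every example.
    Returns a formula string, or None if no consistent formula exists."""
    n = len(examples[0])
    target = tuple(labels)
    # Size-1 formulas: the literals x_i and their negations ~x_i.
    by_size = {1: []}
    for i in range(n):
        pos = tuple(e[i] for e in examples)
        neg = tuple(1 - v for v in pos)
        by_size[1].append((f"x{i}", pos))
        by_size[1].append((f"~x{i}", neg))
    # Size-s formulas: combine a size-ls and a size-(s-ls) subformula
    # with AND or OR; value vectors are combined pointwise.
    for s in range(2, t + 1):
        by_size[s] = []
        for ls in range(1, s):
            for (d1, v1) in by_size[ls]:
                for (d2, v2) in by_size[s - ls]:
                    conj = tuple(a & b for a, b in zip(v1, v2))
                    disj = tuple(a | b for a, b in zip(v1, v2))
                    by_size[s].append((f"({d1} & {d2})", conj))
                    by_size[s].append((f"({d1} | {d2})", disj))
    # Report the first formula (smallest size first) matching the labels.
    for s in range(1, t + 1):
        for (d, v) in by_size[s]:
            if v == target:
                return d
    return None
```

For example, on the four examples of two variables labeled by XOR, no formula of size 3 exists, but size 4 suffices, e.g. (x0 & ~x1) | (~x0 & x1). A practical refinement would deduplicate candidates by value vector, since only the behavior on the m examples matters for consistency.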
We show that while both algorithms run in exponential time in the worst case, they exhibit different behavior in parameterized complexity when parameterized by the formula size t and by the number of examples m, respectively.
Finally, we discuss extensions to the agnostic setting and improper learning to better handle noisy, real-world XAI applications.
