Presentation Information
[5Yin-A-32] Improvement of Computational Efficiency and Explainability by Sparse Binary Neural Networks
〇Taketo Shibata¹, Kunikazu Kobayashi² (1. Graduate School of Aichi Prefectural University, 2. Aichi Prefectural University)
Keywords: neural network, extreme learning machine, binarization, sparsification, XAI
This study proposes a novel three-layer neural network architecture that integrates full binarization and sparsification into Extreme Learning Machines (ELMs). Our goal is to address the high computational cost and limited transparency inherent in deep learning, enabling efficient deployment on edge devices. Unlike conventional Multi-Layer Perceptrons (MLPs) and Binary Neural Networks (BNNs), which rely on iterative backpropagation, our approach learns output weights with a non-iterative algorithm based on the correlation between hidden-layer outputs and class labels. By representing the entire model with sparse binary {0, 1} values, we eliminate floating-point multiplications and replace matrix operations with integer additions. Experimental results on the CIFAR-10 dataset demonstrate that our method reduces training time to approximately one third of that of a full-precision MLP and compresses the model to 1/190th of its size. Furthermore, it achieves a high accuracy of 93.42%, rivaling conventional models. These findings suggest that our approach effectively resolves the trade-off between computational efficiency and classification accuracy.
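To make the described pipeline concrete, the following is a minimal sketch of such a sparse binary ELM-style classifier. It is an illustrative reconstruction from the abstract alone, not the authors' implementation: the class name SparseBinaryELM, the density parameter, the median thresholding of hidden activations, and the quantile-based binarization of output weights are all assumptions. Only the overall structure — fixed sparse binary {0, 1} hidden weights, a single non-iterative correlation step in place of backpropagation, and integer-addition-only inference — comes from the abstract.

import numpy as np

rng = np.random.default_rng(0)

class SparseBinaryELM:
    def __init__(self, n_in, n_hidden, n_classes, density=0.1):
        # Fixed, untrained hidden weights: sparse binary {0, 1}
        # (density is an assumed hyperparameter controlling sparsity).
        self.W = (rng.random((n_in, n_hidden)) < density).astype(np.int64)
        self.n_classes = n_classes
        self.B = None  # output weights, learned once, non-iteratively

    def _hidden(self, X):
        # The integer product of binary inputs and binary weights just
        # counts coincident 1s; thresholding (here, an assumed per-sample
        # median) yields a binary hidden code.
        s = X.astype(np.int64) @ self.W
        return (s >= np.median(s, axis=1, keepdims=True)).astype(np.int64)

    def fit(self, X, y):
        # Non-iterative correlation learning: accumulate co-occurrence
        # counts between hidden units and class labels (no backprop).
        H = self._hidden(X)                            # (N, n_hidden)
        Y = np.eye(self.n_classes, dtype=np.int64)[y]  # one-hot labels
        C = H.T @ Y                                    # correlation counts
        # Keep only strongly correlated unit-class pairs so that the
        # output weights are themselves sparse and binary (assumed rule).
        self.B = (C >= np.quantile(C, 0.9)).astype(np.int64)

    def predict(self, X):
        # Inference uses only integer additions: each class score is a
        # count of active hidden units connected to that class.
        return np.argmax(self._hidden(X) @ self.B, axis=1)

# Toy usage with random binary data; real inputs such as CIFAR-10
# images would first be binarized.
X = (rng.random((200, 64)) > 0.5).astype(np.int64)
y = rng.integers(0, 10, size=200)
clf = SparseBinaryELM(n_in=64, n_hidden=256, n_classes=10)
clf.fit(X, y)
print((clf.predict(X) == y).mean())

Because the hidden weights are never trained and the output weights are set in one pass of counting, training cost is dominated by a single integer matrix product, which is consistent with the reported reduction in training time and model size.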
