Presentation Information
[17p-WL1_301-11]High-speed machine learning by an onboard nonlinear analogue network with reconfigurable input mechanisms
〇Noriko Hiroi1,2, Takato Chiba1, Taiki Matsunaga1, Tetsuya Gokan1, Rinya Tajima1, Andre Antezana1 (1.KAIT, 2.Keio Univ.)
Keywords:
contrastive local learning network, analog computer, FPGA
We implemented a pair of free and clamped networks on a single board, comprising an analogue electronic contrastive local learning network based on self-adjusting transistor-based nonlinear resistive elements, as proposed by Dillavou et al. [1]. After verifying its operation, we constructed a 16-edge network with an FPGA providing the inputs and investigated its performance as a learning device. Compared with the best-performing software two-layer neural network trained on sine waves on an Intel Xeon CPU (2.20 GHz), an A100 GPU, and a v5e1 TPU, the analogue system achieved equivalent accuracy in fewer than 1/187 of the epochs, 10⁻⁷ of the total training time, and 10⁻⁹ of the total energy consumption. Since the original system is known to exhibit representation drift phenomena similar to those of the human brain, future applications are being considered not merely for complex problems but for tasks where analogue networks demonstrate particular strengths.
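As a point of reference for the software baseline described above, the following is a minimal sketch of a two-layer neural network fitted to a sine wave with plain gradient descent. The architecture (1→16→1, tanh activation), hyperparameters, and training loop are illustrative assumptions, not the authors' actual benchmark configuration.

```python
# Minimal sketch of a software two-layer network trained on a sine wave.
# All sizes and hyperparameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Training data: one period of a sine wave.
x = np.linspace(0.0, 2.0 * np.pi, 64).reshape(-1, 1)
y = np.sin(x)

# Two-layer network: 1 -> 16 (tanh) -> 1.
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)
lr = 0.1

for epoch in range(5000):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    pred = h @ W2 + b2            # network output
    err = pred - y                # residual
    # Backpropagate the mean-squared error.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh' = 1 - tanh^2
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    # Full-batch gradient-descent step.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

In a hardware benchmark like the one described, such a loop is timed and energy-profiled per epoch on each backend (CPU, GPU, TPU) and compared against the analogue network's convergence.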
1. S. Dillavou, B.D. Beyer, M. Stern, A.J. Liu, M.Z. Miskin, and D.J. Durian, Machine learning without a processor: Emergent learning in a nonlinear analogue network, Proc. Natl. Acad. Sci. U.S.A. 121 (28) e2319718121, https://doi.org/10.1073/pnas.2319718121 (2024)
