Presentation Information
[O5-06] Bach and Bayes: Prediction in Noisy Musical Sequences
*Akanksha Gupta1, Alejandro Tabas2,3 (1. INS, INSERM, Aix-Marseille University, Marseille (France), 2. Perceptual Inference Group, Basque Center on Cognition, Brain and Language, San Sebastian (Spain), 3. Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig (Germany))
Keywords:
Predictive Processing, Bayesian Brain Hypothesis, Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs)
Information from the external environment is often uncertain and ambiguous, making it challenging for the brain to accurately infer the state of the world. According to the predictive processing framework, prior knowledge pertinent to inference is compressed into predictions about imminent future states, and these predictions are combined with sensory inputs through Bayesian belief updating. While this scheme is optimal for inferring latent states in certain stochastic systems, it may not be adequate for more complex systems such as music or language. In this work, we examine whether neural networks trained to infer the current latent state of a musical sequence also develop a capacity to predict what comes next.
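As a concrete illustration of this belief-updating scheme, consider a discrete latent state tracked by a Bayesian filter: the prior belief is propagated through a transition model to form a prediction, which is then reweighted by the likelihood of the incoming observation. This is a minimal sketch, not part of the study's code; the transition matrix and likelihoods below are hypothetical.

```python
import numpy as np

def belief_update(belief, T, likelihood):
    """One step of Bayesian filtering over a discrete latent state.

    belief:     p(z_{t-1} | x_{1:t-1}), shape (K,)
    T:          transition matrix, T[i, j] = p(z_t = j | z_{t-1} = i)
    likelihood: p(x_t | z_t = j) for each state j, shape (K,)
    """
    prediction = belief @ T               # prior prediction of the next state
    posterior = likelihood * prediction   # combine prediction with sensory evidence
    return posterior / posterior.sum()    # normalize to a probability distribution

# Toy example with K = 3 hypothetical latent states
T = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
belief = np.ones(3) / 3                   # flat initial belief
likelihood = np.array([0.1, 0.6, 0.3])    # evidence from one noisy observation
print(belief_update(belief, T, likelihood))
```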
To investigate this hypothesis, we used tokenized Bach compositions corrupted with noise as sensory inputs and gated recurrent unit (GRU) networks to model neural circuits. Training proceeded in two stages: the network was first trained to infer the current token from the noisy input; a linear readout of the network's internal states was then optimized to predict the next token, testing whether such predictions are already encoded in those states. We benchmarked the network's performance against an optimal Markovian model, which predicts the next token from the current token alone. Our findings demonstrate that neural circuits optimized for perceiving the current state also learn to predict future sensory input, suggesting that predictive capabilities emerge as a by-product of this optimization. This evidence strengthens the computational foundation of the predictive coding framework and offers insight into how biological systems may exploit prior knowledge to operate adaptively in uncertain environments.
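A minimal PyTorch sketch of the two-stage procedure is shown below. It is an illustration under stated assumptions, not the study's actual implementation: the vocabulary size, hidden size, and data loading are hypothetical, and only the structure (stage 1: infer the current token; stage 2: freeze the GRU and fit a linear probe on its hidden states to predict the next token) follows the abstract.

```python
import torch
import torch.nn as nn

VOCAB, HIDDEN = 128, 256  # hypothetical token vocabulary and hidden size

class GRUInference(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.current = nn.Linear(HIDDEN, VOCAB)    # stage-1 head: infer current token

    def forward(self, noisy_tokens):
        h, _ = self.gru(self.embed(noisy_tokens))  # internal states, (B, T, HIDDEN)
        return self.current(h), h

model = GRUInference()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stage 1: infer the current (clean) token from the noisy input.
# `noisy` and `clean` are (B, T) LongTensors from a corrupted-Bach loader (assumed).
def stage1_step(noisy, clean):
    logits, _ = model(noisy)
    loss = loss_fn(logits.reshape(-1, VOCAB), clean.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stage 2: freeze the GRU; fit only a linear readout of the hidden states
# to predict the *next* clean token.
probe = nn.Linear(HIDDEN, VOCAB)
probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

def stage2_step(noisy, clean):
    with torch.no_grad():                  # GRU states are fixed
        _, h = model(noisy)
    logits = probe(h[:, :-1])              # state at step t predicts token t+1
    loss = loss_fn(logits.reshape(-1, VOCAB), clean[:, 1:].reshape(-1))
    probe_opt.zero_grad()
    loss.backward()
    probe_opt.step()
    return loss.item()
```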
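For the Markovian benchmark, one plausible construction (an assumption; the abstract does not specify how the model was obtained) is a bigram table estimated from the clean corpus, which predicts the next token from the current token alone:

```python
import numpy as np

def bigram_model(corpus, vocab_size):
    """Estimate p(next token | current token) from token counts."""
    counts = np.ones((vocab_size, vocab_size))  # Laplace smoothing (an assumption)
    for piece in corpus:                        # `corpus`: list of token sequences
        for cur, nxt in zip(piece[:-1], piece[1:]):
            counts[cur, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Prediction uses only the current token, ignoring longer context:
# next_dist = P[current_token]
```

Comparing the linear probe's next-token accuracy against this baseline indicates whether the GRU's internal states carry predictive information beyond first-order transition statistics.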