Does the Human Brain Use Backpropagation and Gradient Descent for Learning?

Recent research has increasingly explored whether the human brain might use learning mechanisms analogous to backpropagation and gradient descent, the cornerstone algorithms for training artificial neural networks. In artificial systems, backpropagation propagates error signals backward through the layers of a network, yielding the gradients that guide precise weight adjustments via gradient descent. This process has proven highly effective for learning complex mappings from inputs to outputs in machine learning models. The biological brain, however, operates under different constraints: neurons communicate via electrical impulses and synaptic activity, and their plasticity is governed by local mechanisms such as spike-timing-dependent plasticity. This challenges the notion that the brain employs a centralized, layer-by-layer error propagation strategy identical to backpropagation.
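
For reference, here is a minimal sketch of what backpropagation and gradient descent look like in an artificial network (plain NumPy, toy data, illustrative layer sizes and learning rate): the output error is sent backward through the transposed forward weights, and each weight matrix is then adjusted down its gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> hidden -> y_hat, trained with squared error.
W1 = rng.normal(scale=0.1, size=(16, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 4))    # hidden -> output weights
lr = 0.05                                  # gradient descent step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    x = rng.normal(size=(32, 16))          # batch of toy inputs
    y = rng.normal(size=(32, 4))           # arbitrary toy targets

    # Forward pass
    h = sigmoid(x @ W1)
    y_hat = h @ W2

    # Backward pass: the error flows back through the transposed weights
    e_out = y_hat - y                      # output-layer error
    e_hid = (e_out @ W2.T) * h * (1 - h)   # hidden-layer error (chain rule)

    # Gradient descent: move each weight matrix against its gradient
    W2 -= lr * h.T @ e_out / len(x)
    W1 -= lr * x.T @ e_hid / len(x)
```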

Theoretical frameworks that attempt to bridge this gap propose that the brain might implement analogous processes through more biologically plausible mechanisms. One influential proposal is predictive coding, a framework in which neurons continually generate predictions about incoming sensory data and update their synaptic connections based on the discrepancies between these predictions and the actual inputs. This prediction error, which is computed locally, plays a role similar to the error signals used in backpropagation. Whittington and Bogacz (2017) demonstrated that predictive coding networks relying only on local Hebbian learning rules can approximate gradient descent even without the explicit backward flow of error signals found in artificial systems.
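
As an illustration of the general principle (not the specific model of Whittington and Bogacz, 2017), the sketch below assumes a single generative layer and toy data: a higher-level activity vector r is relaxed until its top-down prediction matches the input as well as possible, and the weights are then updated with a Hebbian-style rule that uses only the presynaptic activity and the locally computed prediction error.

```python
import numpy as np

rng = np.random.default_rng(1)

# One generative layer: higher-level activity r predicts the input x via W.
W = rng.normal(scale=0.1, size=(8, 16))    # top-down prediction weights
lr_r, lr_w = 0.2, 0.01                     # inference and learning rates

for step in range(500):
    x = rng.normal(size=16)                # toy "sensory" input
    r = np.zeros(8)                        # higher-level representation

    # Inference: relax r to reduce the locally computed prediction error
    for _ in range(20):
        error = x - r @ W                  # prediction error at the input layer
        r += lr_r * (error @ W.T - r)      # drive r toward a better prediction

    # Learning: Hebbian-style update using only locally available quantities
    # (presynaptic activity r and the prediction error at its targets).
    W += lr_w * np.outer(r, error)
```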

Another promising avenue is feedback alignment, which challenges the traditional requirement of symmetric forward and backward weights. In standard backpropagation, the backward pass reuses the transpose of the forward weights; the brain, however, appears to operate without such strict symmetry. Feedback alignment (Lillicrap et al., 2016) shows that even random or loosely correlated feedback connections can carry error signals sufficient to guide learning, enabling networks to adjust their synaptic weights effectively. This idea not only provides a more biologically realistic alternative but also hints at how the brain might achieve error-driven learning despite the apparent lack of precise backward signal propagation.
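
A sketch of the core trick, loosely following Lillicrap et al. (2016), is shown below: it mirrors the earlier backpropagation example, except that the error is sent backward through a fixed random matrix B instead of the transposed forward weights (all sizes and data are again toy placeholders).

```python
import numpy as np

rng = np.random.default_rng(2)

W1 = rng.normal(scale=0.1, size=(16, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 4))    # hidden -> output weights
B  = rng.normal(scale=0.1, size=(4, 8))    # fixed random feedback weights (never trained)
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    x = rng.normal(size=(32, 16))
    y = rng.normal(size=(32, 4))

    h = sigmoid(x @ W1)
    y_hat = h @ W2
    e_out = y_hat - y

    # Key difference from backpropagation: the error travels back through the
    # random matrix B rather than through W2.T, yet learning still works
    # because the forward weights gradually come to align with the feedback.
    e_hid = (e_out @ B) * h * (1 - h)

    W2 -= lr * h.T @ e_out / len(x)
    W1 -= lr * x.T @ e_hid / len(x)
```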

Moreover, models that focus on dendritic computation have provided additional insights into how biological neurons might perform complex error correction locally. The intricate structure of neurons, particularly the differentiation between apical and basal dendrites, offers a potential mechanism for segregating feedforward sensory input from feedback error signals. This compartmentalization could allow neurons to compute local error information and update synaptic weights accordingly. Research by Guerguiev, Lillicrap, and Richards (2017) explores how such segregated dendritic processing might serve as a substrate for a learning mechanism that approximates the gradient-based updates seen in artificial neural networks.
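
The sketch below is a loose caricature of this compartmental idea, not the specific model of Guerguiev, Lillicrap, and Richards (2017): each hidden neuron has a basal compartment that integrates feedforward input and determines its output, and an apical compartment that integrates top-down feedback; the apical signal acts as a local teaching signal for the basal synapses. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_hid, n_out = 16, 8, 4
W_basal  = rng.normal(scale=0.1, size=(n_in, n_hid))    # feedforward (basal) synapses
W_out    = rng.normal(scale=0.1, size=(n_hid, n_out))   # hidden -> output synapses
W_apical = rng.normal(scale=0.1, size=(n_out, n_hid))   # top-down (apical) feedback
lr = 0.05

for step in range(200):
    x = rng.normal(size=n_in)              # toy input
    y = rng.normal(size=n_out)             # toy target

    # Basal compartment: feedforward drive determines the neuron's firing
    basal = x @ W_basal
    h = np.tanh(basal)
    y_hat = h @ W_out

    # Apical compartment: top-down feedback carries information about the
    # output error and acts as a local teaching signal for each hidden neuron
    apical = (y - y_hat) @ W_apical

    # Local plasticity: each set of synapses is updated using only signals
    # available at the neuron it belongs to.
    W_out   += lr * np.outer(h, y - y_hat)
    W_basal += lr * np.outer(x, apical * (1 - h ** 2))
```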

Equilibrium propagation is another compelling framework that offers a route to bridging the gap between biological processes and computational algorithms. Proposed by Scellier and Bengio (2017), this approach lets a network settle into an equilibrium state during a free phase and then into a second, slightly perturbed equilibrium when its outputs are weakly nudged toward the target. The difference between these two states can be used to update synaptic weights in a manner that closely resembles gradient descent. This method circumvents some of the biological implausibilities associated with standard backpropagation, providing a model that aligns more closely with the continuous and dynamic nature of neuronal activity.
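
Below is a deliberately minimal, single-layer caricature of the two-phase procedure (the full formulation of Scellier and Bengio, 2017, uses symmetric recurrent connections and a Hopfield-style energy function): the output units first settle freely, then settle again while being weakly nudged toward the target, and the difference between the two equilibria drives a local, contrastive weight update.

```python
import numpy as np

rng = np.random.default_rng(4)

n_in, n_out = 16, 4
W = rng.normal(scale=0.1, size=(n_in, n_out))
lr, beta, dt, settle_steps = 0.01, 0.2, 0.2, 50

def settle(x, W, y=None, beta=0.0):
    """Relax the output units s toward a minimum of the simple energy
    E(s) = 0.5*||s||^2 - s.(x @ W), optionally nudged weakly toward y."""
    s = np.zeros(n_out)
    for _ in range(settle_steps):
        grad = s - x @ W                   # dE/ds for the free energy
        if y is not None:
            grad += beta * (s - y)         # weak pull toward the target
        s -= dt * grad
    return s

for step in range(200):
    x = rng.normal(size=n_in)              # toy input
    y = rng.normal(size=n_out)             # toy target

    s_free   = settle(x, W)                # free phase: no target information
    s_nudged = settle(x, W, y, beta)       # nudged phase: slightly perturbed

    # Contrastive, local update: the difference between the two equilibria
    # plays the role of a gradient signal (scaled by 1/beta).
    W += lr * np.outer(x, s_nudged - s_free) / beta
```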

Despite these innovative theoretical models, direct experimental evidence that the human brain employs a backpropagation-like mechanism remains limited. The majority of research indicates that learning in the brain is predominantly local and heavily dependent on complex temporal dynamics. Synaptic modifications occur over various time scales and are influenced by a host of factors, from neuromodulatory signals to intrinsic neuronal properties. This local, context-dependent nature of synaptic plasticity suggests that while the brain may perform error-driven learning, it does so through mechanisms that diverge significantly from the algorithmic precision of artificial neural network training.

In conclusion, although the human brain does not appear to use backpropagation and gradient descent in the strict sense applied in artificial systems, it may employ analogous strategies that achieve similar outcomes. Models based on predictive coding, feedback alignment, dendritic computation, and equilibrium propagation provide compelling frameworks that illuminate possible pathways for error-driven learning in the brain. These insights not only deepen our understanding of neural processes but also inspire the development of more biologically plausible learning algorithms in artificial intelligence. The ongoing exploration of these models continues to challenge our assumptions about learning, revealing the remarkable adaptability and complexity of the brain.

References

  • Whittington, J. C. R., & Bogacz, R. (2017). An approximation of the error backpropagation algorithm in a predictive coding network with local Hebbian synaptic plasticity. Neural Computation, 29, 1229–1262.
  • Guerguiev, J., Lillicrap, T. P., & Richards, B. A. (2017). Towards deep learning with segregated dendrites. arXiv preprint arXiv:1610.00161v3.
  • Scellier, B., & Bengio, Y. (2017). Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. Frontiers in Computational Neuroscience, 11, 24.
  • Lillicrap, T. P., Cownden, D., Tweed, D. B., & Akerman, C. J. (2016). Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications, 7, 13276.
