Evolutionary learning in neural networks by heterosynaptic plasticity

Zedong Bi*, Ruiqi Fu, Guozhang Chen, Dongping Yang, Yu Zhou, Liang Tian*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Training biophysical neuron models provides insights into the organization and problem-solving capabilities of brain circuits. Traditional training methods such as backpropagation face challenges with complex models due to instability and gradient issues. We explore evolutionary algorithms (EAs) combined with heterosynaptic plasticity as a gradient-free alternative. Our EA models agents as distinct information routes through the neurons of a shared network, evaluates them via alternating gating, and guides learning with dopamine-driven plasticity. This model draws inspiration from various biological mechanisms, including dopamine function, dendritic spine meta-plasticity, memory replay, and cooperative synaptic plasticity within dendritic neighborhoods. Neural networks trained with this model recapitulate brain-like dynamics during cognition. Our method effectively trains spiking and analog neural networks in both feedforward and recurrent architectures, and it achieves performance comparable to gradient-based methods on tasks such as MNIST classification and Atari games. Overall, this research extends the set of training approaches for biophysical neuron models, offering a robust alternative to traditional algorithms.
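To make the abstract's loop concrete: the sketch below illustrates, in Python, the general shape of a gradient-free population scheme in which agents are perturbations ("routes") of a shared weight vector, each route is evaluated in turn (standing in for alternating gating), and a scalar reward signal (standing in for dopamine) modulates a collective weight update. This is a minimal, assumption-laden illustration, not the authors' algorithm: the toy task, the fitness function, and the evolution-strategies-style update rule are all stand-ins chosen for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(w, x, y):
        # Hypothetical task: accuracy of a thresholded linear readout
        # on toy data (a stand-in for MNIST/Atari in the paper).
        pred = (x @ w > 0).astype(int)
        return (pred == y).mean()

    # Toy dataset.
    x = rng.normal(size=(200, 32))
    true_w = rng.normal(size=32)
    y = (x @ true_w > 0).astype(int)

    n_agents, sigma, lr = 16, 0.1, 0.5
    shared_w = rng.normal(size=32) * 0.1  # shared synaptic weights

    for generation in range(100):
        # Each "agent" is the shared network plus a private perturbation,
        # a rough analogue of a gated information route through shared neurons.
        routes = [sigma * rng.normal(size=32) for _ in range(n_agents)]
        # Alternating evaluation: each route is tested on the task in turn.
        rewards = np.array([fitness(shared_w + r, x, y) for r in routes])
        # A dopamine-like scalar reward modulates a collective update:
        # perturbed synapses shift together, weighted by relative reward.
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        shared_w += lr / n_agents * sum(a * r for a, r in zip(adv, routes))

    print("final accuracy:", fitness(shared_w, x, y))

The update step here is the standard evolution-strategies estimator rather than the heterosynaptic plasticity rule developed in the paper; it is included only to show where reward-modulated, gradient-free credit assignment enters such a training loop.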
Original language: English
Article number: 112340
Number of pages: 25
Journal: iScience
Volume: 28
Issue number: 5
Early online date: 3 Apr 2025
DOIs
Publication status: Published - 16 May 2025

User-Defined Keywords

  • Biophysical Neuron Model
  • Evolutionary Algorithms
  • Gradient-free Method
  • Heterosynaptic Plasticity
  • Neuroscience
  • Biological sciences
  • Biophysics
