Understanding the computational difficulty of a binary-weight perceptron and the advantage of input sparseness

Zedong Bi, Changsong Zhou

Research output: Contribution to journal › Journal article › peer-review

3 Citations (Scopus)

Abstract

Limited precision of synaptic weights is a key aspect of both biological and hardware implementations of neural networks. Assigning low-precision weights during learning is a non-trivial task, but it may benefit from representing the to-be-learned items with a sparse code. However, the computational difficulty resulting from low weight precision and the advantage of sparse coding are not yet fully understood. Here, we study a perceptron model that associates binary (0 or 1) input patterns with desired outputs using binary (0 or 1) weights, modeling a single neuron receiving excitatory inputs. Two efficient perceptron solvers (SBPI and rBP) usually find solutions in the dense solution region. We studied this process of approaching the dense region through a decimation algorithm in which, at every time step, the marginal probabilities of the unfixed weights are evaluated and the most polarized weight is fixed at its preferred value. We compared the order in which decimation fixes the weights with the dynamics of SBPI and rBP, and studied the structure of the solution subspace in which the early-decimation-fixed weights take their fixed values. We found that in SBPI and rBP, most time steps are spent determining the values of the late-decimation-fixed weights. This algorithmic difficulty may result from strong cross-correlation between the late-decimation-fixed weights within this subspace, and is related to solution condensation in the subspace during decimation. Input sparseness reduces the number of time steps that SBPI and rBP need to find solutions, because it reduces the cross-correlation between late-decimation-fixed weights. When the constraint density increases across a transition point, the fraction of weights whose values are difficult to determine increases sharply, signaling that the dense solution region becomes hard to access. We propose that the computational difficulty of a binary-weight perceptron is related to a solution condensation process during the approach to the dense solution region. Our work highlights the heterogeneity of the learning dynamics of different weights, which may help to understand axonal pruning in brain development and inspire more efficient algorithms for training neural networks.
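To make the decimation procedure concrete, here is a minimal Python sketch of the binary-weight perceptron and the decimation loop described in the abstract. The sizes N and P, the firing threshold theta, the sparseness level f, and the planted teacher vector (used only to guarantee a satisfiable instance) are illustrative assumptions, not the paper's settings; the paper evaluates marginal probabilities with message passing, whereas this sketch computes them exactly by brute-force enumeration, which is feasible only at toy scale.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-weight perceptron: N binary (0/1) inputs, binary (0/1) weights,
# modeling a single neuron receiving excitatory inputs. All sizes below are
# illustrative assumptions, chosen small enough for exhaustive enumeration.
N, P, theta, f = 12, 6, 3, 0.3            # inputs, patterns, threshold, sparseness
X = (rng.random((P, N)) < f).astype(int)  # sparse binary input patterns
w_teacher = rng.integers(0, 2, size=N)    # planted weights: guarantees
y = (X @ w_teacher >= theta).astype(int)  # a satisfiable instance

def is_solution(w):
    """True if weight vector w reproduces every desired output."""
    return np.array_equal((X @ w >= theta).astype(int), y)

def marginals(fixed):
    """Exact marginal P(w_i = 1) over the solution subspace consistent with
    the weights fixed so far. (The paper estimates these with message
    passing; brute force works here only because N is tiny.)"""
    sols = [w for w in itertools.product((0, 1), repeat=N)
            if all(w[i] == v for i, v in fixed.items())
            and is_solution(np.array(w))]
    return (np.mean(sols, axis=0), len(sols)) if sols else (None, 0)

# Decimation: at each step, fix the most polarized unfixed weight at its
# preferred value; the shrinking solution count tracks condensation.
fixed = {}
while len(fixed) < N:
    m, n_sols = marginals(fixed)
    if m is None:
        print("no solutions remain")
        break
    free = [i for i in range(N) if i not in fixed]
    i_star = max(free, key=lambda i: abs(m[i] - 0.5))  # most polarized weight
    fixed[i_star] = int(m[i_star] > 0.5)
    print(f"fixed w_{i_star} = {fixed[i_star]}; surviving solutions: {n_sols}")
```

Because each step fixes a weight at the value held by a majority of the surviving solutions, the subspace never empties here; watching n_sols shrink step by step gives a crude picture of the condensation the abstract refers to.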

Original language: English
Article number: 035002
Journal: Journal of Physics A: Mathematical and Theoretical
Volume: 53
Issue number: 3
DOIs
Publication status: Published - 24 Jan 2020

Scopus Subject Areas

  • Statistical and Nonlinear Physics
  • Statistics and Probability
  • Modelling and Simulation
  • Mathematical Physics
  • Physics and Astronomy (all)

User-Defined Keywords

  • binary-weight perceptron
  • computational difficulty
  • cross-correlation
  • decimation
  • input sparsity
  • solution condensation
