
Biological Neural Networks

Modern digital neural networks train using backpropagation. Biological systems, by contrast, learn through synaptic plasticity.


How Do Biological Neurons Train?

Biological neurons adjust their synaptic connections using mechanisms fundamentally different from backpropagation. These processes, collectively referred to as synaptic plasticity, govern how neurons learn and adapt. Below are the key mechanisms:


1. Hebbian Learning ("Fire Together, Wire Together")

  • Based on Donald Hebb's principle: when a presynaptic neuron consistently activates a postsynaptic neuron, the connection strengthens.
  • Mechanism (a minimal code sketch follows this list):
    • Long-Term Potentiation (LTP): Repeated and simultaneous activation increases synaptic strength by:
      • Inserting more neurotransmitter receptors (e.g., AMPA receptors) into the postsynaptic membrane.
      • Increasing neurotransmitter release from the presynaptic neuron.
    • Long-Term Depression (LTD): Weakens synapses when neurons fire out of sync.
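
As a rough illustration (not from this wiki), a Hebbian update can be written as a weight change proportional to the product of pre- and postsynaptic activity; the small decay term is our own addition to keep the toy weight bounded, loosely playing the role of LTD:

```python
def hebbian_update(w, pre, post, eta=0.01, decay=0.001):
    """One Hebbian step: co-active pre and post strengthen the synapse
    (LTP-like); a small passive decay weakens unused synapses (LTD-like)."""
    return w + eta * pre * post - decay * w

# Repeated correlated activity strengthens the connection:
# "fire together, wire together".
w = 0.1
for _ in range(100):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(f"weight after 100 correlated firings: {w:.3f}")
```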

2. Spike-Timing-Dependent Plasticity (STDP)

  • The precise timing of spikes (action potentials) determines whether synaptic strength increases or decreases:
    • If the presynaptic neuron fires just before the postsynaptic neuron, the connection strengthens (LTP).
    • If the presynaptic neuron fires after the postsynaptic neuron, the connection weakens (LTD).
  • This mechanism refines Hebbian learning and captures temporal relationships in neural activity; a simple model of the timing window is sketched below.
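
A common textbook model of the STDP window uses exponentials of the spike-time difference; the amplitudes and time constants below are illustrative choices, not values from this page:

```python
import math

def stdp_dw(dt_ms, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of spike timing dt_ms = t_post - t_pre.
    Pre fires just before post (dt_ms > 0) -> potentiation (LTP);
    pre fires just after post (dt_ms < 0)  -> depression (LTD)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

print(stdp_dw(5.0))   # pre fires 5 ms before post: positive change (strengthen)
print(stdp_dw(-5.0))  # pre fires 5 ms after post: negative change (weaken)
```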

3. Homeostatic Plasticity

  • To prevent runaway excitation or inhibition, neurons regulate their activity to maintain a stable average firing rate:
    • If a neuron is overactive, it reduces synaptic strength or receptor sensitivity.
    • If underactive, it increases synaptic strength.
  • Ensures balance and prevents instability in neural networks (see the scaling sketch below).
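
One standard way to model this is synaptic scaling, where all of a neuron's incoming weights are multiplicatively nudged toward a target firing rate; the rates and constants here are made up for illustration:

```python
import numpy as np

def synaptic_scaling(weights, observed_rate, target_rate=5.0, eta=0.1):
    """Multiplicatively scale all incoming weights toward a target rate:
    an overactive neuron scales its inputs down, an underactive one up."""
    factor = 1.0 + eta * (target_rate - observed_rate) / target_rate
    return weights * factor

w_in = np.array([0.5, 0.8, 0.3])
print(synaptic_scaling(w_in, observed_rate=10.0))  # overactive  -> weights shrink
print(synaptic_scaling(w_in, observed_rate=2.0))   # underactive -> weights grow
```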

4. Neuromodulation

  • Learning is influenced by neuromodulators like dopamine, serotonin, and acetylcholine, which signal the relevance or reward of activities.
  • Example:
    • Dopamine reinforces connections involved in rewarding actions, aligning neural activity with outcomes (a biological form of reinforcement learning); a sketch of such a reward-gated rule follows this list.
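
This is often formalized as a three-factor rule: a Hebbian term marks a synapse as eligible, and a global neuromodulatory signal gates whether the change is applied. The scalar `reward` below is a dopamine stand-in we invented for illustration:

```python
def reward_modulated_update(w, pre, post, reward, eta=0.01):
    """Three-factor rule: the Hebbian term (pre * post) is the eligibility,
    and the global reward signal decides whether it changes the weight."""
    return w + eta * reward * (pre * post)

w = 0.2
w = reward_modulated_update(w, pre=1.0, post=1.0, reward=+1.0)  # rewarded: strengthen
w = reward_modulated_update(w, pre=1.0, post=1.0, reward=-0.5)  # punished: weaken
print(f"{w:.4f}")
```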

5. Structural Plasticity

  • Biological neurons can grow new synapses or eliminate existing ones, reshaping the network itself:
    • Dendritic Spine Growth: Tiny protrusions on dendrites grow or shrink based on activity.
    • Synaptic Pruning: Weak or redundant connections are removed to improve efficiency.
  • This adaptability occurs on a slower timescale than LTP/LTD but is crucial for learning and memory; a toy prune-and-grow step is sketched below.
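
A toy sketch of structural plasticity on a weight matrix, treating zero entries as "no synapse": weak synapses are pruned and new ones occasionally sprout at empty sites. The threshold and growth probability are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def structural_step(w, prune_below=0.05, grow_prob=0.01):
    """Prune synapses weaker than a threshold (synaptic pruning) and
    occasionally create new ones at unconnected sites (spine growth)."""
    w = np.where(np.abs(w) < prune_below, 0.0, w)
    new_sites = (w == 0.0) & (rng.random(w.shape) < grow_prob)
    return np.where(new_sites, rng.uniform(0.05, 0.1, w.shape), w)

w = rng.uniform(0.0, 0.2, size=(4, 4))
print(structural_step(w))
```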

6. Glial Cell Involvement

  • Non-neuronal cells, such as astrocytes, contribute to synaptic regulation by modulating neurotransmitter availability and signaling.
  • They can influence long-term changes in synaptic strength.



Biological vs. Backpropagation

| Feature | Biological Neurons | Backpropagation |
| --- | --- | --- |
| Learning rules | Localized (e.g., spike timing, neurotransmitter availability) | Global error signals adjust weights |
| Energy efficiency | Low-power, asynchronous processes | Computationally expensive operations |
| Control | Decentralized; no central error mechanism | Centralized error propagation |

These decentralized mechanisms let the brain adapt in real time, supporting complex behaviors and continuous learning. While backpropagation underpins artificial neural networks, biologically plausible alternatives such as STDP and reinforcement-like learning rules are increasingly influencing neuromorphic computing and AI research.

Spiking Neural Networks

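Spiking neural networks (SNNs) carry information in discrete spikes rather than continuous activations, which makes rules like STDP directly applicable. Their basic building block is commonly the leaky integrate-and-fire (LIF) neuron, sketched below with conventional textbook constants (our choices, not values from this page):

```python
def lif_simulate(input_current, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau_m=10.0, r_m=10.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates input current, and emits a spike (then resets) whenever
    it crosses threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt / tau_m * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:
            spikes.append(t)  # spike time, in integration steps
            v = v_reset
    return spikes

print(lif_simulate([2.0] * 50))  # constant drive yields a regular spike train
```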
