Trend Analysis · Engineering

Neuromorphic Computing: Brain-Inspired Chips for Energy-Efficient AI


By Sean K.S. Shin
This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.

The Question

Training GPT-4 consumed an estimated 50 GWh of electricity, roughly the annual consumption of 5,000 US households. As AI models grow, energy costs become unsustainable. The human brain, by contrast, performs sophisticated perception and reasoning on ~20 watts. Neuromorphic computing aims to bridge this efficiency gap by building hardware that mimics the brain's architecture: event-driven spiking neurons, massive parallelism, co-located memory and computation, and local learning rules. Can neuromorphic chips deliver the energy efficiency of biological brains while matching the accuracy of conventional deep learning?
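To make the scale of that gap concrete, here is a back-of-envelope calculation using only the two figures quoted above (the 50 GWh training estimate and the brain's ~20 W budget); the numbers are illustrative, not measurements:

```python
# Back-of-envelope comparison of the figures quoted in the text.
TRAINING_ENERGY_KWH = 50e6   # 50 GWh estimate for GPT-4 training
BRAIN_POWER_W = 20           # sustained power draw of the human brain

# How long could a 20 W brain run on the same energy budget?
joules = TRAINING_ENERGY_KWH * 3.6e6          # 1 kWh = 3.6e6 J
brain_seconds = joules / BRAIN_POWER_W
brain_years = brain_seconds / (3600 * 24 * 365)
print(f"{brain_years:,.0f} brain-years")      # roughly 285,000 brain-years
```

That is, the quoted training run could power a brain-scale 20 W computer for hundreds of millennia, which is the efficiency gap neuromorphic hardware targets.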

Landscape

Malviya et al. (2024) reviewed the principles and innovations in neuromorphic systems. They identified three key architectural departures from von Neumann computing: (1) event-driven processing: neurons compute only when they receive input spikes, eliminating idle power consumption; (2) in-memory computing: synaptic weights are stored at the computation site (no memory-processor data shuttle); and (3) temporal coding: information is encoded in spike timing, not just amplitude, enabling richer representations with fewer operations.
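Event-driven processing is easiest to see in code. The sketch below is a minimal event-driven leaky integrate-and-fire neuron (hypothetical parameters, not from any of the cited papers): the membrane state is updated only when a spike event arrives, using the analytic exponential decay between events, so no work is done during idle periods:

```python
import math

def lif_event_driven(events, tau=20.0, threshold=1.0):
    """Event-driven leaky integrate-and-fire neuron sketch.
    `events` is a time-sorted list of (time_ms, weight) input spikes;
    the membrane is only touched when an event arrives."""
    v, t_last, out_spikes = 0.0, 0.0, []
    for t, w in events:
        v *= math.exp(-(t - t_last) / tau)  # analytic decay since last event
        v += w                              # integrate the incoming spike
        t_last = t
        if v >= threshold:                  # fire and reset
            out_spikes.append(t)
            v = 0.0
    return out_spikes

# Two closely spaced inputs cross threshold; a lone late one does not.
print(lif_event_driven([(1.0, 0.6), (3.0, 0.6), (80.0, 0.6)]))  # -> [3.0]
```

Note that the loop runs three iterations for three input spikes regardless of how much simulated time passes, which is the source of the idle-power savings described above.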

X. Wang et al. (2025) reviewed memristor-based neuromorphic systems, focusing on threshold-switching memristors (TSMs) that naturally produce spiking dynamics. Memristors (resistive devices whose conductance changes with applied voltage history) physically implement synaptic plasticity, enabling on-chip learning without the weight-update bottleneck of digital implementations. Their review documented sub-picojoule energy consumption per synaptic operation, orders of magnitude below GPU-based equivalents.
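A toy soft-bounds model shows how a memristive synapse can behave like long-term potentiation and depression. This is a common textbook abstraction, not the device physics from the review, and every parameter here (`g_min`, `g_max`, `rate`) is hypothetical:

```python
class MemristiveSynapse:
    """Toy soft-bounds memristor model: conductance moves toward g_max
    under positive voltage pulses (potentiation) and toward g_min under
    negative pulses (depression), with steps that shrink near the bounds."""
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.1):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def pulse(self, v):
        if v > 0:    # potentiation: step shrinks as g approaches g_max
            self.g += self.rate * (self.g_max - self.g)
        elif v < 0:  # depression: step shrinks as g approaches g_min
            self.g -= self.rate * (self.g - self.g_min)
        return self.g

syn = MemristiveSynapse()
for _ in range(5):
    syn.pulse(+1.0)        # five potentiating pulses
print(round(syn.g, 3))     # -> 0.705
```

Because the weight update is a physical state change in the device itself, a real memristor crossbar performs this update in place, with no separate memory read-modify-write cycle.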

On the software side, Béna et al. (2025) demonstrated event-based backpropagation on SpiNNaker2, a major neuromorphic platform developed at TU Dresden in collaboration with the University of Manchester. Traditional backpropagation requires synchronous forward and backward passes, which are incompatible with asynchronous spiking hardware. Their event-based variant processes errors locally and asynchronously, enabling proof-of-concept on-chip training.

Stuck et al. (2025) proposed burst-dependent learning, a biologically plausible alternative to backpropagation in which learning signals are encoded in bursts of spikes rather than in explicit gradient signals. The algorithm achieves competitive accuracy on benchmark tasks while remaining implementable on neuromorphic hardware, without the non-local information backpropagation requires.
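The flavor of such a rule can be sketched in a few lines. The following is a heavily simplified caricature, not the algorithm from Stuck et al.: it assumes a synapse is potentiated when a postsynaptic burst arrives more often than its running average, and depressed otherwise, using only locally available quantities:

```python
def burst_dependent_update(w, pre_active, burst, burst_avg, lr=0.01):
    """Toy burst-dependent rule (a sketch, not the paper's algorithm):
    a postsynaptic burst above its running-average rate potentiates a
    recently active synapse; a lone spike (burst=0) depresses it."""
    if pre_active:
        w += lr * (burst - burst_avg)  # burst is 0 or 1; burst_avg in [0, 1]
    return w

w = 0.5
w = burst_dependent_update(w, pre_active=True, burst=1, burst_avg=0.2)  # potentiate
w = burst_dependent_update(w, pre_active=True, burst=0, burst_avg=0.2)  # depress
print(round(w, 4))  # -> 0.506
```

The key property, shared with the published rule, is locality: the update uses only the presynaptic activity and the postsynaptic burst statistics at that synapse, with no backward pass through the network.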

Key Claims & Evidence

| Claim | Evidence | Verdict |
| --- | --- | --- |
| Neuromorphic hardware achieves orders-of-magnitude energy savings | Memristor synapses operate at sub-pJ per operation (X. Wang et al. 2025) | Supported for inference; training energy comparison is less clear |
| On-chip learning is achievable on spiking hardware | Event-based backpropagation on SpiNNaker2 (Béna et al. 2025); burst-dependent learning (Stuck et al. 2025) | Demonstrated; accuracy gap vs. GPU-trained models persists |
| Memristors can physically implement synaptic plasticity | Conductance changes mimic long-term potentiation/depression (X. Wang et al. 2025) | Supported; device variability and reliability remain challenges |
| Neuromorphic computing suits edge AI deployment | Low power enables always-on sensing and processing (Malviya et al. 2024) | Supported for specific applications (keyword detection, anomaly sensing) |

Open Questions

  • Accuracy gap: SNNs trained on neuromorphic hardware typically achieve 1–5% lower accuracy than equivalent ANNs on standard benchmarks. Can this gap be closed, or is it an inherent trade-off for energy efficiency?
  • Programming model: There is no equivalent of PyTorch or TensorFlow for neuromorphic hardware. Can standardised programming frameworks accelerate adoption beyond hardware specialists?
  • Scaling: Intel's Loihi 2 has ~1 million neurons; the brain has ~86 billion. Can neuromorphic hardware scale by 5 orders of magnitude while maintaining the energy efficiency advantage?
  • Killer application: Where does neuromorphic computing offer not just energy savings but capability advantages over conventional computing? Real-time edge sensing (always-on microphones, radar, tactile) is the current leading candidate.
Referenced Papers

  • [1] Malviya, R.K. et al. (2024). Neuromorphic Computing: Advancing Energy-Efficient AI Systems through Brain-Inspired Architectures. Nanotechnology Perceptions, 20(S14). DOI: 10.62441/nano-ntp.v20is14.99
  • [2] Wang, X., Zhu, Y., Zhou, Z., Chen, X., & Jia, X. (2025). Memristor-Based Spiking Neuromorphic Systems Toward Brain-Inspired Perception and Computing. Nanomaterials, 15(14), 1130. DOI: 10.3390/nano15141130
  • [3] Béna, G., Wunderlich, T., Akl, M., Vogginger, B., Mayr, C., & Gonzalez, H. A. (2025). Event-based backpropagation on the neuromorphic platform SpiNNaker2. 2025 Neuro Inspired Computational Elements (NICE), 1-10. DOI: 10.1109/NICE65350.2025.11065716
  • [4] Stuck, M., Wang, X., & Naud, R. (2025). A burst-dependent algorithm for neuromorphic on-chip learning of spiking neural networks. Neuromorphic Computing and Engineering, 5(1), 014010. DOI: 10.1088/2634-4386/adb511
  • [5] Martis, L., Leone, G., Raffo, L., & Meloni, P. (2025). SYNtzulA: Open-Source Hardware for Energy-Efficient Spiking Neural Network Inference. Proceedings of the 22nd ACM International Conference on Computing Frontiers: Workshops and Special Sessions, 70-73. DOI: 10.1145/3706594.3726979

