Neuromorphic computing is a novel approach to computing. Using tailor-made hardware, it can provide power-efficient AI training. We already covered what neuromorphic computing is and how it works in a previous article.
This article is fairly abstract in the sense that it assumes you know how artificial neural networks work. For a refresher, check out our article on Neural Networks, or our article on recommender engines, one common end goal of NN models.
Neuromorphic computing uses hardware-accelerated Spiking Neural Networks. They promise to provide near-instantaneous event-driven computing. This can increase both the data throughput and power efficiency of systems.
Unlike a classical computer, they don't depend on an internal clock. Input neurons simply propagate a spike train through the network that results in a solution. This makes them challenging to train as well as expensive to simulate on classical CPUs. The whole field is quite young, and there are not many tried-and-proven approaches to implementing, training, and using SNNs.
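To make the spike-train idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the most common neuron models in SNNs. It is a simplified time-stepped simulation in plain Python; all parameter values (threshold, leak time constant, weight) are illustrative and not taken from any system mentioned in this article.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameter values are illustrative, not from any real neuromorphic system.

def simulate_lif(input_spikes, tau=20.0, v_thresh=1.0, v_reset=0.0,
                 weight=0.6, dt=1.0):
    """Integrate a binary input spike train; return output spike times."""
    v = 0.0
    out = []
    for t, s in enumerate(input_spikes):
        v += dt * (-v / tau)   # leak: potential decays toward rest
        v += weight * s        # each incoming spike adds a weighted jump
        if v >= v_thresh:      # crossing the threshold emits a spike
            out.append(t)
            v = v_reset        # membrane potential resets after spiking
    return out

# A rapid burst of input spikes drives the neuron over threshold,
# while with no input the potential simply leaks away.
print(simulate_lif([1, 1, 1, 0, 0, 0, 0, 0, 0, 0]))
```

Note the event-driven character: the neuron only "does" something when spikes arrive, which is exactly the property neuromorphic hardware exploits for power efficiency.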
We will now take a look at the state of neuromorphic computing. Knowing what's available is useful for anyone trying to understand this field. We will be looking at both state-of-the-art hardware implementations of neuromorphic computing and software for simulating and training SNNs. After all, it's quite unlikely that you will develop a real-life solution powered by a neuromorphic computer if you never try one out.
A lot of scientific applications require massive amounts of computing capacity commonly provided by large supercomputers. At the moment there are a few massive supercomputer-scale deployments of neuromorphic computers. Some of the systems below simulate actual brains. Others serve as large-scale neural network & AI computing systems.
SpiNNaker is one of the first large-scale systems developed by the Human Brain Project, a large international partnership supported by the European Union. The completed supercomputer integrates over 1 million cores built from custom 18-core ARM chips. While not natively running SNNs, it is one of the first supercomputers specifically designed for brain-simulation research.
BrainScaleS, the next supercomputer developed by the Human Brain Project, uses custom-made neuromorphic chips. It is able to simulate nearly 4 million neurons with nearly 880 million synapses. Interestingly, both SpiNNaker and BrainScaleS can be accessed by researchers and industry experts, in some cases even for free.
The US Air Force and IBM partnered to build this supercomputer. Its stated goal is to develop an extremely power-efficient supercomputer based on IBM TrueNorth neuromorphic processors.
Neuromorphic computing processors & SoC
Now let's take a look at a few neuromorphic processors. As mentioned in a previous article, neuromorphic computing benefits heavily from specialized hardware. Such hardware might prove useful for anyone attempting to move neuromorphic computers from the lab to real-life applications.
IBM's take on neuromorphic computing. The TrueNorth chip has 1 million neurons and 256 million synapses. It is commonly deployed on 16-chip daughter boards. Estimated power usage is rated between 65 and 100 mW (yes, that "m" is not a typo!).
Intel's contender in the field of neuromorphic computing. Loihi is optimized for a variety of neural network workloads, which makes it interesting for AI research in general, not only for neuromorphic computing research. For SNNs, it is capable of simulating 130,000 neurons on a single chip.
Another EU-supported initiative that developed a neuromorphic system-on-chip. Sadly, as it is more of a research effort, there isn't much "hard data" on the chip outside the EU project report.
Akida is the commercial product of BrainChip, an Australian company specializing in AI hardware. It provides 1.2 million neurons and 10 billion synapses. Akida can be used as a co-processor or as an embedded system-on-chip integrated directly into electronics such as smartphones.
Spiking Neural Networks simulation and training software
In the end, Spiking Neural Networks are still just neural networks, and as such they can be computed on regular computers. As with most neural network software, the high-level logic is commonly implemented through Python modules. Below you can find a few examples of software that simulates and trains SNNs. After all, what good is cutting-edge hardware if you have no clue how to train such a network?
- Norse – Builds on top of PyTorch primitives to provide Spiking Neural Network components that can be used directly in your training algorithms.
- BindsNet – Also built on top of PyTorch. BindsNet strives to assist the implementation of biologically inspired machine learning and neural network algorithms.
- Brian simulator – Another Python-based spiking neural network simulator.
- Brian2 – The second iteration of the Brian simulator.
- SNN simulator – A Spiking Neural Network simulator based on Spike-Timing-Dependent Plasticity, coded from the ground up to maximize performance and power efficiency.
- PySNN – a project that provides low-level objects that can easily be extended for different custom neurons for SNNs.
- PyTorch example – We mentioned "built on top of PyTorch" quite a number of times here. At this link, you can find a pure PyTorch implementation of an SNN.
- sPyNNaker – This project holds code for the SpiNNaker 18-core custom ARM chips, but as part of its software it also offers utilities that help run SNNs on regular CPUs and GPUs.
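To illustrate the Spike-Timing-Dependent Plasticity (STDP) rule mentioned above, here is a minimal pair-based STDP sketch in plain Python. It is a generic textbook form of the rule, not the implementation of any of the libraries listed, and the learning rates and time constant are illustrative.

```python
import math

# Pair-based STDP sketch: a synapse strengthens when the presynaptic spike
# precedes the postsynaptic spike, and weakens when it follows it.
# Learning rates and the time constant are illustrative values.

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post: potentiation (strengthen)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fired before pre: depression (weaken)
        return -a_minus * math.exp(dt / tau)
    return 0.0

# Causal pairing strengthens the synapse, anti-causal pairing weakens it.
w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)  # pre fires 5 ms before post
w += stdp_dw(t_pre=40.0, t_post=32.0)  # post fires 8 ms before pre
print(round(w, 4))
```

Rules like this are one reason SNN training differs from classical backpropagation: learning is driven by the relative timing of local spike events rather than by a global gradient signal.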