Neuromorphic computing – a promising branch of computing

Neuromorphic computing uses hardware-accelerated Spiking Neural Networks to achieve power-efficient AI. Spiking Neural Networks, or SNNs, are a class of Artificial Neural Networks (ANNs) that more closely resemble actual brain neurons. Neuromorphic computing simply runs SNNs on specialized hardware, which saves both the time and the power that conventional systems spend computing them.

This article is quite abstract in the sense that it expects you to know how artificial neural networks work. For a refresher, check out our article on Neural Networks, or our article on recommender engines, one common end goal of NN models.

What are Spiking Neural Networks?

SNNs were originally models of biological information processing. They were built to better understand how brains work, not intended as training models for data science. But as with most promising academic pursuits, SNNs have found some practical use in data science.

In a “simple” neural network you have layers of neurons; each takes input from the previous layer and passes its output forward if the value is higher than a “threshold”. You then do many runs to tweak each neuron until you reach your desired accuracy. Training approaches come down to choosing an algorithm for calculating values and thresholds and for updating them after each run. This description is oversimplified, but it’s enough for our comparison.
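
To make the comparison concrete, here is a minimal sketch of such a classical layer in plain NumPy. Everything here (the sizes, the weights, and the ReLU standing in for a “threshold”) is illustrative, not taken from any particular library.

```python
# A minimal sketch of one classical feed-forward layer.
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, weights, bias):
    """One classical layer: weighted sum of all inputs, then an activation.
    Every neuron computes on every input, every single pass."""
    z = weights @ x + bias
    return np.maximum(z, 0.0)  # ReLU: a smooth stand-in for a "threshold"

x = rng.random(4)                 # input vector
w = rng.standard_normal((3, 4))   # 3 neurons, each reading all 4 inputs
b = np.zeros(3)
print(dense_layer(x, w, b))       # every neuron produced a value
```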

Besides the threshold, SNNs also take into account the temporal pattern of their input before firing. Each neuron is connected to multiple input neurons, but it fires only if enough of them passed input within a short timeframe. Such neurons don’t need a centralized clock the way classical CPUs do: a neuron only cares about the input it received in the last few moments, and based on that it decides whether to fire or not.

This neuron activation behavior is why we call them Spiking Neural Networks. In a sense, the simulated neurons behave similarly to how neurons fire in actual brains. For data science, you can think of this as another depth layer that the learning algorithm can tweak.
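
To make the temporal part concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one common SNN neuron model. The time constant, threshold, and input values are made-up illustration numbers.

```python
# A minimal leaky integrate-and-fire (LIF) neuron.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=10.0, threshold=1.0):
    """Integrate weighted input over time; fire when the membrane potential
    crosses the threshold, then reset. Between inputs the potential leaks
    away, so only inputs arriving close together in time can push the
    neuron over the threshold."""
    v = 0.0
    output_spikes = []
    for current in input_current:        # summed input at each time step
        v = v * np.exp(-dt / tau) + current  # leak, then integrate
        if v >= threshold:
            output_spikes.append(1)      # spike!
            v = 0.0                      # reset after firing
        else:
            output_spikes.append(0)
    return output_spikes

# Three inputs arriving close together fire the neuron;
# the same inputs spread out in time leak away and never do.
print(lif_neuron([0.4, 0.4, 0.4, 0, 0, 0]))      # -> [0, 0, 1, 0, 0, 0]
print(lif_neuron([0.4, 0, 0, 0.4, 0, 0, 0.4]))   # -> all zeros, no spike
```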

Brain neurons fire similar spike trains to simulated neurons in neuromorphic computing

What makes neuromorphic computing so compelling

The main motivation behind neuromorphic computing is its potential for near-instantaneous processing of event-based data, much like your brain continuously monitors all your senses but only occasionally decides to actually pay attention to something it senses.

As previously mentioned, each neuron decides to spike based on what inputs it received. This means that only a fraction of the neural network activates (and computes) for any given input. This gives SNNs additional flexibility, because they start giving partially accurate results as soon as they receive data, and the output simply gets more accurate as more data flows in.
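
A toy illustration of this “anytime” behavior: read out a classification by counting output spikes per class as they arrive. The spike streams below are invented, but they show how a rough answer exists after the first few events and sharpens as more spikes come in.

```python
# Toy "anytime" readout: the running guess is available at every time step.
spikes = {
    "cat":     [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],  # invented output spike trains
    "not_cat": [0, 1, 0, 0, 1, 0, 0, 0, 0, 0],
}

counts = {label: 0 for label in spikes}
for t in range(10):
    for label, train in spikes.items():
        counts[label] += train[t]           # accumulate spikes as they arrive
    guess = max(counts, key=counts.get)     # current best guess, right now
    print(f"t={t}: counts={counts} -> current guess: {guess}")
```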

This behavior is critical for low-latency applications such as machine vision, robotics, self-driving cars, sound analysis, real-time control, and similar computational and AI problems. The human equivalent is being able to pull your hand off a hot stove before the full pain kicks in and before you fully understand what happened. It may not be the most informed decision of your life, but as in many other cases, waiting for more information would only make everything worse.

Why create specialized neuromorphic computing hardware

The traditional way of explaining this mentions the sequential nature of the von Neumann computer architecture and its separation of memory and computing. That explanation sounds simple, but in reality it requires taking a few CPU design courses to understand.

Hardware vs simulation

The simple explanation is that simulating spiking neural networks in software is much more compute-intensive than running them on specialized hardware. Neuromorphic computing requires a drastically different design compared to classical computers, which makes the savings from specialized hardware considerably bigger than for systems that share more characteristics with classical computing.

In essence, each neuron is individually implemented in hardware. Each one performs a few relatively trivial calculations based on the input it receives, and each neuron’s “working memory” is basically just what it needs to perform those calculations. This removes the inefficiency of having a CPU wait to retrieve data from RAM or storage.

Power savings of neuromorphic computing

As previously mentioned, neurons don’t do anything unless they fire, and they fire based on electrical input. This means they use little to no energy while idle, which is part of the energy-saving selling point of neuromorphic computing. The main saving, though, comes from the fact that only a fraction of the neurons fire and need to be computed, compared to classical neural networks, where all nodes have to be processed before the final output is generated.
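
A back-of-the-envelope sketch of where that saving comes from, using made-up numbers: a dense ANN touches every connection on every pass, while an SNN only computes where spikes actually occur.

```python
# Rough operation-count comparison; all numbers are illustrative assumptions.
connections = 1_000_000
ann_ops = connections                  # dense: every weight used every pass

spike_fraction = 0.02                  # assume ~2% of neurons fire per input
snn_ops = int(connections * spike_fraction)

print(f"ANN operations per input: {ann_ops:,}")
print(f"SNN operations per input: {snn_ops:,}")
print(f"Reduction: {ann_ops / snn_ops:.0f}x fewer operations")
```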

This may not sound like much, but it can prove significant in ultra-low-power integrated solutions. Using neuromorphic sensors that feed into neuromorphic computers, you would be able to process a lot of data and extract information near-instantaneously, all at much lower power than would be possible with traditional computers. Even supercomputer-sized brain simulations like BrainScaleS, with 4 million neurons, save power because neurons consume little when not in use.

The current state of neuromorphic computing research

Let’s start with the obvious: the field is in a very early research stage, and as such it’s not ready to leave the lab. It shows progress, but SNNs can’t yet match the accuracy of more traditional ANNs and machine learning solutions. This is mainly because training algorithms for SNN models still need to improve. And since SNNs are drastically different from traditional neural networks, existing benchmarks are not well suited for measuring SNN accuracy.

Inference with SNNs is also considerably different from that of traditional neural networks. Since neuromorphic computing works in “spike trains”, both input and output are series of neurons firing. This means you either need to create systems that generate and receive such input, or you have to convert it to something the receiving system can understand (e.g., category classification of images). Much more work needs to be done before SNN inference becomes easy to implement efficiently.
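
One simple conversion scheme is rate coding: an ordinary value (say, a pixel intensity) becomes the per-step probability of emitting a spike, and counting spikes approximately recovers the value. A minimal sketch, with arbitrary illustration values:

```python
# Rate coding: values in [0, 1] <-> spike trains, via firing probability.
import numpy as np

rng = np.random.default_rng(42)

def rate_encode(values, steps=20):
    """Each value becomes the per-step probability of emitting a spike,
    producing a binary spike train of length `steps` per value."""
    values = np.asarray(values)
    return (rng.random((steps, values.size)) < values).astype(int)

def rate_decode(spike_trains):
    """Recover an approximate value per input by counting spikes."""
    return spike_trains.mean(axis=0)

pixels = [0.9, 0.5, 0.1]       # three "pixel intensities"
trains = rate_encode(pixels)
print(trains.T)                # one spike train per pixel
print(rate_decode(trains))     # roughly recovers [0.9, 0.5, 0.1]
```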

Conclusion

This is one of the technologies that could work alongside classical computing or replace it in some applications. It’s part of the wider pattern where classical computing is hitting a hard limit: simply cramming more transistors onto a chip is no longer a sure way of increasing computer performance. It’s quite likely that we will soon see major breakthroughs in neuromorphic computing as well as real-world applications of its concepts.

Neuromorphic computing is a promising technology that could change the landscape of future computing. This is why big players such as the European Union, Intel, Qualcomm, Samsung, and IBM are pouring massive resources into the field. Stay tuned for part 2, where we will look at current software and hardware implementations of neuromorphic computers. It will be a nice starting point for getting your hands on a field with an admittedly very cool name.

Cat-egorize this picture: Is it a Cat or a Kitten?
Photo by Amir Ghoorchiani on Pexels.com

Like, think about it. Without knowing what I know, if somebody in a bar told me “I work in neuromorphic computing”, I would auto-assume that person is smart and works in a cool field. When in reality it’s just teaching a computer to figure out whether there’s a cat in the picture or not. No wait, getting paid for doing that still sounds very cool.
