Sunday, May 19, 2024

Why Spiking Neural Networks Are the Next Leap in AI

The data and infrastructure supporting artificial intelligence development have come a long way in the last decade: the Tensor Processing Unit (TPU) was developed, and datasets and tooling were standardized. But bringing AI into the real world runs into a fundamental problem: electricity. Not political power, but power consumption. GPUs and TPUs are notoriously heavy consumers; even Google's edge-deployable TPU SoM is a hungry one. If we want AI to explore the world the way we do, it cannot stay tethered to a farm-sized server installation. That infrastructure is expensive to operate and maintain, and those costs concentrate the largest AI models in the hands of a few big players.

Why do they eat so much? Their power consumption stems from a fundamental flaw shared by virtually all computers: the von Neumann bottleneck. From microcontrollers to Intel processors, today's computers follow the same principles laid out in the von Neumann architecture, and that architecture has one major drawback: memory is always separate from the processing unit. Faster memory sits closer to the processor (cache > RAM > HDD) to limit latency. Yes, the processor crunches numbers, but much of the time it is simply trying to remember where the data was put, and if it isn't cached, that's a shame: the processor sends a memory request and stalls until the data arrives. Many valuable CPU cycles are wasted loading data from slow memory. This is the von Neumann bottleneck, and von Neumann himself knew about it.
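The bottleneck can be felt even from high-level code. The sketch below (my own illustration, not from the article) sums the same NumPy array twice: once sequentially, where the hardware prefetcher keeps the caches fed, and once gathered through a random permutation, where most loads miss the cache and the CPU stalls on main memory.

```python
import time
import numpy as np

N = 10_000_000
data = np.arange(N, dtype=np.float64)

# Sequential access: the prefetcher streams data from RAM into cache.
t0 = time.perf_counter()
seq_sum = data.sum()
t_seq = time.perf_counter() - t0

# Random-order access: each load is likely a cache miss, so the CPU
# waits on main memory -- the von Neumann bottleneck in miniature.
idx = np.random.permutation(N)
t0 = time.perf_counter()
rand_sum = data[idx].sum()
t_rand = time.perf_counter() - t0

# The two sums are (numerically) the same; only the access pattern differs.
print(f"sequential: {t_seq:.3f}s, random-order: {t_rand:.3f}s")
```

On typical hardware the random-order pass is noticeably slower even though it performs the same arithmetic, which is exactly the point: the work is cheap, the data movement is not.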

The brain is not like that.

An architecture particularly suited to artificial intelligence, and a long-standing dream of neuroscientists and computer engineers, is the neuromorphic processor (NMP). As you have probably guessed, the NMP is inspired by the brain. A CPU keeps its memory separate; neurons do not. The brain's "memory" lives in every synapse, so it is distributed and sits right next to the processing unit, the neuron. That is why organizations large and small, including IBM, Intel, Qualcomm, and the US Department of Defense, have been quietly developing this technology while no one was paying attention. Quantum computing may well redefine computing one day, but the NMP is just around the corner. Better yet, an artificial neural network (ANN) can be converted into a spiking neural network (SNN) that runs on an NMP.
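As a rough sketch of what "spiking" means, here is a minimal leaky integrate-and-fire neuron (my own toy model; the names `leak` and `threshold` are illustrative, not taken from any particular SNN framework). The neuron accumulates input current, leaks charge every step, and fires a spike when its membrane potential crosses a threshold; a larger input produces a higher spike rate, which is roughly how a continuous ANN activation gets mapped onto an SNN neuron.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, steps=100):
    """Leaky integrate-and-fire: the membrane potential v decays by
    `leak` each step, accumulates the input, and emits a spike
    (resetting v to 0) whenever it reaches `threshold`."""
    v = 0.0
    spikes = []
    for _ in range(steps):
        v = leak * v + input_current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A stronger input yields a higher spike rate; a very weak input whose
# steady-state potential stays below threshold never fires at all.
print(sum(lif_neuron(0.5)), sum(lif_neuron(0.15)), sum(lif_neuron(0.05)))
```

Note the energy angle: between spikes the neuron is doing essentially nothing, and neuromorphic hardware exploits exactly that sparsity, only consuming power when spikes actually occur.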

The figure below, from Blouw et al., compares a GPU, a CPU, and Intel's Loihi NMP on the energy required to process a single sample.


But doesn't the GPU have an advantage when it processes many samples at the same time? Indeed, we can look at the batched results.

So to get close to the energy efficiency of Intel's NMP, Loihi, you need to batch 64 samples on the GPU. If, for example, you are processing a standard 24 fps video, that means waiting roughly 2.7 seconds to accumulate enough frames just to reduce power consumption. Such is life…
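The arithmetic behind that waiting time is a one-liner (the batch size and frame rate are the figures quoted above):

```python
batch_size = 64     # samples per batch for the GPU to approach Loihi's energy per inference
frame_rate = 24     # frames per second of a standard video stream

# Seconds of buffering before a full batch is available to the GPU.
latency = batch_size / frame_rate
print(f"{latency:.2f} s")  # prints "2.67 s"
```

A neuromorphic chip processing one frame as it arrives pays no such buffering penalty, which is the whole point of the comparison.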


Uneeb Khan
Uneeb Khan is the CEO at blogili.com, with 4 years of experience in the websites field. He covers technology, telecom, business, auto news, and game reviews.
