Neuromorphic Computing — The future of AI?
As nascent technologies mature and computing becomes pervasive in society and ubiquitous in our lives, the desire for embedded-everywhere, human-centric computational intelligence calls for a new computing paradigm. Conventional computers are all based on the classic von Neumann architecture, which is reaching its limits; a major design transformation that can bypass this technology threshold and improve performance has become a necessity.
With the progressing trend of AI and neural networks, hardware support for computation must keep pace. Our present systems rely on traditional hardware, i.e. CPUs and GPUs, and the underlying problems with this hardware become a major challenge for running neural networks efficiently.
CPUs can multitask and perform complex operations at high speed, but the distributed nature of neural networks makes them computationally cumbersome to run on classic processors. GPUs offer massive parallelism and are especially good at matrix multiplication, the core operation of neural networks, yet the mismatch between the physical structure of GPUs and the nature of neural networks still causes inefficiency and expensive power consumption.
Neuromorphic computing chips are inspired by the working mechanism of the human brain: they offer massive parallel computing power, closely coupled memory and compute, and better power efficiency.
Neuromorphic chips are physically structured like a neural network: they contain many processing units, each acting like a single neuron and capable of performing the functions of one.
Neuromorphic computing encompasses any electrical device that mimics the natural biological structures of our nervous system. The goal is to impart cognitive abilities to a machine by implementing neurons in silicon. Owing to its much better energy efficiency and parallelism, it is being considered as an alternative to conventional architectures and energy-hungry GPUs.
Cognitive computing views the brain as a computer and thinking as the execution of algorithms. Memory is a container that holds data.
Neuromorphic algorithms, by contrast, emphasize the temporal interaction between processing and memory:
- Every message (spike) has a timestamp, explicit or implicit.
- Computation is largely event-driven: a neuron need not be updated at every time step, and only the neurons that are active draw power.
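The event-driven style described above can be sketched in ordinary software as a priority queue of timestamped spikes: a neuron is only touched when a spike addressed to it arrives, and idle neurons cost nothing. This is a minimal illustration under simplified assumptions (per-step decay, unit transmission delay); the names `run_events`, `weights`, and so on are illustrative, not any chip's real API.

```python
import heapq

def run_events(spikes, weights, threshold=1.0, decay=0.9):
    """Event-driven spiking simulation sketch.

    spikes : list of (timestamp, neuron, value) input events
    weights: dict mapping a neuron to its fan-out [(target, weight), ...]
    Neurons are only updated when an event for them is popped, mirroring
    the 'only active neurons require power' property.
    """
    potentials = {}              # membrane potential, created lazily
    last_time = {}               # last update time per neuron
    queue = list(spikes)
    heapq.heapify(queue)         # process events in timestamp order
    fired = []
    while queue:
        t, n, v = heapq.heappop(queue)
        # decay the stored potential for the elapsed time, then integrate
        dt = t - last_time.get(n, t)
        p = potentials.get(n, 0.0) * (decay ** dt) + v
        last_time[n] = t
        if p >= threshold:
            fired.append((t, n))
            p = 0.0              # reset after firing
            # emit spikes to downstream neurons one time step later
            for target, w in weights.get(n, []):
                heapq.heappush(queue, (t + 1, target, w))
        potentials[n] = p
    return fired
```

For example, `run_events([(0, "a", 1.2)], {"a": [("b", 1.5)]})` makes neuron `a` fire at t=0, which in turn drives `b` over threshold at t=1; no other neuron is ever visited.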
One of the most essential features of a neuromorphic chip is the connections between neurons, known as synapses. Through these synapses, a neuron combines the outputs of many other neurons (up to thousands), via addition or subtraction, to generate its own output. The strengths and polarities of these synaptic connections constitute memories, which may change over time owing to patterns of use, enabling adaptation of both memory and logic. This makes neuron circuits particularly good at parallel pattern matching and memory retrieval.
The networks that run on neuromorphic chips are called spiking neural networks (SNNs). Their neurons are connected through synapses and, together with the neuromorphic hardware, they function much like the human brain.
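A single spiking neuron of the kind just described can be sketched as a leaky integrate-and-fire update: the membrane potential decays, weighted inputs are added (positive weights excitatory, negative inhibitory, with the weights themselves acting as the memory), and a spike is emitted when a threshold is crossed. This is a deliberately simplified model; the function name and parameters are illustrative assumptions, not a specific chip's neuron model.

```python
def lif_step(v, inputs, weights, threshold=1.0, leak=0.8):
    """One discrete-time update of a leaky integrate-and-fire neuron.

    v       : current membrane potential
    inputs  : 0/1 spikes from presynaptic neurons this step
    weights : synaptic weights (sign = excitatory/inhibitory)
    Returns (new_potential, spike) where spike is 1 if the neuron fired.
    """
    # leak the stored potential, then integrate the weighted spikes
    v = leak * v + sum(w * s for w, s in zip(weights, inputs))
    if v >= threshold:
        return 0.0, 1   # fire and reset
    return v, 0
```

For instance, two simultaneous input spikes through weights 0.6 and 0.7 push the potential to 1.3, over the default threshold, so the neuron fires and resets; a single spike leaves it sub-threshold, silently accumulating.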
The Tianjic chip
This neuromorphic chip contains 40,000 neurons and 10 million synapses. Used in a self-driving bicycle, it performed 160 times better and 120,000 times more efficiently than a comparable GPU.
IBM’s TrueNorth — The Hercules of Transistor Count
It has 4,096 cores and 5.4 billion transistors, making it IBM's largest chip by transistor count. It is 10,000 times more energy-efficient than conventional microprocessors and only draws power when necessary.
MIT’s Brain-on-a-Chip
A chip built from silicon germanium that mimics the “more than 100 trillion synapses that mediate neuron signaling in the brain”. It could be used in humanoid robots and autonomous-driving technology.
Intel’s Loihi chips and Pohoiki Beach computers
Loihi chips have 130 million synapses and 131,000 neurons per chip and are optimized for spiking neural networks. Pohoiki Beach packs 8.3 million neurons and delivers 1,000x better performance while being 10,000x more energy-efficient than equivalent GPUs.
Qualcomm’s Zeroth processors
Working toward three main goals of “biologically inspired learning”, Qualcomm is developing a new computer architecture that breaks the traditional mold.
Revolution for AI
Neuromorphic computing is the 5th generation of AI. 1st-generation AI defined rules and followed classical logic; 2nd-generation AI used deep-learning networks; 3rd-generation AI interpreted and adapted like human thought; and 4th-generation AI mixed different machine-learning algorithms with other forms of AI to achieve its goal.
The small size and very low power consumption of neuromorphic chips make them well suited to running AI algorithms. They are therefore expected to greatly accelerate the efficiency of artificial-intelligence (AI) applications, shaping the future of AI and neural networks.
“These new kind of chips should increase dramatically the use of machine learning, enabling applications to consume less power and at the same time become more responsive.” — Deloitte market analysis
- Accelerating the computational power of neural-network tasks (processing multiple facts and learning tasks and patterns at high speed) by several orders of magnitude (nearly 10,000x) compared with traditional hardware.
- Consuming far less power (up to 1,000 times less) and improving resource utilization for artificial-intelligence tasks.
- Much like a human, they’d be capable of adapting to their environment.
Potential applications of this technology in the field of AI:
- Driverless Cars and Bikes
- Smart home electronic devices
- Data Analytics
- Process Optimization
- Real-time image processing for use in police cameras
With neuromorphic computing for AI, the future seems to be bright.