Deep learning is “not actually learning,” said Mike Davies, head of Intel’s neuromorphic computing unit, at this year’s International Solid-State Circuits Conference. His words rekindled the neuromorphic debate across the AI community at a time when data-bandwidth constraints and computational requirements are escalating.
2024 is expected to be the start of the neuromorphic revolution, according to a recently published report by Yole Développement (Lyon, France).
In an interview with EE Times Europe, Pierre Cambou, Principal Analyst for Imaging at Yole, explained that neuromorphic sensing and computing could solve most of AI’s current issues while opening new application perspectives in the next decades. “Neuromorphic engineering is the next step towards biomimicry and drives the progress towards AI.” With its research and innovation ecosystem, Europe is well-positioned to foster its scientific culture and embrace the technological opportunity.
BEFORE AND AFTER 2012
The neuromorphic trend started in the late 1980s when Carver Mead, an electrical engineer at the California Institute of Technology, introduced the concept of neuromorphic engineering. In the next decade, however, researchers experienced little practical success in building machines with brainlike ability to learn and adapt. Hope resurged when Georgia Tech presented its field programmable neural array in 2006 and MIT researchers unveiled a computer chip that mimics how the brain’s neurons adapt in response to new information in 2011.
The turning point was the 2012 publication of the paper “ImageNet Classification with Deep Convolutional Neural Networks” by a group of scientists from the University of Toronto. The AlexNet architecture, an eight-layer convolutional neural network, made it possible to classify the 1.2 million high-resolution images of the ImageNet contest into one of 1,000 categories (e.g., cats, dogs). “It is only with the development of AlexNet that the deep learning approach proved to be more powerful and started to gain momentum in the AI space.”
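The arithmetic behind that eight-layer claim (five convolutional stages plus three fully connected layers) can be retraced in a short Python sketch. This is a hypothetical illustration of the layer geometry described in the 2012 paper, not the paper’s actual code; the helper names are invented for clarity:

```python
# Trace feature-map sizes through AlexNet's five convolutional stages,
# from a 227x227 RGB input down to the features feeding the three
# fully connected layers and the 1,000-way ImageNet classifier.
# Layer parameters follow the 2012 paper; names here are illustrative.

def out_size(n, kernel, stride=1, pad=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * pad - kernel) // stride + 1

size, channels = 227, 3
# (kind, out_channels, kernel, stride, pad) -- pools keep the channel count
layers = [
    ("conv", 96, 11, 4, 0), ("pool", 96, 3, 2, 0),
    ("conv", 256, 5, 1, 2), ("pool", 256, 3, 2, 0),
    ("conv", 384, 3, 1, 1),
    ("conv", 384, 3, 1, 1),
    ("conv", 256, 3, 1, 1), ("pool", 256, 3, 2, 0),
]
for kind, ch, k, s, p in layers:
    size, channels = out_size(size, k, s, p), ch

flat = size * size * channels          # 6 * 6 * 256 = 9,216 features
fc = [4096, 4096, 1000]                # the three fully connected layers
print(flat, fc[-1])                    # 9216 1000
```

Running the sketch confirms the familiar numbers: the convolutional stack reduces the image to a 6×6×256 feature map, which the fully connected layers map down to the 1,000 ImageNet categories.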
Most current deep learning implementation techniques rely on Moore’s Law, and “it works fine.” But as deep learning evolves, there will be more and more demand for chips that can handle heavy computational workloads. Moore’s Law has been slowing down lately, and that slowdown has led many in the industry, including Yole Développement, to believe it won’t be able to sustain deep learning’s progress. Cambou is among those who believe deep learning “will fail” if it continues to be implemented the way it is today.
Cambou called for a disruptive approach that utilizes the benefits derived from emerging memory technologies and improves data bandwidth and power efficiency. That is the neuromorphic approach. “The AI story will keep on moving forward, and we believe the next step is in the neuromorphic direction.”
Recent years have seen considerable effort to build neuromorphic hardware that conveys cognitive abilities by implementing neurons in silicon. For Cambou, this is the way to go, as “the neuromorphic approach is ticking all the right boxes” and allows far greater efficiencies. “Hardware has enabled neural networks and deep learning, and we believe it will enable the next step in neuromorphic AI. Then we can dream again about AI and dream about AI-based applications.”
Neuromorphic hardware is moving out of the research lab with a convergence of interests and goals from the sensing, computing and memory fields. Joint ventures are being formed, strategic alliances are being signed, and decade-long research initiatives such as the European Union’s Human Brain Project and Blue Brain Project are being launched.
“The geographic mapping is extremely interesting,” said Cambou. “The image sensing efforts come mostly from Europe and Israel, the memory ecosystem is almost 100% in the U.S., while the computing ecosystem is more balanced between Asia, Europe and the United States.”
A neuromorphic sensing ecosystem has indeed emerged, with roots in the silicon retina invented in 1991 by Carver Mead’s student Misha Mahowald, whose work carried over to the Institute of Neuroinformatics at ETH Zurich. Insightness was founded in 2014 as a spinoff of ETH Zurich and the University of Zurich; it designs vision sensors that allow motion detection within milliseconds, even when the sensor itself is moving. Other ETH Zurich spinoffs include iniLabs, a developer of event-driven cameras, and iniVation, a developer of neuromorphic event-driven vision systems. Similarly, Prophesee, formerly known as Chronocam, grew out of breakthrough research on the human brain and eye conducted over the past twenty years by France’s Vision Institute (CNRS, UPMC, INSERM).
Besides sensing, the computing scene is large and diverse, with prominent players like Samsung, Intel and SK Hynix. Among emerging and well-established startups, Cambou cited General Vision, founded by French expatriate Guy Paillet, whose pattern-classification technology underpins the neural network in Intel’s Quark SE; Brainchip, whose Akida neuromorphic system-on-chip comprises no fewer than 1.2 million neurons and 10 billion synapses and has its origins in Toulouse; and two Paris-headquartered companies: AnotherBrain, which recently raised another €19 million in funding to accelerate development of its software-based Organic AI chip, and GrAI Matter Labs, a developer of ultra-low-power, fully programmable neuromorphic computing. On the European scene, he cited aiCTX, a Zurich-based neuromorphic computing company, and on the global scene Nepes (Seoul, South Korea), Vicarious (San Francisco, Calif.) and Celepixel (Shanghai, China).
Neuromorphic computing efforts also come from memory players like U.S.-based Micron, Western Digital and Korea’s SK Hynix, but many are seeking shorter-term revenues and ultimately may not become strong players in neuromorphic research. “We should look at small players that have chosen neuromorphic as their core technology,” Cambou said.
Disruptive memory startups such as U.S.-based Robosensing, Memry, Symetrix and Knowm and France-based Weebit Nano are combining non-volatile memory technology with neuromorphic computing chip designs. They have emerged alongside pure-play memory startups such as Crossbar and Adesto, but their memristor (memory resistor) approach is often perceived as more long-term than efforts from pure-play computing companies. “A lot of memory players are working on RRAM and phase-change memories to mimic the synapse,” said Cambou. Also, “the MRAM [magnetoresistive random access memory] is part of the emerging memories that will help the neuromorphic approach to succeed.”
“I would not say that Europe has more strength than the competition, but it is known to everyone that the AI research centers are good,” said Cambou. “There is good hope that a lot of strength on neuromorphic research comes from Europe.”
STRONG MARKET POTENTIAL
Not only do neuromorphic sensing and computing make sense from a technology point of view, but also from a market point of view.
No significant business is expected before 2024, but the value of the opportunity could be big for decades after that. According to Yole, if all technical questions are solved in the next few years, the neuromorphic sensing market could grow from $43 million in 2024 to $2 billion in 2029 and $4.7 billion in 2034. Even better, the neuromorphic computing market could rise from $69 million in 2024 to $5 billion in 2029 and $21.3 billion in 2034.
JUST LIKE 3D
“Automotive is probably the most obvious market,” said Cambou. Within the next decade, the availability of hybrid in-memory computing chips should unlock the automotive market, which is desperately awaiting a mass-market autonomous driving technology.
In the short term, neuromorphic sensing and computing will be used for mobile and consumer applications as well as fault detection and always-on monitoring of industrial machines. It will also play a major role in logistics, food automation and agriculture.
“There are similarities with 3D,” noted Cambou. At the beginning of this decade, “3D was a hard topic, but now it has really entered our lives, either in our iPhone or for payments in China or for ATMs in Japan. 3D is now a ubiquitous technology, and we can’t think of an autonomous car without a Lidar. The same scenario could apply to the neuromorphic approach.”
Today’s computers are unable to process real-time information, but neuromorphic hardware has the ability to understand time. And “while deep learning needs huge data sets, neuromorphic learns extremely quickly from only a few images or a few words. We live in a world of interactions; neuromorphic will be very strong in giving computers the understanding of unstructured environments.”
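How spiking hardware can “understand time” can be illustrated with a minimal leaky integrate-and-fire neuron, the basic building block many neuromorphic chips implement in silicon. This is a toy Python sketch with made-up parameters, not any vendor’s API: the neuron’s output depends on when input spikes arrive, not just how many there are.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# decays each timestep and jumps on each input spike, so timing matters.
# The leak, weight and threshold values below are illustrative only.

def lif_spikes(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    """Return output spike times; input_spikes is a 0/1 train per timestep."""
    v, out = 0.0, []
    for t, s in enumerate(input_spikes):
        v = leak * v + weight * s      # decay, then integrate the input
        if v >= threshold:             # fire and reset on threshold crossing
            out.append(t)
            v = 0.0
    return out

# Three closely spaced spikes push the neuron over threshold;
# the same three spikes spread far apart leak away and never fire.
burst  = [1, 1, 1, 0, 0, 0, 0, 0, 0]
sparse = [1, 0, 0, 0, 1, 0, 0, 0, 1]
print(lif_spikes(burst), lif_spikes(sparse))   # [2] []
```

The two inputs contain the same number of spikes, yet only the tightly timed burst produces an output, which is exactly the kind of temporal sensitivity conventional frame-based processing lacks.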
Anne-Françoise Pelé is a Correspondent for EE Times and serves as Editor-in-chief for EE Times Europe.