GrAI Matter Labs (GML), a Paris-based startup, has announced its first artificial intelligence (AI) processor for ultra-low latency, low-power processing at the edge, which it expects to start sampling this quarter.
The GrAI One chip, based on the company's NeuronFlow technology, drastically reduces application latency, according to GML; it brings end-to-end latencies for deep learning networks such as PilotNet down to the order of microseconds. For autonomous navigation this latency is around 20µs, while for keyword recognition latency comes down to around 10µs and for hand gesture recognition to around 1µs.
Aimed at response-critical edge applications in the autonomous navigation, human-machine interaction and smart healthcare markets, GML's NeuronFlow uses in-memory compute with a mesh of cores and local neuron/synapse memories, avoiding the memory bottleneck of traditional Von Neumann architectures. It is based on a fully digital design with packet-switched connectivity and sparsely connected, event-based neural networks to allow scalable implementations across market segments. The company said it combines a dynamic dataflow with neuromorphic computing to produce massively parallel in-network processing.
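GML has not published NeuronFlow's programming model, but the event-based principle behind it can be sketched generically: rather than recomputing every neuron on every frame, an event-driven layer propagates only the inputs that changed. The code below is a minimal illustration of that idea (not GML's API); all function names are hypothetical.

```python
import numpy as np

def dense_step(weights, activations):
    # Conventional frame-based layer: every output neuron is
    # recomputed on every tick, whether or not its inputs changed.
    return weights @ activations

def event_driven_step(weights, prev_out, events):
    # Event-driven update: only changed inputs ("events", given as
    # (input_index, delta) pairs) are propagated. For sparse change,
    # work scales with the number of events, not the layer size.
    out = prev_out.copy()
    for idx, delta in events:
        out += weights[:, idx] * delta
    return out
```

Because a layer's output is linear in its inputs, applying only the deltas reproduces the full dense result; when few inputs change between ticks, that translates into proportionally less compute, which is the source of the latency and power savings claimed for sparse, event-based networks.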
The fully digital chip measures 20mm² in TSMC 28nm technology and implements a mesh of 196 neuron cores with local neuron/synapse memories, for a total of 200,000 neurons. It provides a GPIO interface to offload latency-critical AI workloads from a host processor. At 100% neuron core utilization, GrAI One consumes as little as 35mW, according to the company.
CEO Ingolf Held said, “GrAI One processes edge AI applications orders of magnitude faster than traditional architectures while maintaining a power footprint suitable for battery powered devices. With GrAIFlow we offer our customers the opportunity to explore new ultra-low latency use cases, thereby bringing their innovation to every device on the edge.”
The GrAI One chip is supported by GML's GrAIFlow software development kit, which is capable of both conventional program execution and machine learning computation via industry-standard tools such as TensorFlow, Python and C++. The kit includes a compiler, simulator, debugger, graphical editor, and compute and network APIs.
GrAI Matter Labs was founded in 2016 as Brainiac within the iBionext start-up studio in Paris by Ryad Benosman, Bernard Gilly, Giacomo Indiveri, Xavier Lagorce, Sio-Hoi Leng, Bernabe Linares-Barranco and Atul Sinha, a team combining extensive experience in neuromorphic computing, silicon design and entrepreneurship. Benosman and Gilly also co-founded Prophesee, which recently raised $28 million for event-based vision.
GML has offices in Paris, Eindhoven and Silicon Valley, and its technology is based on 20 years of research on the human brain carried out at the Vision Institute in Paris. Inspired by human biology, the company's neuromorphic computing technology aims to overcome the limitations of Von Neumann machines through massively parallel, fully programmable sensor analytics and machine learning at significantly reduced power consumption. Speaking last year when the company raised $15 million in external funding, Ryad Benosman, the company's co-founder and chief scientific officer, said, “Our neuromorphic approach to computing and machine learning achieved impressive results in a hardware/software prototyping project under a DARPA grant. The prototype demonstrated how a new compute paradigm could enable AI completely locally on an edge device without cloud computing support.”