AI Needs Brain-Inspired Technology for Real Intelligence

Everywhere you turn today, there’s talk about AI transforming or disrupting a process or system. But artificial intelligence is not real intelligence, because it doesn’t have the ability to react to unknown situations. The typical AI model uses lots of data and computing power, but it is just a black box that can’t explain how it came to a decision.
That is what Bruno Maisonnier, CEO and founder of French startup AnotherBrain, repeatedly emphasized in an interview with EE Times Europe last month. Achieving true intelligence in AI so consumes him that when his previous startup, Aldebaran Robotics, was acquired by SoftBank for US$100 million in 2013, Maisonnier stepped down from the robotics company to pursue his “general artificial intelligence” venture.

“Robots are great, but at the same time, they are totally stupid,” Maisonnier said. “They are unable to understand the basics; they are not able to understand their surroundings. In 2010, I started looking at AI and into providing robots with the ability to behave more naturally, more like we might expect them to behave.” He looked at deep-learning systems but concluded that “deep learning, what everyone is calling AI today, is a lie.
“In deep learning, neural networks, big data, and AI, there is not a single drop of intelligence,” Maisonnier said. “It’s very powerful, and there are a lot of possibilities — I’m not discrediting it. But there is no intelligence.”

Bruno Maisonnier, AnotherBrain CEO (Image: AnotherBrain)

Inspired by the book “On Intelligence” by PalmPilot inventor Jeff Hawkins, Maisonnier looked at implementing some of Hawkins’s ideas. “I created AnotherBrain to deliver this promise — the promise of creating a more general intelligence.” He figured out the framework but was missing the electronics part; he wanted to have the system embedded into a chip. So Maisonnier brought on brain vision specialist Patrick Pirim as his chief science officer.

Maisonnier takes pains to highlight that the world is not predictable and that we are nowhere near the ability to build autonomous cars or even sophisticated robots with the technologies available today. “Real intelligence is a system able to analyze and understand the way our brain does, in real time, without needing huge quantities of data and in a very frugal way,” he said. “This needs to be done in a chip consuming a fraction of a watt versus, say, 15 or 20 watts for the inference phase of deep learning and thousands of watts for the learning phase of the training.”

The chip doesn’t need huge quantities of data, and its results should be explainable, he added. To illustrate what he means by explainability, he cited the example of a self-driving car that identifies a motorcycle. The car’s cameras and sensors detect the object and determine that it is a motorcycle, but can they explain why it is a motorcycle, beyond ticking the box because the object matches an entry in a database? Currently, they can’t. A human driver asked to explain why an object is a motorcycle, however, might point out that it has thick, dark wheels; a motor in the middle that makes a distinctive sound; a fuel tank; and a helmet-wearing rider. “This is explainability,” said Maisonnier. “This is what we are [working on at AnotherBrain]. We already demonstrated our proof of concept for a French car manufacturer on an industrial production line for quality control.”
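The difference is easy to see in miniature. The sketch below pairs every label with the evidence behind it; the feature names and the majority rule are invented for illustration and are not AnotherBrain’s method.

```python
# A minimal sketch of classification that is explainable by design, in the
# spirit of Maisonnier's motorcycle example. The feature names and the
# majority rule are illustrative inventions, not AnotherBrain's method.

MOTORCYCLE_EVIDENCE = {
    "thick_dark_wheels": "it has thick, dark wheels",
    "central_motor": "a motor sits in the middle of the frame",
    "fuel_tank": "a fuel tank is visible",
    "helmeted_rider": "a helmet-wearing rider sits on top",
}

def classify_with_explanation(detected_features):
    """Return a label plus the human-readable evidence that supports it."""
    evidence = [reason for feature, reason in MOTORCYCLE_EVIDENCE.items()
                if feature in detected_features]
    # Commit to the label only when most of the defining features are present.
    if len(evidence) >= 3:
        return "motorcycle", evidence
    return "unknown", evidence

label, reasons = classify_with_explanation(
    {"thick_dark_wheels", "central_motor", "helmeted_rider"})
print(label, "because", "; ".join(reasons))
```

The point is not the toy rule but the output: the system can report which features carried the decision, which is exactly what a database lookup cannot do.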

WHY THE AI MADE THAT DECISION

Explainability enables a system to more closely mimic the way nature works. In its forecast of the top 10 trends for data and analytics in 2019, Gartner says that explainable AI is necessary because AI models are increasingly deployed to augment and replace human decision making, and in some scenarios, businesses need to justify how these models arrive at their decisions. Most advanced AI models, however, are complex black boxes that cannot explain why they reached a specific recommendation or decision.

Explainable AI (XAI) concept (Image: DARPA)

There is a growing movement to create systems that are better able to say why they made a decision. The U.S. Defense Advanced Research Projects Agency (DARPA) has an explainable AI (XAI) program to enable “third-wave AI systems,” which would understand the context and environment in which they operate and, over time, build underlying explanatory models that allow them to characterize real-world phenomena.

Researchers from TU Berlin, Fraunhofer Heinrich Hertz Institute (HHI), and Singapore University of Technology and Design (SUTD) published a paper in Nature last month that examines how AI systems reach their conclusions and whether their decisions are truly intelligent or merely “averagely” successful. The paper underscores the importance of understanding the decision-making process, asking whether what a model has learned is valid and generalizable or whether it has merely latched onto a spurious correlation in the training data.
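The paper’s own analysis is considerably more sophisticated, but the failure mode it targets can be reproduced in a few lines. The sketch below, using scikit-learn on synthetic data of my own invention, trains a model on a dataset contaminated with a “watermark” feature that happens to track the labels, then shows the apparently successful model collapsing once the artifact is removed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A toy "spurious correlation" demonstration on synthetic data. Feature 0 is
# an artifact (say, a watermark present mostly on positive examples); feature
# 1 is the genuinely predictive cue, but noisier.
rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)
watermark = y + rng.normal(0, 0.05, n)   # artifact: tracks the label almost perfectly
signal = y + rng.normal(0, 1.0, n)       # real cue: informative but noisy
X = np.column_stack([watermark, signal])

model = LogisticRegression().fit(X, y)
print("weights:", model.coef_)           # the artifact's weight dwarfs the real cue's

# Remove the artifact and the "successful" model collapses toward chance:
X_clean = np.column_stack([np.zeros(n), signal])
print("with artifact:", model.score(X, y), "without:", model.score(X_clean, y))
```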

LEARNING TO WALK IN FIVE MINUTES

In parallel, bio-inspired AI promises to overcome the limitations of current machine-learning approaches, which rely on pre-programming a system for all potential scenarios — a process that is complex, labor-intensive, and inefficient. Another DARPA program, Lifelong Learning Machines (L2M), is developing systems that can learn continuously during execution and become increasingly expert while performing tasks. Announced in 2017, L2M is carrying out research and development of next-generation AI systems and their components, as well as of learning mechanisms in biological organisms that can be translated into computational processes. L2M supports a large base of 30 performer groups via grants and contracts of varying duration and size.

One L2M grant recipient is the University of Southern California (USC) Viterbi School of Engineering, which has published results of its exploration into bio-inspired AI algorithms. In the March cover story of Nature Machine Intelligence, the USC team details its successful creation of an AI-controlled robotic limb driven by animal-like tendons that can teach itself a walking task and even automatically recover from a disruption to its balance.

Behind the USC researchers’ robotic limb is a bio-inspired algorithm that can learn a walking task on its own after only five minutes of “unstructured play,” or conducting random movements that enable the robot to learn its own structure as well as its surrounding environment. The robot’s ability to learn by doing is a significant advancement toward lifelong learning in machines. What the USC researchers have accomplished shows that it is possible for AI systems to learn from relevant experience, finding and adapting solutions to challenges over time.
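The published system is far richer, but the core loop — play randomly, model what happens, invert the model to act — fits in a short sketch. Everything below, from the linear “plant” to the least-squares fit, is a simplified stand-in rather than the USC algorithm.

```python
import numpy as np

# A minimal sketch of learning from "unstructured play": a simulated limb
# whose dynamics are unknown to the learner is explored with random motor
# commands, a forward model is fitted to that experience, and the model is
# then inverted to act. The linear plant and least-squares fit are simplified
# stand-ins for the USC team's tendon-driven system.
rng = np.random.default_rng(1)
TRUE_PLANT = np.array([[0.8, 0.3],
                       [-0.2, 0.6]])     # hidden command-to-movement map

def limb_response(command):
    """Simulated physics: how the limb actually moves (unknown to the learner)."""
    return TRUE_PLANT @ command + rng.normal(0, 0.01, 2)

# 1. Unstructured play: issue random commands and record what happens.
commands = rng.uniform(-1, 1, (200, 2))
movements = np.array([limb_response(c) for c in commands])

# 2. Learn a forward model of the body from that experience alone.
learned_plant, *_ = np.linalg.lstsq(commands, movements, rcond=None)

# 3. Act: solve the learned model for the command that yields a desired move.
target = np.array([0.5, -0.1])
command = np.linalg.solve(learned_plant.T, target)
print("achieved:", limb_response(command), "target:", target)
```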

Another recent paper, this one on swarm intelligence, presents a model that could have implications for achieving more intuitive AI. In the February Proceedings of the (U.S.) National Academy of Sciences, researchers from Harvard University’s John A. Paulson School of Engineering and Applied Sciences and Department of Organismic and Evolutionary Biology outlined a new framework that demonstrates how simple rules linking environmental physics and animal behavior can give rise to complex structures in nature. By focusing on perhaps the best-known example of animal architecture — the termite mound — the theoretical framework shows how living systems can create micro-environments that harness matter and flow into complex architectures using simple rules.

In his own ponderings on termites’ architectural achievements, Maisonnier said, “Nature is clearly perfect at optimization of systems.” There are no complex systems driving the behavior of termites — just simple rules applied to elementary agents. If you correctly choose the simple rules, the end result is a termite mound that has amazingly controlled interior temperature and humidity conditions, even if the outside temperature rises to 42°C. The result is “a system that looks like it has been engineered, but the fact is there is no chief, no king, no queen, nothing like that, and no engineer. It’s just something emerging from natural behaviors.”
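Two local rules are enough to see the effect. The sketch below implements a classic wood-chip clustering model, not the Harvard framework itself: agents wander at random, pick up material they stumble on, and drop it only where material already sits. Scattered chips end up in a handful of dense piles, with no coordinator anywhere in the code.

```python
import numpy as np

# A minimal stigmergy sketch: agents follow two local rules (pick up a chip
# you step on; drop your chip only where chips already are) and scattered
# material self-organizes into piles. Grid size, counts, and rules are
# illustrative choices, not the Harvard model.
rng = np.random.default_rng(2)
SIZE, N_CHIPS, N_AGENTS, STEPS = 40, 300, 50, 200_000

grid = np.zeros((SIZE, SIZE), dtype=int)
for _ in range(N_CHIPS):
    x, y = rng.integers(0, SIZE, 2)
    grid[x, y] += 1
start_piles = np.count_nonzero(grid)

agents = rng.integers(0, SIZE, (N_AGENTS, 2))
carrying = np.zeros(N_AGENTS, dtype=bool)

for _ in range(STEPS):
    i = rng.integers(N_AGENTS)
    agents[i] = (agents[i] + rng.integers(-1, 2, 2)) % SIZE   # random walk
    x, y = agents[i]
    if not carrying[i] and grid[x, y] > 0:     # rule 1: pick up a chip
        grid[x, y] -= 1
        carrying[i] = True
    elif carrying[i] and grid[x, y] > 0:       # rule 2: drop where others are
        grid[x, y] += 1
        carrying[i] = False

# Emergent result: the same material now occupies far fewer, denser piles.
print("occupied cells:", start_piles, "->", np.count_nonzero(grid))
```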

DISTINGUISHING TRANSISTORS FROM GATES

So how are termites relevant to developing AI systems? Maisonnier starts with the fundamentals: “If you want to understand how a computer works, you can say it is nothing but a network of transistors. But while you might know the basics of how a transistor works, it’s difficult to relate that at a higher level to how a computer works because the conceptual gap between the two is far too huge.

“However, if you start thinking in terms of intermediary-level functions, like gates, then based on these, it is now easier to understand how the computer is working. And those functions can be produced using networks of transistors. The importance here is the intermediary-level functions — the gates.”
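The analogy maps directly onto code. In the sketch below, the only “transistor-level” knowledge lives inside a single NAND gate — in CMOS, a four-transistor network — and every higher function is composed from that intermediary level, never from transistors directly.

```python
# A sketch of the abstraction ladder Maisonnier describes: one
# intermediary-level function (NAND) encapsulates the transistor level, and
# everything above it is built from NAND, never from transistors directly.

def nand(a, b):
    """Intermediary level: in CMOS, a network of four transistors
    (two PMOS in parallel for pull-up, two NMOS in series for pull-down)."""
    return 0 if (a and b) else 1

# Gate level, composed purely from the intermediary function:
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b): return or_(and_(a, not_(b)), and_(not_(a), b))

def half_adder(a, b):
    """Computer level: an arithmetic block, two abstraction steps removed
    from any transistor."""
    return xor_(a, b), and_(a, b)          # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```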

It’s the same with the brain, Maisonnier said: There are elementary components — neurons — that are like transistors, and then the brain is analogous to the computer. Again, he said, the gap between the two is huge; you cannot extrapolate how the brain works based on knowledge of the neurons alone.

Maisonnier’s team has developed what the startup calls Organic AI technology to transform sensors into smart sensors. (Image: AnotherBrain)

“But in the brain, you have functions called cortical columns. If you understand what those columns are doing, which is known science, then you can reproduce these functions — either through networks of neurons, neural networks, which is very difficult, or directly by classical electronics. At AnotherBrain, we are using classical electronics to reproduce functions that are the functions of the cortical column.”

Instead of considering the human brain at the microscopic/neuron level, as in deep learning, AnotherBrain is replicating the brain’s behavior at a more macroscopic scale, wherein large neuronal populations have a dedicated function such as the perception of motion or curvature. “We are focused on the creation of an ecosystem rather than just the technology,” said Maisonnier.
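As a loose illustration of what engineering such a function directly, rather than training it, can look like, the sketch below hand-builds a “motion perception” unit that reports where and how strongly a scene is changing. The frame-differencing scheme is an invented stand-in and says nothing about how AnotherBrain actually implements its cortical-column functions.

```python
import numpy as np

# An illustrative, hand-engineered "motion perception" unit: a dedicated
# population-level function built directly, not learned. Frame differencing
# is an invented stand-in, not AnotherBrain's implementation.

def motion_unit(prev_frame, frame, threshold=0.1):
    """Report where and how strongly a grayscale scene is changing."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    moving = diff > threshold
    if not moving.any():
        return None                         # nothing is moving
    ys, xs = np.nonzero(moving)
    # An explainable summary: the location and energy of the motion.
    return {"centroid": (xs.mean(), ys.mean()), "energy": diff[moving].mean()}

rng = np.random.default_rng(3)
f0 = rng.random((32, 32)) * 0.05            # static, slightly noisy background
f1 = f0.copy()
f1[10:14, 20:24] += 0.8                     # a small object appears
print(motion_unit(f0, f1))
```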

Rather than replace machine-learning techniques, AnotherBrain seeks to complement them. “There are a lot of applications that are very finely served by deep learning and neural networks,” Maisonnier said. Deep learning will remain relevant for many applications, he said, but there are other applications that really need intelligence — that have to understand what’s happening and how to behave in a messy and unpredictable world.

AnotherBrain has developed what it calls Organic AI technology to transform sensors into smart sensors for use in the industrial automation, internet of things (IoT), and automotive markets. The startup already has the technology running on a GPU to solve customer problems in industrial automation, said Maisonnier, and the next target application is autonomous driving.

AnotherBrain’s technology is meant to be explainable by design and to operate at the edge with minimal energy and data resources, so that AI can be deployed more effectively in factories and logistics. The company’s products include real-time manufacturing robot guidance and one-shot-learning visual inspection. The technology is based on a set of bio-inspired algorithms that allow sensors to understand their surroundings, learn from them autonomously, and explain the resulting decisions to the user.
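One-shot learning is easiest to picture in the inspection setting, where a single golden sample is the entire training set. The sketch below flags a part whose pixels stray too far from that one reference; it illustrates the concept only and is not AnotherBrain’s Organic AI implementation.

```python
import numpy as np

# A minimal sketch of one-shot visual inspection: one "golden" reference
# image is the entire training set, and a part fails when it deviates too
# much from it. Illustrative only, not AnotherBrain's implementation.

def inspect(reference, part, tolerance=0.15):
    """Compare a part against the single golden sample; flag defect pixels."""
    diff = np.abs(part.astype(float) - reference.astype(float))
    defect_mask = diff > tolerance
    return not defect_mask.any(), defect_mask

rng = np.random.default_rng(4)
golden = rng.random((64, 64))               # the one and only example
good_part = golden + rng.normal(0, 0.02, golden.shape)
bad_part = good_part.copy()
bad_part[30:34, 30:34] += 0.5               # a scratch-like defect

print("good part passes:", inspect(golden, good_part)[0])   # True
print("bad part passes:", inspect(golden, bad_part)[0])     # False
```

■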