It’s Still Early Days for AI

“We need to get to real AI because most of today’s systems don’t have the common sense of a house cat!” The keynoter’s words drew chuckles from an audience of 3,000 engineers who have seen the demos of systems recognizing photos of felines. There’s plenty of room for skepticism about AI. Ironically, the speaker in this case was Yann LeCun, the father of convolutional neural networks, the model that famously identified cat pictures better than a human.

It’s true that deep neural networks (DNNs) are statistical methods, inexact by their very nature. They require large, labeled data sets, something many users lack. It’s also true that DNNs can be fragile. The pattern-matching technique can return dumb results when the data sets are incomplete and misleading results when they have been corrupted. Even when results are impressive, they are typically inexplicable.

The emerging technique has had its share of publicity, sometimes bordering on hype. The fact remains that DNNs work. Though only a few years old, they are already being applied widely. Facebook alone uses neural nets, some of them quite simple, to make 3 × 10¹⁴ predictions per day, some of which run on mobile devices, according to LeCun.

Cadence and Synopsys both have reported projects using them to help engineers design better chips. Intel helped Novartis use them to accelerate drug discovery. Siemens is using them to accelerate processing of medical images, and scientists are using them to speed up reading genomes of cancer patients.

DNNs are being employed to identify endangered snow leopards in the wild, and they even help brew beer in England and Denmark. Deep learning is here to stay as a new form of computing. Its application space is still being explored, its underlying models and algorithms are still evolving, and hardware is trying to catch up with it all.

It’s “a fundamental transformation of the computing landscape,” said Chris Rowen, a serial entrepreneur who is “100% all in on deep learning” as CEO of startup BabbleLabs, which develops models for audio applications. “I used to write an algorithm and tell a system what to do,” said Rowen. “Now, we have a class of methods not described by an explicit algorithm but a set of examples, and the system figures out the pattern.

Early work in computing with neural networks dates back to the 1950s. (Image: ISSCC, Yann LeCun)

“Just about everything software touches can be influenced by this new method, especially where the data is most ambiguous and noisy — where a conventional programmer could not distinguish relevant from irrelevant bits.”
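Rowen’s contrast can be made concrete with a toy sketch of our own (purely illustrative, not anything from BabbleLabs): the same decision expressed once as a hand-written rule and once as a single logistic unit that recovers the rule from labeled examples alone.

```python
# Illustrative sketch: an explicit, programmer-stated rule versus a model
# that learns the same decision purely from labeled examples.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2D points labeled 1 if they lie above the line y = x.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

# 1) The explicit algorithm: the programmer states the rule directly.
def rule_based(x):
    return float(x[1] > x[0])

# 2) The learned version: a logistic unit fit by gradient descent,
#    given only the examples, never the rule itself.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the logistic loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

learned = ((X @ w + b) > 0).astype(float)
print("learned model matches the hand-written rule on "
      f"{np.mean(learned == y):.0%} of the examples")
```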

It’s early days for this revolution in computing, forcing designers to pull out all the stops in the quest for more performance. Researchers at the non-profit OpenAI said that hardware needs tenfold performance improvements each year to keep up with the demands of training DNNs.

“That’s an amazing requirement … [chips] have to take a scale-up approach; you can’t do 10× a year any other way,” said David Patterson, a veteran researcher whose Turing Award lecture with John Hennessy dubbed this a new golden age for computer architecture, driven in part by deep learning.

In this special report, we give a glimpse of the status and outlook of the chips, the software, and the uses of this emerging technology. ■