Companies like Google have made breakthroughs in image and face recognition through deep learning, using giant data sets and powerful computers (see “10 Breakthrough Technologies 2013: Deep Learning”). Now two leading chip companies and the Chinese search giant Baidu say hardware is coming that will bring the technique to phones, cars, and more.
Chip manufacturers don’t typically disclose their new features in advance. But at a conference on computer vision Tuesday, Synopsys, a company that licenses software and intellectual property to the biggest names in chip making, showed off a new image-processor core tailored for deep learning. It is expected to be added to chips that power smartphones, cameras, and cars. The core would occupy about one square millimeter of space on a chip made with one of the most commonly used manufacturing technologies.
Pierre Paulin, a director of R&D at Synopsys, told MIT Technology Review that the new processor design will be made available to his company’s customers this summer. Many have expressed strong interest in getting hold of hardware to help deploy deep learning, he said.
Synopsys showed a demo in which the new design recognized speed-limit signs in footage from a car. Paulin also presented results from using the chip to run a deep-learning network trained to recognize faces. It didn’t hit the accuracy levels of the best research results, which have been achieved on powerful computers, but it came pretty close, he said. “For applications like video surveillance it performs very well,” he said. The specialized core uses significantly less power than a conventional chip would need to do the same task.
The new core could add a degree of visual intelligence to many kinds of devices, from phones to cheap security cameras. It wouldn’t allow devices to recognize tens of thousands of objects on their own, but Paulin said they might be able to recognize dozens.
That might lead to novel kinds of camera or photo apps. Paulin said the technology could also enhance car, traffic, and surveillance cameras. For example, a home security camera could start sending data over the Internet only when a human entered the frame. “You can do fancier things like detecting if someone has fallen on the subway,” he said.
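A minimal sketch of the gated security-camera behavior Paulin describes might look like the following. Every frame is analyzed locally, and data crosses the Internet only when a person shows up; the detector, threshold, and uploader here are mocks invented for illustration, not anything Synopsys has described.

```python
# Sketch: on-device detection gates what the camera sends over the network.
import random

CONFIDENCE_THRESHOLD = 0.6  # assumed operating point for the on-device model

def detect_person(frame):
    # Mock of the embedded deep-learning model: returns a confidence score
    # that a person is present in this frame.
    return random.random()

def upload(frame_id, confidence):
    # Mock of the network call that fires only once something is detected.
    print(f"frame {frame_id}: person detected ({confidence:.2f}), uploading")

for frame_id in range(20):          # stand-in for the live video stream
    frame = object()                # placeholder for the actual pixel data
    confidence = detect_person(frame)
    if confidence >= CONFIDENCE_THRESHOLD:
        upload(frame_id, confidence)
    # otherwise nothing leaves the device
```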
Jeff Gehlhaar, vice president of technology at Qualcomm Research, spoke at the event about his company’s work on getting deep learning running in apps on existing phone hardware. He declined to discuss whether the company is planning to build support for deep learning into its chips. But speaking about the industry in general, he said that such chips are surely coming. Being able to use deep learning on mobile chips will be vital to helping robots navigate and interact with the world, he said, and to efforts to develop autonomous cars.
“I think you will see custom hardware emerge to solve these problems,” he said. “Our traditional approaches to silicon are going to run out of gas, and we’ll have to roll up our sleeves and do things differently.” Gehlhaar didn’t indicate how soon that might be. Qualcomm has said that its coming generation of mobile chips will include software designed to bring deep learning to camera and other apps (see “Smartphones Will Soon Learn to Recognize Faces and More”).
Ren Wu, a researcher at the Chinese search company Baidu, also said that chips supporting deep learning are needed to take the technique beyond powerful research computers and into devices people use every day. “You need to deploy that intelligence everywhere, at any place or any time,” he said.
Being able to do things like analyze images on a device without connecting to the Internet can make apps faster and more energy-efficient, because data doesn’t have to be sent back and forth, said Wu. He and Qualcomm’s Gehlhaar both said that making mobile devices more intelligent could temper the privacy implications of some apps by reducing the volume of personal data, such as photos, transmitted off a device.
“You want the intelligence to filter out the raw data and only send the important information, the metadata, to the cloud,” said Wu.
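As a rough illustration of the idea Wu describes, a device could run recognition locally and ship only a compact summary to the cloud. The payload fields and the stand-in detector below are assumptions made for this example, not details from Baidu.

```python
# Sketch: send a small metadata summary instead of the raw image.
import json
import time

def run_on_device_model(frame_bytes):
    # Stand-in for the local deep-learning network; a real device would call
    # its embedded inference engine here.
    return [{"label": "person", "confidence": 0.91},
            {"label": "bicycle", "confidence": 0.74}]

def to_metadata(detections, min_confidence=0.5):
    # Keep only what the cloud actually needs: labels, scores, a timestamp.
    return {
        "timestamp": time.time(),
        "objects": [d for d in detections if d["confidence"] >= min_confidence],
    }

raw_frame = bytes(640 * 480 * 3)   # dummy 640x480 RGB frame, roughly 0.9 MB
payload = json.dumps(to_metadata(run_on_device_model(raw_frame)))

print(f"raw frame: {len(raw_frame):,} bytes")
print(f"metadata sent instead: {len(payload)} bytes -> {payload}")
```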