As reported by Hindustan Times: A team from the Massachusetts Institute of Technology (MIT) has developed a new chip designed specifically to implement neural networks.
It is 10 times as efficient as a mobile GPU (graphics processing unit), so it could enable mobile devices to run powerful AI algorithms locally rather than uploading data to the Internet for processing.
A GPU is a specialized circuit designed to accelerate the creation of images in a frame buffer intended for output to a display.
Modern smartphones are equipped with advanced embedded chipsets that can perform many different tasks depending on their programming.
GPUs are an essential part of those chipsets, and as mobile games push the boundaries of their capabilities, GPU performance is becoming increasingly important.
Neural nets were widely studied in the early days of artificial intelligence research, but by the 1970s, they had fallen out of favor. In the past decade, however, they have come back under the name “deep learning.”
“Deep learning is useful for many applications such as object recognition, speech and face detection,” said Vivienne Sze, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science, in an MIT statement.
The new chip, which the researchers dubbed “Eyeriss,” can also help usher in the “Internet of things” – the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination.
With powerful AI algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet.
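To make that concrete, here is a minimal sketch of the idea in Python. The classifier, the labels, and the reporting endpoint are hypothetical placeholders invented for illustration, not part of the Eyeriss work; the point is simply that inference happens on the device and only the conclusion is transmitted.

import json
import urllib.request

def classify_locally(image_pixels):
    # Hypothetical stand-in for an on-device neural network; a real
    # deployment would run inference on dedicated hardware such as Eyeriss.
    return "cat" if sum(image_pixels) % 2 == 0 else "dog"

def report_conclusion(label, endpoint="https://example.com/api/report"):
    # Package only the classification result as JSON; the raw image
    # never leaves the device, which is the privacy benefit described above.
    payload = json.dumps({"label": label}).encode("utf-8")
    # The request is built but not sent, so this sketch runs offline.
    return urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"})

image = [0, 17, 42, 99]             # placeholder for camera sensor data
label = classify_locally(image)     # inference stays on the device
request = report_conclusion(label)  # only the conclusion would be uploaded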
The team recently presented its findings at the International Solid-State Circuits Conference in San Francisco.
At the conference, the MIT researchers used “Eyeriss” to implement a neural network that performs an image-recognition task. It was the first time a state-of-the-art neural network had been demonstrated on a custom chip.
“This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” explained Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab.