MIT Scientists Attempt To Make Neural Networks More Efficient


MIT CSAIL has built a new framework aimed at predicting how well neural networks will perform at scale and what resources they will require.

Maintaining an adequate level of performance with deep neural networks consumes enormous amounts of data center energy. One of the key reasons DeepMind accepted Google's acquisition offer was the wealth of data center capacity it could access for free.

Not everyone can access Google's gigantic stock of data centers, which is why MIT's Computer Science and Artificial Intelligence Lab (CSAIL) built the framework to forecast how well a neural network will perform at scale and what resources that will require.

SEE ALSO: Deep Learning AI Accurately Identifies Sleep Disorders

Through this new framework, data scientists will be able to determine whether it’s worth investing more time and money into a system. It could also provide businesses with more accurate financial projections.

“Our approach tells us things like the amount of data needed for an architecture to deliver a specific target performance, or the most computationally efficient trade-off between data and model size,” said MIT professor Nir Shavit, who co-authored the paper describing the new framework.

“We view these findings as having far-reaching implications in the field by allowing researchers in academia and industry to better understand the relationships between the different factors that have to be weighed when developing deep neural networks, and to do so with the limited computational resources available to academics.”
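The trade-off Shavit describes is typically studied by fitting a scaling law to cheap small-scale runs and extrapolating. As a purely illustrative sketch (not the CSAIL team's actual method, and using made-up numbers), one can fit a power law of the form error ≈ a·n^(−b) to a few small training runs and solve it for the dataset size needed to hit a target error:

```python
# Illustrative sketch only: fit a power law err ≈ a * n^(-b) to results
# from cheap small-scale runs, then extrapolate the dataset size needed
# to reach a target error. All numbers below are hypothetical.
import numpy as np

# Hypothetical (dataset size, validation error) pairs from small runs.
n = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
err = np.array([0.42, 0.31, 0.22, 0.16, 0.12])

# The power law is linear in log space: log(err) = log(a) - b * log(n),
# so an ordinary least-squares line fit recovers a and b.
slope, log_a = np.polyfit(np.log(n), np.log(err), 1)
a, b = np.exp(log_a), -slope

# Extrapolate: how much data would a target error of 0.05 require?
target = 0.05
n_needed = (a / target) ** (1 / b)
print(f"fitted err ≈ {a:.2f} * n^(-{b:.3f}); ~{n_needed:.2e} examples needed")
```

The fitted curve answers exactly the kind of question quoted above: given an architecture's observed scaling behavior, how much more data (or compute) a target performance level would cost, before committing the resources to find out empirically.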

According to the paper, the framework can accurately predict performance using roughly 50 times fewer computational resources. The team next aims to dig deeper into what makes a specific algorithm succeed or fail at scale, which could yield further insight for businesses and data scientists.

David Curry

About David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
