Researchers Use Light to Improve AI Computation Speed


With this new method, there could be a much higher rate of transfer between a processor and memory, while also lowering the energy costs.

Researchers at George Washington University have published a new approach to optimizing artificial intelligence computation that uses light instead of electricity.

According to the paper, published in Applied Physics Reviews, the new method achieves throughput two to three orders of magnitude higher than a typical electronic tensor processing unit (TPU).


“We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a TPU, but they consume a fraction of the power and have higher throughput,” said assistant research professor Mario Miscuglio. “When opportunely trained, the processors can be used for performing inference at the speed of light.”

Transmission of data between the processor and memory is considered one of the biggest performance bottlenecks in AI hardware. With this new method, data could be transferred at a much higher rate, while also lowering the energy cost.
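To see why the processor-memory transfer matters so much, consider a back-of-the-envelope sketch (this is an illustration of the general bottleneck, not a calculation from the paper): for the matrix multiplies that dominate AI workloads, compute grows with the cube of the matrix size while data moved grows only with the square, so small workloads are limited by memory bandwidth rather than by the processor itself.

```python
# Illustrative sketch of the processor-memory bottleneck (an assumption
# for exposition, not taken from the Applied Physics Reviews paper).
# For an n x n matrix multiply, arithmetic intensity = FLOPs per byte
# moved between memory and the processor. Low intensity means the
# hardware sits idle waiting on data transfers.

def arithmetic_intensity(n: int, bytes_per_element: int = 4) -> float:
    """FLOPs per byte moved for a naive n x n matrix multiply."""
    flops = 2 * n ** 3                             # one multiply + one add per term
    bytes_moved = 3 * n ** 2 * bytes_per_element   # read A and B, write C
    return flops / bytes_moved

for n in (64, 512, 4096):
    print(f"n={n:5d}  FLOPs/byte = {arithmetic_intensity(n):.1f}")
```

Larger matrices amortize each byte transferred over more arithmetic, which is why reducing the cost of moving data (here, by carrying it optically) pays off most for memory-bound workloads.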

Academics around the world are attempting to reduce the energy cost, and improve the performance, of the hardware used to train, test, and run AI models.

Just this week, the Institute of Theoretical Computer Science at TU Graz unveiled a new optimization model, which aims to run AI models on a millionth of the energy typically dedicated to them.

Whether either of them can be scaled to work with the largest AI models will be the real test.

David Curry

About David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
