
Speed, Sustainability, Scale: How Optical Matrix Multiplication Will Transform AI


Matrix multiplication forms the backbone of all AI computations. The high throughput at low latency that comes with 3D optical computing is particularly valuable for AI inference tasks.

Oct 21, 2023

The current world of AI is power-hungry and compute-limited. Model development is advancing rapidly, but that advancement demands drastically more compute power. Existing transistor-based computing is approaching its physical limits and is already struggling to meet these growing demands.

Big players are already attempting to address this with the development of their own custom chip solutions. However, the hardware bottleneck may be too severe to overcome with traditional electronic processors. So, how can technology fully enable this exponentially increasing need for compute power?

Matrix Multiplication

In large language models, matrix multiplication accounts for over 90% of the compute. Through structured sequences of multiplications and additions, it supports the different functional blocks of AI. And it’s not just language models. This basic linear algebra operation is fundamental to almost every kind of neural network: it achieves the massive interconnection of neurons, performs convolution for image classification and object detection, processes sequential data, and more. It is a simple concept but integral to efficiently manipulating and transforming the data powering AI and an endless list of other applications, so its importance cannot be overstated.
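To make the operation concrete, here is a minimal, illustrative sketch of matrix multiplication as nested multiply-adds, the same pattern a dense neural-network layer executes (plain Python, no libraries):

```python
# Matrix multiplication as repeated multiply-adds, the core
# operation behind neural-network layers.
def matmul(A, B):
    """Multiply an m x k matrix A by a k x n matrix B."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]  # one multiply-add
    return C

# A dense layer is essentially y = W x (plus a bias): here a 2x3
# weight matrix applied to a 3x1 input vector.
W = [[1, 2, 3], [4, 5, 6]]
x = [[1], [0], [2]]
print(matmul(W, x))  # [[7.0], [16.0]]
```

Every AI accelerator, electronic or optical, is ultimately judged on how fast it can execute this inner multiply-add loop at scale.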

As AI models grow larger, more of these matrix operations must be performed, demanding ever more compute power. Even now, electronics are being pushed to their limits to deliver the required performance. Is there an alternative?


Optical Matrix Multiplication

Optics is already being used in many ways to transform our lives, most notably in optical communication over fiber networks. Optical computing is naturally the next step. Whereas digital electronics requires a huge number of transistors to perform even the simplest arithmetic operations, optical computing performs calculations using the laws of physics. Input information is encoded into beams of light, and matrix multiplication is performed using the natural properties of optics, such as interference and diffraction. Information can be encoded in multiple wavelengths, polarisations, and spatial modes, allowing enormous parallelism, and the computation occurs, quite literally, at the speed of light.
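As a rough illustration, one optical matrix-vector multiply can be pictured as beams of light attenuated by a weight mask and summed on detectors. The following is a toy numerical model of that picture, not a description of any real optical system (real systems also exploit interference and can encode signed values, e.g. via phase):

```python
# Toy model of an optical matrix-vector multiply (illustrative only).
# Inputs are encoded as beam intensities, weights as the transmission
# of a mask, and each detector sums the light that reaches it.
def optical_matvec(mask, beams):
    # In real optics, every detector integrates all beams at once;
    # here we model that physical summation row by row.
    return [sum(t * b for t, b in zip(row, beams)) for row in mask]

mask = [[0.5, 0.25],   # transmission values in [0, 1]
        [0.5, 0.75]]
beams = [2.0, 4.0]     # input intensities
print(optical_matvec(mask, beams))  # [2.0, 4.0]
```

The key point is that the summation is free: it happens in the physics of light hitting a detector, rather than in a loop of transistor operations.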

See also: AI Workloads Need Purpose-built Infrastructure


Adding A New Dimension With 3D Optics

With the end of Dennard scaling and Moore’s law, it is time to revisit the fundamentals of computing. Digital electronics is inherently confined to a 2D layout: transistor gates and circuits are fabricated on a wafer, and computation takes place with information flowing between different units on the 2D plane. This 2D architecture requires ever-increasing transistor density, causes severe interconnection problems, and suffers from the notorious memory bottleneck. Change has begun with the development of 3D-stacked memories, but the industry as a whole has a long way to go.

Now, optics can completely change the game by performing computation naturally in the 3D space. Adding a new dimension relaxes many of the constraints found in traditional computing. Interconnecting components is easier and far more energy efficient, and it allows ever-increasing throughput (how many calculations can be performed in a given time) without impacting latency (how quickly each calculation can be performed). This is entirely unique to 3D optics: whether you’re multiplying ten numbers together or 10,000, it all happens together at the same time, at the speed of light. This has huge consequences for the scalability of optical processors, allowing them to reach 1000x the speed of current digital processors.

Besides the inherent scalability of 3D optics, optical clock rates can be 100x faster than traditional electronics, and wavelength multiplexing (using multiple wavelengths of light to process information in parallel) opens the door to another 100x increase. Bringing this all together yields an exponential scaling of computing speed, with the higher throughput, lower latency, and increased reliability that only 3D optical matrix multiplication can offer.
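Taking the article's headline factors at face value, the claimed gains compound multiplicatively. A back-of-envelope check (these factors are the article's claims, not measurements):

```python
# Back-of-envelope combination of the claimed speedup factors.
clock_speedup = 100        # optical vs. electronic clock rate (claimed)
wavelength_channels = 100  # parallel wavelengths via multiplexing (claimed)

combined = clock_speedup * wavelength_channels
print(f"{combined:,}x")  # 10,000x
```

This is before counting the throughput gains the article attributes to 3D scalability itself, which is why the overall scaling is described as exponential rather than a single fixed factor.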


What Will This Mean for AI?

Matrix multiplication forms the backbone of all AI computations, irrespective of the application. Notably, the high throughput at low latency that comes with 3D optics is particularly valuable for AI inference tasks in data centers, an application that is fuelled by real-time responsiveness and efficiency.

With its remarkable improvements in bandwidth, latency, speed, and scalability over traditional electronics and integrated photonics, and its compatibility with existing machine learning algorithms, 3D optical computing is poised to revolutionize AI applications.

Dr. Xianxin Guo

Dr. Xianxin Guo is Co-Founder and Head of Research at Lumai, an Oxford-based start-up focused on the development of optical neural networks (ONNs). Xianxin leads the development of optical computing technology at Lumai. Prior to co-founding the company, he obtained his PhD in physics from the Hong Kong University of Science and Technology in 2018. Alongside developing Lumai’s cutting-edge technology, Xianxin is currently an RCE 1851 Research Fellow at the University of Oxford and a Stipendiary Lecturer at Keble College. Xianxin has ten years of experience in optics and quantum physics across the UK, Canada, Hong Kong, and China, and is the primary inventor of Lumai’s optical training technology.
