Why Intelligence at the Edge is No Longer Optional


AI and intelligence at the edge have become critical because of the fundamental limitations of centralized processing in today’s data-heavy world. The proliferation of sensors, IoT devices, visual systems, and other connected equipment along production lines, in vehicles, embedded in industrial machinery, and elsewhere generates ever-larger data volumes. Much of that data must be analyzed or run through AI models to support fast, intelligent decisions.

Sending all this information back to centralized data centers for processing creates unavoidable bottlenecks. Latency is the most obvious constraint. Milliseconds matter when any system must act in real time.

With autonomous driving, every fraction of a second is crucial. A self-driving car cannot afford to wait for sensor data to travel to a cloud server hundreds of miles away, be processed, and return instructions. Decisions such as braking to avoid an accident must happen at the edge, where data is captured.

Similarly, time is critical when detecting problems along a high-throughput manufacturing production line. Every minute of delay can mean defective products flowing off the line, all of which must be scrapped or remade once the problem is fixed.

Bandwidth is another key factor driving the need for distributed intelligence. AI models for video analytics, industrial IoT, and healthcare imaging consume enormous amounts of data. Continuously transporting this information to central facilities for analysis is not only expensive but also unsustainable.

Network operators already face surging traffic from video streaming and cloud applications; backhauling AI workloads at scale on top of that could overwhelm even the most advanced backbones.

Making the case for edge intelligence: A look at the numbers

Rapidly growing data volumes have challenged data and real-time analytics professionals for as long as most have been in the industry. A recent study puts the impact of that growth into perspective and hammers home the need for edge AI and edge intelligence.

Ciena, in partnership with Heavy Reading (Omdia), conducted a global survey of 77 communications service providers (CSPs) to assess the impacts of AI-driven traffic growth on metro and long-haul networks. The summary report of that survey reveals that CSPs anticipate a dramatic surge in AI traffic within the next three years. For metro networks, 18% of respondents expect AI to contribute more than half of total traffic, while nearly half (49%) expect AI to account for over 30%. Expectations are even higher for long-haul traffic, with 52% forecasting AI to surpass 30% of traffic and nearly a third (29%) predicting AI will contribute more than half of long-haul traffic in that timeframe.

The study highlights the growing role backbone capacity will play in enabling AI connectivity, particularly through high-bandwidth wavelength services. Half of the respondents ranked wavelength services at 100G, 400G, and even 800G as the top service expected to grow the most due to AI demand. By contrast, only 25% saw dark fiber as a primary growth area. Notably, 74% of CSPs identified enterprises, not hyperscalers or cloud providers, as the leading drivers of AI-related network growth over the next three years.

Most importantly, the report underscores that networks are not fully prepared for this impending AI traffic boom. Only 16% of CSPs believe their optical networks are “very ready” for AI, while 39% report they are “ready but still require some work,” and 40% admit their networks are only “somewhat ready.” The top barriers to readiness include capex constraints (38%), challenges around go-to-market strategy (38%), and network management complexity (32%). These findings highlight a critical gap between expected traffic growth and current infrastructure readiness.

Implications for AI and intelligence use cases

With traffic from AI and real-time intelligence applications expected to compete with video, web, and IoT, and to dominate long-haul transport, it is increasingly impractical to backhaul all workloads to centralized data centers. To mitigate costs, reduce latency, and ease pressure on transport infrastructure, these applications must be deployed closer to where data is generated and consumed.

Hence, processing and decision-making must move to the edge. Specifically, organizations must deploy AI and real-time intelligence applications so that inference, and even selective training, runs at the edge. That unlocks the real-time responsiveness AI-powered applications demand.

By processing data locally, organizations can filter and act on the most important insights immediately, while sending only relevant or aggregated data upstream to the cloud for longer-term analysis and storage.
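The filter-and-aggregate pattern described above can be sketched in a few lines. This is a hypothetical illustration, not code from any specific edge platform: the `EdgeNode` class, its threshold, and the summary fields are all assumptions made for the example. Urgent readings trigger an immediate local response, while only a compact summary is prepared for upstream transmission.

```python
# Hypothetical sketch of the local filter-and-aggregate edge pattern.
# EdgeNode and its methods are illustrative names, not a real library API.

from statistics import mean


class EdgeNode:
    def __init__(self, threshold):
        self.threshold = threshold  # local alert threshold (assumed units)
        self.buffer = []            # raw readings retained at the edge

    def ingest(self, reading):
        """Act locally on urgent readings; buffer everything for aggregation."""
        if reading > self.threshold:
            self.act_locally(reading)  # immediate, low-latency response
        self.buffer.append(reading)

    def act_locally(self, reading):
        # Stand-in for a real-time action (e.g., stopping a line, braking).
        print(f"ALERT: reading {reading} exceeds threshold {self.threshold}")

    def aggregate(self):
        """Summarize buffered data for the cloud, then clear the buffer."""
        if not self.buffer:
            return None
        summary = {
            "count": len(self.buffer),
            "mean": mean(self.buffer),
            "max": max(self.buffer),
        }
        self.buffer.clear()
        return summary  # only this compact summary is sent upstream


node = EdgeNode(threshold=90.0)
for r in [71.2, 88.5, 95.3, 70.1]:  # 95.3 triggers an immediate local alert
    node.ingest(r)
print(node.aggregate())
```

The design choice mirrors the article's point: the full stream of readings never leaves the edge; only the small aggregate does, cutting both latency for the urgent path and bandwidth on the upstream path.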

About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
