Before deploying an application or service, it’s critical to take a close look at the interconnected edge each provider offers.
The interconnected edge—a distributed computing model that combines edge computing with an interconnected network architecture—evolved around the turn of the century as carriers needed a way to connect with each other’s wireless, wireline, cable, and fiber networks. Carrier “hotels” began to emerge as network providers moved their networks into data center suites in large metros at addresses such as One Wilshire in Los Angeles, 111 8th Avenue in New York, and 350 East Cermak Road in Chicago.
Today, these data centers and others across the country provide a way for carriers as well as other businesses that generate and consume large amounts of traffic to peer in order to reduce the latency of Big Data analysis, cognitive computing, streaming videos, applications, and Internet of Things (IoT) data packets.
True interconnection—where networks peer with each other in the same data center—can only happen, by definition, in locations where disparate networks can physically “touch.” Some metros have a rich ecosystem of peering and packet exchanges. In other cases, a metro area may have 20 or more networks, but the carriers have not yet agreed to peer with each other at a carrier hotel location. They are present only to deliver bandwidth to specific customers and have no interest in extending their peering fabric to the entire metro.
How to evaluate an interconnected edge
That’s why, before deploying an application or service with one of the data center providers operating in a carrier hotel, it’s critical to take a close look at the interconnected edge each provider offers. Look for a provider whose network actually delivers the connectivity performance and network destinations you require.
First, identify the networks your applications or IoT devices run on and the carrier hotel in that metro area where those networks peer. This gives you a neutral location for a hub that extends to those networks, and a fabric design you can replicate across multiple physical locations.
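The first step above is essentially a set intersection: for each network you depend on, list the carrier hotels where it peers, then find the facilities common to all of them. A minimal sketch in Python, using entirely hypothetical facility and network names (real peering data would come from your carriers or a registry such as PeeringDB):

```python
# Hypothetical sketch: given the carrier hotels where each of your
# networks peers, find the neutral facilities shared by all of them.
# Network names and facility lists below are illustrative, not real data.

def shared_peering_sites(network_sites):
    """Return the set of carrier hotels where every listed network peers."""
    site_sets = iter(network_sites.values())
    common = set(next(site_sets))
    for sites in site_sets:
        common &= sites          # keep only facilities present for every network
    return common

networks = {
    "wireless_carrier_a": {"350 E Cermak, Chicago", "111 8th Ave, New York"},
    "fiber_provider_b":   {"350 E Cermak, Chicago", "One Wilshire, LA"},
    "cable_operator_c":   {"350 E Cermak, Chicago"},
}

hub_candidates = shared_peering_sites(networks)
print(hub_candidates)  # only the Chicago facility is common to all three
```

Any facility that survives the intersection is a candidate neutral location for your hub; an empty result means no single site reaches every network, and you would need hubs in more than one metro.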
From there, take a close look at the providers in that carrier hotel. You may need your provider to coordinate how the network carriers in their data center suites handle peering. For example, if you’re in Minneapolis and run an application on a wireless network that peers in Chicago, you ideally want your data center provider to convince the network carrier to peer in Minneapolis.
Otherwise, the network that the application runs on will have to send packets to Chicago, only to send them back to Minneapolis before the application can respond to end-users. This is often referred to as “scenic routing” and increases latency—thus undermining the benefits of interconnection. You have Layer 1 and Layer 2 connectivity, but Layer 3 connectivity occurs more than 400 miles away.
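The cost of that Chicago detour can be estimated from propagation delay alone. A back-of-the-envelope sketch, assuming light in fiber travels at roughly two-thirds the speed of light (about 200 km per millisecond) and a Minneapolis–Chicago distance of about 400 miles; both figures are approximations, and real paths add queuing and routing overhead on top:

```python
# Back-of-the-envelope estimate of the latency penalty from "scenic
# routing": Minneapolis traffic hairpinning through a Chicago peering
# point. Speed and distance figures are rough approximations.

FIBER_KM_PER_MS = 200.0       # light in fiber: ~2/3 c, ~200 km per millisecond
MSP_CHI_KM = 644              # ~400 miles between Minneapolis and Chicago

# The detour adds a Minneapolis -> Chicago -> Minneapolis leg each way.
extra_one_way_ms = (2 * MSP_CHI_KM) / FIBER_KM_PER_MS
extra_rtt_ms = 2 * extra_one_way_ms   # request and response both take the detour

print(f"extra one-way delay: {extra_one_way_ms:.1f} ms")
print(f"extra round-trip delay: {extra_rtt_ms:.1f} ms")
```

Roughly a dozen milliseconds of added round-trip time from propagation alone may sound small, but it is pure overhead on every request, and it compounds for chatty protocols that need multiple round trips per transaction.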
Avoid walled garden pricing when deploying interconnected edge
Many office buildings in large metros are connected by fiber to a data center located in that metro. However, just because that “last mile” of physical fiber terminates at the data center doesn’t mean the root server hosting the application service sits in that same data center. The actual node for the service could exist in another city or state, so the network provider has to backhaul the data packets to that city, retrieve the required information, and bring the packets back.
When considering all the networks that support your applications and IoT devices, you likely won’t find a single data center provider in a carrier hotel that can put all your eggs in one basket. Nonetheless, you do want to partner with a provider that can convince your primary network providers to keep data packets local.
Should you find such a data center partner, carefully check its carrier connection pricing model. Some offer “walled garden” interconnectivity, where you pay a monthly access fee to each network. These providers discriminate against connectivity terminating outside their data center location. If you need an extended cross-connect from the suite of that provider to another provider’s suite at the same carrier hotel, they will tax you for the connection. This is called “long strawing.”
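The difference between the two pricing models is easy to see with a simple cost comparison. The sketch below uses entirely hypothetical fee figures (no provider’s actual rates) to contrast a walled-garden schedule, with a surcharge on extended cross-connects, against a flat per-cross-connect model:

```python
# Illustrative comparison of monthly interconnection spend under a
# "walled garden" fee schedule versus flat cross-connect pricing.
# All dollar amounts are hypothetical placeholders, not real quotes.

def monthly_cost(cross_connects, per_xc_fee, extended_count=0, extended_fee=0.0):
    """Total monthly spend: base cross-connect fees plus any surcharges."""
    return cross_connects * per_xc_fee + extended_count * extended_fee

# Walled garden: the provider adds a "long straw" surcharge on each
# cross-connect terminating in another provider's suite.
walled = monthly_cost(cross_connects=10, per_xc_fee=300.0,
                      extended_count=4, extended_fee=500.0)

# Open model: one flat fee per cross-connect, no termination surcharge.
open_model = monthly_cost(cross_connects=10, per_xc_fee=300.0)

print(f"walled garden: ${walled:,.0f}/mo vs. open model: ${open_model:,.0f}/mo")
```

Even with modest placeholder numbers, a handful of surcharged extended cross-connects can add a meaningful premium to monthly spend, which is why it pays to model your expected connection mix before signing.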
Conversely, forward-thinking data center providers realize customers need to deploy applications and device networks in different locations for various reasons. They take the approach of focusing on solving customer challenges by facilitating efficient and cost-effective means to interconnect so their customers don’t have to pay exorbitant fees just to run their daily operations.
For customers with a national presence, it’s best to partner with a data center provider that can offer multiple carrier hotel data center suites in major metros across the U.S. Tap into one that offers diverse connectivity options that can support millions of people consuming digital services and millions of IoT devices transmitting data packets.
Carrier migrates to a new suite to reduce data center costs
One recent example of a carrier leveraging its options at a carrier hotel took place at 111 8th Avenue in New York. The carrier has a presence in the building with multiple data center providers and requires direct connectivity to other carriers, network service providers, and internet service providers.
The carrier also operated a network in a suite managed by a data center provider that had legacy equipment requiring a technology refresh. The carrier knew the technology refresh would reduce the footprint that this network required and, before moving into a new data center, considered multiple data center partners with smaller suites. The carrier chose the data center provider that offered favorable pricing terms (e.g., no walled gardens) and could move quickly in providing data center space in their suite.
After migrating to one of the provider’s suites and refreshing the hardware, the carrier connected the suite to its outside plant network facilities. This created an aggregation hub for all services, with connectivity redirected from the previous suite to the new space. The approach also took advantage of the interconnections with all the necessary suites managed by other providers in the building.
Interconnected edge: A foundation for distributed computing
Some markets don’t evolve from an interconnection perspective because the network carriers insist on peering applications and mobile/IoT devices somewhere else. They haven’t figured out how to underwrite a business case for keeping traffic native to each city: instead of every packet draining into one primary city for peering and then being sent back out, peering would happen in a distributed fashion. From a cost perspective, the carriers would rather expand their main data center than build new locations.
Interconnection relies heavily on establishing redundant carrier hotel data centers in the major metro areas of the U.S. While many markets have interconnection points, that is just step one. Step two requires data center providers in carrier hotels to work with their carrier networks to minimize the number of data packets sent to another metro to be peered.
Ultimately, this approach to distributed computing and interconnection will allow carriers and data center providers to scale application performance efficiently and offer connectivity diversity. The interconnected data centers can then serve as the foundation to ensure customers can deliver the application performance their end-users require.