Friend or Foe? In Network Security, Big Data Holds the Answer


Network security threats have become more dangerous, and network operators have to be much more careful and precise in how they meet them.

The world is becoming more digital every day. The operation of fundamental infrastructure processes in business and government increasingly relies on network availability.

Consumers expect the network not only to be available, but also to provide uninterrupted service. As a result, slow is the new down.

In this context, network security threats become more dangerous, and network operators have to be much more careful and precise in how they meet them. For instance, current methods for dealing with distributed denial-of-service (DDoS) attacks often throttle or even interrupt network services for millions of users. Yet, in the flood of packets implicated in an attack, only a small percentage is actually dangerous. This is why the industry has to get better at distinguishing friend from foe.

The obvious answer is better detection and analysis of the billions of packets traversing the network, but the question is how. Until recently, the sheer volume of traffic outstripped the processing power available at an affordable cost. However, new approaches to traffic modeling, based on complex network mapping and big data with multi-dimensional, real-time analytics, are changing the game.

Network security needs more power

Traditional DDoS defense solutions use specialized detection appliances based on deep packet inspection (DPI) to identify abnormally large spikes in traffic flowing toward specific IP addresses. However, these hardware detection devices lack the raw compute power necessary for additional analytics to determine whether a surge is a real attack or a valid high-traffic event, such as a large file transfer from Amazon’s AWS. This inability to tell friendly data from harmful data leads to both false negatives and false positives: real attacks are allowed to pass through, while valid application traffic is dropped.
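To make that limitation concrete, here is a minimal sketch of single-dimension, threshold-based detection. It is written in Python purely for illustration; the record format, counter names, and threshold are assumptions, not any vendor’s actual logic:

```python
# Minimal sketch of traditional, single-dimension DDoS detection:
# flag any destination IP whose inbound rate crosses a fixed threshold.
# The threshold and record format are illustrative assumptions.
from collections import defaultdict

BPS_THRESHOLD = 5_000_000_000  # hypothetical: 5 Gbps per destination IP

def detect_spikes(flow_records):
    """flow_records: iterable of (dst_ip, bytes_sent, duration_s) tuples."""
    bps_per_dst = defaultdict(float)
    for dst_ip, bytes_sent, duration_s in flow_records:
        bps_per_dst[dst_ip] += (bytes_sent * 8) / max(duration_s, 1e-6)

    # Everything above the threshold is flagged, but in this single
    # dimension a popular file release and a volumetric attack look
    # exactly the same; hence the false positives and false negatives.
    return [ip for ip, bps in bps_per_dst.items() if bps > BPS_THRESHOLD]
```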


Once a bandwidth surge triggers the detection appliances, all traffic headed to the identified IP address is redirected to centralized scrubbing centers, where it is cleaned by racks of specialized mitigation appliances. This is a slow, largely manual process, and multi-vector attacks that can change minute by minute make it even more difficult, so hackers are free to cause mass disruption until all the different modes of attack are identified and stopped. Scale and cost are also big problems with this approach. With attack traffic now regularly crossing the 1 Tbps level and continuing to grow, providers must continually increase their investment in bandwidth and appliances or accept insufficient coverage.

The scale of today’s multi-vector attacks is overwhelming for the current approach. It is not only too expensive but, as noted, friendly traffic gets caught up in the incident as well, disrupting mission-critical applications or slowing down consumer services.

There are much more effective ways to gracefully deal with these scenarios. Software-based, multi-dimensional analytics can detect attacks faster and more precisely than the traditional specialized hardware. Looking, for instance, at ratios of packet types and DNS data streams helps to more precisely identify the nature of a large spike. Multi-dimensional analytics get even better when they have visibility into all cloud applications and services so they can instantly understand where the traffic is originating and whether it is likely to be friend or foe.
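As a rough illustration of the idea, the sketch below scores one traffic window on several packet-type and DNS ratios against a learned baseline. The counters, features, and scoring rule are illustrative assumptions, not a description of any particular product:

```python
# Hedged sketch of multi-dimensional anomaly scoring: compare several
# traffic ratios to a historical baseline instead of one volume threshold.
# All counter names and the scoring rule are illustrative assumptions.

def traffic_features(window):
    """window: dict of packet counters for one destination in one interval."""
    total = max(window["total_packets"], 1)
    return {
        "syn_ratio": window["tcp_syn"] / total,    # SYN floods push this up
        "udp_ratio": window["udp"] / total,        # amplification pushes this up
        "dns_resp_ratio": window["dns_responses"] / max(window["dns_queries"], 1),
    }

def anomaly_score(features, baseline):
    """Sum of each feature's relative deviation from its historical baseline."""
    return sum(
        abs(features[k] - baseline[k]) / max(baseline[k], 1e-6)
        for k in features
    )
```

A spike whose ratios match the baseline (say, a big file transfer) scores low, while a SYN- or UDP-heavy surge scores high even at the same raw volume.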

This analytic technique essentially maps the complexity of the internet and uses that model of internet traffic to better understand what it is seeing. It is a big data approach that catalogs historical traffic patterns, including the patterns of previous attacks, and learns how the internet behaves in great detail.

Having this kind of modeling gives these multi-dimensional, real-time analytics the visibility inside a large and suspicious traffic surge to discern whether it is, for instance, part of a perfectly normal large data transfer from AWS that has crossed paths with HBO’s release of the blockbuster season 2 opener of “Westworld,” or whether it is a multi-vector attack using the same slave PCs, IoT devices, and cloud servers that were used in a previous attack.
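A hypothetical sketch of that friend-or-foe check follows: intersect the surge’s source addresses with a catalog of devices seen in previous attacks and with known cloud provider ranges. The catalog, the overlap cutoff, and the verdict labels are invented for illustration:

```python
# Illustrative friend-or-foe triage based on historical attack data.
# The 20 percent overlap cutoff and the verdict labels are assumptions.
import ipaddress

def ip_in_ranges(ip, ranges):
    """True if ip falls inside any of the given ipaddress.ip_network objects."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)

def classify_surge(source_ips, known_attack_devices, known_cloud_ranges):
    """source_ips and known_attack_devices are sets of IP address strings."""
    overlap = source_ips & known_attack_devices
    if len(overlap) / max(len(source_ips), 1) > 0.2:
        return "likely attack", overlap
    if all(ip_in_ranges(ip, known_cloud_ranges) for ip in source_ips):
        return "likely benign cloud transfer", set()
    return "needs deeper analysis", overlap
```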

Critical real-time analysis

Armed with this kind of real-time analysis, it becomes possible to use simple filters such as Access Control Lists (ACLs) at the peering edge to drop traffic originating from the compromised devices carrying out the attack. The offending traffic is dropped almost as quickly as it is detected, before it has an opportunity to enter the network. Combining multi-dimensional analytics with smarter, DDoS-capable routers translates to more precise detection and faster response times, especially since providers now have the means to start automating the mitigation process. There is no need for expensive backhauling and scrubbing to handle the volumetric attacks that constitute well over 90 percent of attack traffic; these are stopped at the edge of the network before they can impact services or customer experience.
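As a sketch of what that automation might look like, the function below turns an analytics verdict into deny rules for the peering edge. The rule syntax is generic and illustrative, not any specific router vendor’s configuration language, and the entry cap is a hypothetical guard for limited router table space:

```python
# Illustrative translation of an analytics verdict into edge ACL entries.
# Syntax is generic; real deployments would use vendor-specific config
# or a mechanism such as BGP Flowspec.

def build_acl_entries(attack_sources, victim_ip, max_entries=500):
    """Emit deny rules for attack sources, ending with a default permit.

    max_entries caps the list because router filter tables (TCAM) are a
    scarce resource; the specific limit here is a hypothetical choice.
    """
    entries = [
        f"deny ip host {src} host {victim_ip}"
        for src in sorted(attack_sources)[:max_entries]
    ]
    entries.append("permit ip any any")  # all other traffic flows normally
    return entries
```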

From a bottom-line perspective, network providers can significantly reduce spend on backhaul infrastructure, better leverage their investment in peering and edge routers, and refocus scrubbing center equipment on what it does best: complex analysis of the application-level attacks that make up the remaining roughly 10 percent of attack traffic.

With the world’s increasing dependence on available network services, for everything from entertainment to mission-critical infrastructure applications, being able to distinguish friend from foe in network traffic precisely is critical. As the sophistication and scale of attacks grow, it is time for a more intelligent approach to ensuring the safety and reliability of network services.


About Naim Falandino

Naim Falandino is a data scientist, software developer, and engineering manager with more than ten years of experience building analytical data pipelines and applications. As Chief Scientist of Nokia Deepfield, he applies his expertise in real-time analytics, machine learning, and information visualization to build products that solve the networking industry’s most complex problems in security, performance, and management. Prior to Deepfield, Naim was a Data Scientist in Ford Motor Company’s Systems Analytics research group and Lead Software Architect at Covisint. He holds an MS in Information Science from the University of Michigan and a BS in Computer Science from Michigan State University.
