In Big Data, Mean Time is Money

To begin to unlock the true value of their data and reduce Mean Time to Decision (MTTD), organizations need to eliminate their big delays—and that means new approaches to Big Data analysis and entirely new processing architectures. Here, RTInsights contributor Patrick McGarry explains why investing in platforms based on FPGA technologies will be critical to future success.

The old adage “time is money” is a cliché but, in an era when transactions are measured in fractions of a second, it is truer than ever. In pharmaceuticals, bringing a new product to market just one day earlier can result in millions of dollars in additional revenue. The effect is even more pronounced in markets such as financial services. Or consider time-sensitive applications such as fraud detection, cyber crime investigation or national security, where immediate data processing can have immense value—to the extent that lives and livelihoods can be saved.

So, if processing and analyzing today’s data today holds more value than processing today’s data tomorrow, why do companies wait days, weeks or even months to get critical business insights from their data? Because they have to.

These big delays are due to today’s technology choices that require time-consuming data preparation and complex data analysis frameworks. Businesses have had to settle for a slow status quo, which means their MTTD—the time it takes to make data-driven decisions from raw data sources—is measured in weeks or sometimes months, instead of the minutes or seconds that are required.
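To make the metric concrete, MTTD can be tracked as the average lag between the moment raw data arrives and the moment a decision is made from it. The sketch below is purely illustrative: the timestamps are invented, and a real pipeline would pull them from its ingestion and decision-support systems.

```python
from datetime import datetime

# Hypothetical event log: when raw data arrived vs. when a decision was
# finally made from it. Real values would come from pipeline telemetry.
events = [
    ("2016-03-01 08:00", "2016-03-15 09:30"),  # roughly a two-week lag
    ("2016-03-02 10:15", "2016-03-29 16:00"),  # nearly a month
    ("2016-03-03 11:45", "2016-03-18 14:20"),
]

fmt = "%Y-%m-%d %H:%M"
lags = [
    (datetime.strptime(decided, fmt) - datetime.strptime(arrived, fmt)).total_seconds()
    for arrived, decided in events
]

# Mean Time to Decision, expressed in hours for readability.
mttd_hours = sum(lags) / len(lags) / 3600
print(f"Mean Time to Decision: {mttd_hours:.1f} hours")
```

Tracked over time, the same calculation shows whether new tooling is actually moving decisions from weeks toward minutes.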

The longer your MTTD, the greater the risk your Big Data initiative will fail to:

  • Answer the questions you need answered
  • Deliver the most value to the business with fast, actionable insights
  • Provide accurate views into essential data to solve real problems and inform decisions

Deriving value from business data means achieving real-time insights for industries ranging from financial compliance to eDiscovery to cyber security. And “real time” should mean fractions of a second—not days or weeks.

Reliance on Old Architecture Hobbles Big Data Agility

When fractions of a second count, enterprises can’t make changes if they are stuck with the slow technology that defines the status quo. Current data pipelines rely on legacy x86 architectures—an approach that cannot keep up in an era of high-velocity, high-volume data that is growing exponentially thanks to the Internet of Things (IoT) and the accompanying explosion of tweets, photos and other data sources.

Complexity is also an issue. The vast majority of data has some sort of human element and is not easily parsed by machines designed to deal with binary 1s and 0s. With the emerging IoT market and the data explosion it is driving, the problem is only set to get worse.

Today’s data analysis solutions simply can’t keep up with the volume and velocity demands of Big Data because they are hampered by bottlenecks including:

  • Sluggish data analysis (because of limited processing power)
  • Extensive time to transform and load data
  • Expensive data indexing operations
  • Relatively slow networking speeds
  • Interaction of complex software stacks
  • Complex analysis requirements

IT teams have done their best to compensate for these issues by cobbling together technologies to analyze data faster, but it is clear that this approach isn’t working. Hadoop and Spark have been revised (again and again) to try to address these and related challenges. But to see any kind of performance gain, organizations have had to build and maintain sprawling server clusters—a strategy that introduces a new set of bottlenecks and still does not address the growing, critical need to analyze both batch (data at rest) and streaming data simultaneously in a single, unified platform.
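For context, that “single, unified platform” requirement is the direction Spark’s Structured Streaming API points on conventional clusters: the same DataFrame logic is applied to data at rest and to data in motion. The following is a minimal PySpark sketch of the pattern; the HDFS paths, schema and threshold are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("unified-batch-and-stream").getOrCreate()

# Batch side: analyze transactions already at rest (hypothetical path).
batch_df = spark.read.json("hdfs:///data/transactions/2016-03-01/")
suspicious = batch_df.filter(col("amount") > 10000).groupBy("account").count()
suspicious.show()

# Streaming side: the identical logic applied to files as they arrive,
# reusing the schema inferred from the batch data.
stream_df = (spark.readStream
             .schema(batch_df.schema)
             .json("hdfs:///data/transactions/incoming/"))
suspicious_live = stream_df.filter(col("amount") > 10000).groupBy("account").count()

# Continuously print the running aggregation as new data lands.
query = (suspicious_live.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```

Even with that unified programming model, the heavy lifting still lands on the cluster itself, which is exactly where the bottlenecks described above reappear.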

The High Cost of Slow MTTD Bottlenecks

Is real-time analysis of data really that much more valuable than the wait times we have grown accustomed to accepting? The answer is unequivocally yes, and companies that embrace real-time analysis of Big Data (whether batch, streamed or both) will find themselves on the cusp of finally being able to use their data to address a wide assortment of challenges.

In order to do this, the solution isn’t to update current technology. It’s to re-architect a complete, high-performance data analysis solution from the ground up. That’s why we are seeing moves by the big technology vendors to architect their own hardware, even down to the chip level.

Driving growth, reducing costs, gaining competitive advantages and unlocking answers that were not accessible in the past are imperative for today’s businesses. But to do so, organizations must rip delays out of the entire process. Delays cost money, reduce the value of answers and make management lose faith in Big Data initiatives. And, unfortunately, today’s Big Data architectures walk hand in hand with big delays. The delays have to go, and so do the architectures that cause them.

Big Data Needs a Purpose-Built Architecture

Compelled by the relentless growth in data volume and velocity, enterprises have shown renewed interest in using field-programmable gate arrays (FPGAs) to execute operations in parallel and deliver the real-time performance needed to drive decision making. Users of vast and complex data are looking to FPGA-based systems to speed the entire life cycle of data analysis by eliminating bottlenecks such as the Extract, Transform and Load (ETL) and indexing steps required by x86 architectures.

The good news is that well-architected, FPGA-accelerated systems are now available and offer several benefits. Used with open application programming interfaces (APIs), what was once a complex hardware programming exercise is abstracted behind a simple, intuitive interface. That opens the door to their primary benefit: extensive performance gains.
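The article does not name a particular product or API, so the fragment below is a purely hypothetical illustration of what such an abstraction might look like from the analyst’s side. Every class, method and path is invented; the stub simply scans files in software to show the interface shape, whereas a real FPGA-accelerated system would dispatch the same call to hardware.

```python
import re

class Accelerator:
    """Hypothetical facade, invented for illustration only. A real
    FPGA-accelerated system would run search() on parallel hardware
    pipelines; this stub scans in software to show the interface shape."""

    def __init__(self, device="/dev/fpga0"):
        self.device = device  # assumed device path, illustrative only

    def search(self, files, pattern):
        # The analyst states *what* to find; how it runs is hidden behind the API.
        for path in files:
            with open(path, errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if re.search(pattern, line):
                        yield path, lineno, line.rstrip()

# Usage: raw, un-indexed files are queried directly -- no ETL, no index build.
# The log path and query string are placeholders.
accel = Accelerator()
for path, lineno, line in accel.search(["/var/log/syslog"], r"card_present=false"):
    print(path, lineno, line)
```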

For targeted analytics applications, FPGA-based systems can achieve performance two orders of magnitude or more beyond that of the servers typically found in today’s data centers. Because these systems are high-performance, easy to use and cost-favorable compared with large contemporary clustered solutions, many industries stand to benefit—especially financial services, life sciences (for disease research) and national security.

For instance, in the financial space (where large clusters are routinely deployed for use cases ranging from high-frequency trading to compliance), an FPGA-based system can deliver dramatic leaps in performance when seconds truly count in making a trade or meeting compliance targets.

Putting the Real Back in Real-Time

The key to Big Data success is reducing MTTD—the period between data creation and getting actionable insights. Shrinking MTTD results in a profound and positive impact on the bottom line, competitiveness and overall efficiency.

Delays, uncertainty and rigidity are the enemies of success. To unlock insights trapped in massive volumes of data, analysts need tools that keep pace with the current speed of business. They must overcome the inherent bottlenecks of conventional analysis tools and bloated x86 architectures.

To begin to unlock the true value of their data and reduce MTTD, organizations will need to eliminate their big delays—and that means new approaches to Big Data analysis and entirely new processing architectures. Investing in quick-to-deploy, higher-performance and more agile platforms based on FPGA technologies will be critical to future success.


About Patrick McGarry

Patrick McGarry brings extensive technology and leadership experience in hardware and software engineering. He is currently Vice President of Engineering at Ryft.
