What’s an in-memory data grid used for? And how does it differ from streaming technologies? A podcast with Dr. William Bain, CEO of ScaleOut Software.
Businesses have turned to in-memory technologies to speed analysis of real-time data. In this podcast, ScaleOut Software discusses how an in-memory data grid enables operational intelligence:
Justin: Hello, and welcome to the first edition of the RTInsights “Real-Time Talk” podcast series. I’m Justin Grammens, industry analyst at RTInsights and your host for this ongoing discussion about the Internet of Things, real-time analytics, and cognitive computing solutions that provide significant business value.
Today, our focus is on operational intelligence: the ability to discover actionable insights in live operational data. Please join me in welcoming Dr. William Bain, founder and CEO of ScaleOut Software, a provider of in-memory computing for operational intelligence.
Welcome, Bill. Thank you for joining us.
Bill: Good morning, Justin. Thank you very much for having us join your call.
Justin: Let’s just jump right into it. If you could tell us a little bit about your company and what was the impetus behind its founding?
Bill: Sure, yes. ScaleOut Software was founded back in 2003, and we introduced our first products in 2005. The goal in founding the company was to help our customers scale the performance of their applications so that they could handle very large workloads with low latency. We provide distributed in-memory data storage and integrated computing to help our customers deliver results much more quickly than they otherwise could and handle large workloads, whether in e-commerce, financial services, or other applications.
Justin: Excellent. You touched on a few of its features. What are you seeing in the industry, specifically around the problems it addresses in the area of real-time operational intelligence?
Bill: Well, what we see is the trend towards Big Data, which has been evolving now for the last eight years. We see companies that have gotten into Big Data to look at data trends and changes that are occurring, and to be able to feed that back to customers. The challenge has been that Big Data techniques, like MapReduce and Spark, generally work in the data warehouse, offline, on static data that’s been pulled from a live system, analyzed offline, and then fed back to the live system, maybe overnight or after a period of hours. What we find is that our customers need the ability to provide instant, immediate feedback on their fast-changing data.
Some examples would be in financial services … if you’re looking at stock trades occurring as the market fluctuates during the day, you want to be able to look at your portfolio and instantly identify areas that need trading to rebalance that portfolio. These applications apply in many other areas, like medical applications and e-commerce. We can go into some of those as well.
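The in-the-moment portfolio check Bill describes can be sketched in a few lines. This is purely illustrative and not ScaleOut’s API; the function name, data, and tolerance are hypothetical:

```python
# Hypothetical sketch: flag portfolio positions whose weight has drifted
# from its target as live prices change. Not ScaleOut's API.

def rebalance_alerts(positions, prices, targets, tolerance=0.05):
    """Return symbols whose current weight deviates from its target
    weight by more than `tolerance` (fraction of total portfolio value)."""
    values = {sym: qty * prices[sym] for sym, qty in positions.items()}
    total = sum(values.values())
    return [
        sym for sym, value in values.items()
        if abs(value / total - targets[sym]) > tolerance
    ]

positions = {"AAPL": 100, "MSFT": 50, "BND": 400}   # shares held
targets = {"AAPL": 0.40, "MSFT": 0.30, "BND": 0.30} # target weights

# As the market fluctuates during the day, re-run the check on new prices:
prices = {"AAPL": 210.0, "MSFT": 180.0, "BND": 72.0}
print(rebalance_alerts(positions, prices, targets))
```

In the scenario described in the interview, a check like this would run continuously against in-memory position data as prices update, rather than in an overnight batch.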
Justin: A lot of this stuff, I’m assuming, is kind of ringing true with business owners and others at the executive level? Is that where you’re seeing a lot of interest?
Bill: Absolutely, because people are deploying live systems that need to handle large numbers of customers, such as e-commerce shopping systems. As you’ve seen over the Christmas shopping season, the explosion in e-commerce continues to grow. These systems need to react in the moment, for example to run flash sales or to respond to which brands are selling best in an e-commerce setting.
You see the same challenges for the Internet of Things, medical systems, financial services, and countless other application areas.
Related: Use cases for in-memory data grids
Justin: Cool. What’s unique about your solution versus some of the other competitors people would see out there?
Bill: Well, our technology derives from in-memory data storage … we call it an in-memory data grid … with in-memory computing integrated into this mission-critical, highly available computation environment. That gives us the ability to work within a live system and provide fast feedback to that system. Most of the techniques in this area, which can generally be called data-parallel computation, have really been developed for offline use in the data warehouse.
What’s really unique about us is our taking the technology that was developed back in 2005 and has been evolving since then — of in-memory data grids — and pushing that to provide operational intelligence by integrating computation into that technology. Our ability to integrate the computation in a way that delivers very fast results with minimum latency is what makes it uniquely valuable for these live systems.
Justin: In-memory data grids have been evolving; it feels like they’ve changed over the last couple of years. They’re no longer just the distributed hash table model that I have seen in the past. What are some features that your customers have been asking for, or pushing you guys to develop, around the in-memory data grid?
Bill: Well, what our customers need is very fast results, with low latency, so that they can respond to a situation that’s occurring in the moment. In addition, these systems have to be highly available. They are always running; they’re not subject to going down due to a server failure or something of that sort. This ability to integrate high availability with low latency, and then scale that to run on tens or hundreds of servers handling hundreds of thousands or even millions of customers — that’s what’s unique about this technology and what you don’t find in the technologies of the data warehouse.
Justin: Bill, how does your technology differ from classic streaming technologies, such as Spark Streaming?
Bill: That’s a great question, Justin. The difference is really quite important. Classical streaming technologies that you find in complex-event processing systems or Spark Streaming more recently — what they do is they look at the stream of data flowing in from the live system and analyze that stream of data to find insights. Now, what we do with our in-memory data grid technology is we model the state of the live system and we use the streaming data to enhance that live in-memory model. That allows us to provide deeper introspection than you would typically find in a streaming system, because the model of the live system is evolving over time as the streaming data flows in. The result is deeper introspection and better feedback to the live system.
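The distinction Bill draws can be sketched in a small, single-process illustration. This is not ScaleOut’s API; the class, field, and event names are hypothetical. The idea is that each live entity (here, a shopper) keeps an in-memory model that incoming events enrich, so queries introspect accumulated state rather than only the raw event stream:

```python
# Hypothetical sketch: an in-memory model of each live entity, enriched
# by streaming events, versus analyzing the raw stream alone.
from collections import defaultdict

class ShopperModel:
    """In-memory state for one shopper, updated as events stream in."""
    def __init__(self):
        self.views = 0
        self.cart = []

    def apply(self, event):
        if event["type"] == "view":
            self.views += 1
        elif event["type"] == "add_to_cart":
            self.cart.append(event["item"])

grid = defaultdict(ShopperModel)  # stand-in for a distributed data grid

def ingest(event):
    # Route each event to the model of the entity it belongs to.
    grid[event["shopper"]].apply(event)

for ev in [
    {"shopper": "s1", "type": "view"},
    {"shopper": "s1", "type": "view"},
    {"shopper": "s1", "type": "add_to_cart", "item": "headphones"},
]:
    ingest(ev)

# Deeper introspection: query the evolving model, not just the stream.
s1 = grid["s1"]
print(s1.views, s1.cart)
```

A pure stream analysis would see three independent events; the model retains the shopper’s accumulated history, which is what enables the richer feedback described above.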
Justin: Well thank you for your time, Bill. Fascinating stuff you guys are doing in this area of operational intelligence. I really enjoyed our conversation and look forward to talking with you again in the future.
Bill: Well, thank you Justin, very much. I appreciate the opportunity.
Want more? Check out our most-read content:
White Paper: How to ‘Future-Proof’ a Streaming Analytics Platform
Research from Gartner: Real-Time Analytics with the Internet of Things
E-Book: How to Move to a Fast Data Architecture
The Value of Bringing Analytics to the Edge
Three Types of IoT Analytics: Approaches and Use Cases
Video — Plat.One CEO: Enterprise IoT Doesn’t Have to Be Hard