Real-time systems pose development challenges because of their complexity, their distributed nature, and the need to perform analysis in real time.
Real-time systems are becoming critical across a gamut of industries, with applications ranging from cybercrime prevention to logistics to smart buildings and cities. The booming demand and the desire to deploy such applications on distributed architectures greatly complicate development efforts.
RTInsights recently sat down with Marty Sprinzen, CEO and co-founder of VANTIQ, which offers a high-productivity development platform for event-driven, real-time collaborative applications. We discussed the obstacles businesses face when trying to develop real-time systems, the technologies and methodologies that help, and more. Here is a summary of our conversation.
RTInsights: How are real-time systems different today than ten years ago?
Sprinzen: Real-time systems are just now coming into existence in a general sense. Ten years ago, the only real-time systems were built for very specific needs. You would see them used in manufacturing. There were some logistics applications. FedEx had real-time systems for moving packages. But these applications were very limited. The systems themselves were installed in one location, isolated, and typically not connected to the internet. Those uses were cool.
Now we’re talking about real-time systems that can monitor anything, anywhere. It’s a whole new world of computing. It’s a new generation of applications. Real-time systems are going to make everything we’ve done so far with computing look limited. There is now a much bigger market and opportunity because software is going to run everywhere, analyze everything, and potentially, with humans in the loop, control things. It is a whole new world of computing, and very few people realize the impact these systems will have.
RTInsights: What are the main development challenges of real-time systems versus non-real-time systems?
Sprinzen: The biggest challenge is complexity. Software is going to run everywhere, and these systems will naturally be distributed. You’ll have software running in the cloud, on edge computers, and even on smartphones.
Another source of complexity is that real-time systems, by definition, must be fast. They must react in real time. Therefore, everything from analysis to the use of AI to how local information is combined with information from elsewhere must happen in real time. And that’s a whole new level of complexity.
RTInsights: What are the common obstacles and pitfalls when developing real-time systems?
Sprinzen: When you have these distributed systems that are far more complex, they typically use many different software infrastructures. For example, a system might use some type of event brokering, maybe Kafka. It also may use dispatching on the servers, such as AWS Lambda. And the systems must communicate everywhere in real time.
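The event brokering pattern Sprinzen mentions can be illustrated with a minimal in-process publish/subscribe sketch. This is not Kafka (a real broker adds partitioning, persistence, and consumer groups); the class and topic names here are purely illustrative.

```python
from collections import defaultdict
from typing import Callable

class EventBroker:
    """Minimal in-process publish/subscribe broker (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handler callables

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)

# Usage: route sensor readings to an alerting handler.
broker = EventBroker()
alerts = []
broker.subscribe(
    "temperature",
    lambda e: alerts.append(e) if e["value"] > 30 else None,
)
broker.publish("temperature", {"sensor": "hvac-1", "value": 35})
```

A production system would replace this with a distributed broker, but the decoupling idea is the same: producers publish events without knowing which consumers react to them.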
So, developing real-time systems often means combining many different products, and that’s highly complex to do. Kafka alone is very complex. When you add other products to it, and you’ve got to manage all this infrastructure, the complexity increases. The world was a lot easier 30 or 40 years ago, when there were those big mainframes running CICS. Then you had one system that did everything for you.
That’s no longer the case. Over the last few decades, complexity has grown dramatically. Real-time systems now add even more complexity. You have lots of different products and lots of physical environments that must be combined. And I didn’t even mention all the sensors and the security that they will require.
RTInsights: How do you navigate the software development lifecycle for real-time systems?
Sprinzen: Like the systems, the software development lifecycle is more complex. Upfront, how do you do requirements definition? If you’re dealing with lots of events, you’ve got to define when those events occur, what happens, what other events are integrated with them, and then build the software from that.
Then, at the other end of the lifecycle, you’ve got to do testing. Distributed testing can be a nightmare. How do you test these systems? And operationally, how do you deploy these systems? Let’s say you have a thousand edge computers. You’re not going to deploy a new release to all thousand, hopefully. You might deploy it to a few of them, perhaps some test systems or five production systems, and see how it runs.
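The staged rollout Sprinzen describes, releasing to a handful of the thousand edge computers first, can be sketched as a simple canary-selection step. The node names and fraction are assumptions for illustration, not any particular deployment tool's API.

```python
import random

def pick_canary_nodes(nodes, fraction=0.005, minimum=5, seed=None):
    """Select a small subset of edge nodes to receive a new release first.

    fraction: share of the fleet to canary; minimum: floor on the subset size.
    """
    count = max(minimum, int(len(nodes) * fraction))
    rng = random.Random(seed)  # seeded for reproducible selection
    return rng.sample(nodes, min(count, len(nodes)))

# Usage: out of 1,000 edge computers, pick 5 canaries for the new release.
edge_nodes = [f"edge-{i:04d}" for i in range(1000)]
canaries = pick_canary_nodes(edge_nodes, seed=42)
# Roll out to the canaries, watch health metrics, then widen the rollout.
```

In practice the selection would also consider node diversity (hardware, region, workload) rather than sampling uniformly.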
So, every element of the development lifecycle is far more complex. Requirements definition is far more complex. DevOps is far more complex. Testing is far more complex.
There is a solution. Raise the abstraction level so that you don’t see all these complexities when you’re developing the applications. Although a system might be distributed across the cloud, the edge, and other locations, wouldn’t it be nice to develop it as if it were going to run in one cluster, on one computer system? Then you break up the application (what we call partitioning) and put the appropriate parts on the edge and in the cloud.
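The partitioning idea, develop the application as if it ran in one place, then map its parts to physical locations, can be sketched as follows. The component names and the latency-sensitivity rule are illustrative assumptions, not VANTIQ's actual partitioning mechanism.

```python
# Components are declared once, as if the app ran on a single system; a
# separate partitioning step then maps each component to a deployment target.
COMPONENTS = {
    "sensor_ingest":  {"latency_sensitive": True},
    "stream_filter":  {"latency_sensitive": True},
    "trend_analysis": {"latency_sensitive": False},
    "dashboard":      {"latency_sensitive": False},
}

def partition(components):
    """Place latency-sensitive components on the edge, the rest in the cloud."""
    placement = {}
    for name, props in components.items():
        placement[name] = "edge" if props["latency_sensitive"] else "cloud"
    return placement

placement = partition(COMPONENTS)
```

The point of the abstraction is that the application logic never changes; only the placement policy does, so the same code can be repartitioned as the physical environment evolves.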
RTInsights: What other issues should businesses be concerned about?
Sprinzen: While more software, and more complex software, must be developed for real-time applications, there is less expertise available. Demand for developers and data scientists is growing. So, we have these dynamics in the market.
The only way to address these dynamics, we believe, is to raise the abstraction level, so it is easier to develop real-time applications. What’s needed is low code or agile development environments. They simplify development.
There’s much more software that will need to be developed over the next few decades. Software is going to be running everywhere. You have more software, less expertise, and fewer people working in IT departments, building these kinds of systems. You must hide the complexity.
But it’s not just painting a screen, you know, a WYSIWYG type of application. That certainly is part of it. But it’s more. You need to hide the underlying complexities. The developer isn’t dealing with an advanced event broker that does analysis everywhere. Instead, a low code approach lets the developer work as if the system is one system running in one location, and then distribute it to the physical environment. And that is what VANTIQ is about. That’s what we do. We hide the complexities of these systems.
RTInsights: How does VANTIQ have this expertise and technology?
Sprinzen: My background is software. I was a VP of engineering at Ingres. Before that, I created system monitoring at Candle. This goes way back into the eighties, when the software industry was getting started, so I have a long history in it. But the area of expertise I would say I am strongest in is application development and deployment. Going way back to the Ingres days, we developed something called ABF, Applications-By-Forms, which read the database schema and created a default application.
Then we created a WYSIWYG product, Windows4GL. I was also the founder of a company called Forte back in the nineties. Forte was a big success, enabling the building of mission-critical internet apps like New York City’s 911 applications, home banking, and more. These were all applications that previously were not possible.
Today’s challenges are far greater than the challenges we faced back then because the environments are far more complex, and people don’t realize this. The truth is that a new generation of systems, of platforms, needs to be created to build today’s and tomorrow’s distributed real-time applications.
Paul Butterworth, my co-founder, and I have the expertise to do this. Right now, we are alone in the market. We were kind of alone at Forte, too. But because of our experience building application development environments that are naturally distributed and have very high-level constructs, we could carry that expertise into this company.
And because of that, there is no competition right now. Some companies are trying to build these applications by taking four or five different products and putting them together. The failure rate, according to Forbes, is 72%. We believe we are at the right place at the right time, just as the market for these new-age, real-time applications is coming into existence.
RTInsights: What is in the future?
Sprinzen: Over the next decade, it’s going to be really interesting. We are getting partners, such as big telcos, who are building out their MECs [multi-access edge computing]. They’re going to have computers everywhere. And we’re talking to them because they know that with us, they get a combination: we’re half application development environment and, under the covers, half infrastructure. We know about the physical distribution and management of the logic, which could be different on different edge computers.
Our system is so dynamic, you can change what runs where in real time. The system can be designed to monitor itself. For example, if information is not getting to the cloud quickly enough, you could move some analysis to the edge and change it in real time. That goes back to my Candle days, when monitoring systems was very important.
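The self-monitoring behavior described above, moving analysis to the edge when cloud latency degrades, can be sketched as a simple rebalancing rule. The component names, latency threshold, and "movable" flag are assumptions for illustration, not the actual runtime's policy.

```python
def rebalance(placements, cloud_latency_ms, threshold_ms=200):
    """Move movable analysis steps to the edge when cloud latency is too high.

    placements: component name -> {"location": ..., "movable": bool}
    Returns a new placement map; non-movable components are left as-is.
    """
    adjusted = {}
    for name, info in placements.items():
        if info["movable"] and cloud_latency_ms > threshold_ms:
            adjusted[name] = {**info, "location": "edge"}
        else:
            adjusted[name] = dict(info)
    return adjusted

# Usage: a monitoring loop observes 450 ms latency and rebalances.
placements = {
    "anomaly_filter":  {"location": "cloud", "movable": True},
    "long_term_store": {"location": "cloud", "movable": False},
}
adjusted = rebalance(placements, cloud_latency_ms=450)
```

A real system would run this decision continuously against live telemetry and would also handle moving components back when latency recovers.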
One thing we’re seeing, because of the capabilities of our software, is that companies are looking at things they’ve never done before. We have one company, a very large manufacturer of air conditioners, that is looking at selling air quality as a service. The service would monitor temperature, humidity, and things like the quality of the air itself. They could then manage different devices, not just the air conditioning, to adjust the air quality.
These are the kinds of changes that we’re seeing in the industry. It’s absolutely mind-boggling where technology is going to be able to lead us.