Sponsored by Waylay

Low Code Automation for Cloud-Native Data-Driven Applications


Low code automation can keep developers from getting bogged down with things that have nothing to do with the problems they are trying to solve.

Low code holds great promise to speed the development of innovative applications. But beyond basic drag-and-drop assembly, there are many issues to consider when developing and deploying today's modern data-intensive applications using the technology.

What is needed is something that complements the benefits of low code by abstracting away problems that can arise when different groups use different tools and applications, which may run in multi-cloud environments. RTInsights recently sat down with Veselin Pizurica, CTO, co-founder, and president of Waylay IO, to discuss these issues and how higher-level automation lets different groups collaborate.

Here is a summary of our conversation.

RTInsights: What are the problems when building modern cloud-native, data-centric applications?

Veselin Pizurica

Pizurica: Obviously, there is more than one problem, but let me start in some order. So, what do we mean by cloud-native these days? About five to ten years ago, we thought about delegating computation to the cloud. Today we are one step further in that journey, going beyond abstracting machines, networks, and storage. These days, when we talk about cloud computing, we are talking about service computation, such as executing machine learning models, using notification services to create support tickets, sending emails and SMS messages, etc. The implication of this evolution is that today's development looks very different from the past: cloud-native developers are connecting these dots, these pieces of information, and building applications on top of that. In its own right, that introduces a lot of complexity. Suddenly, they have to manage small living snippets of code that talk to these services, understand the communication patterns between those code snippets, and finally build the logic on top. That means the architecture you are dealing with becomes very complicated. That's the first problem.

The second problem is that major cloud players such as AWS, Azure, or GCP offer a variety of microservices to developers to create cloud applications without a consolidated application development and deployment portal. The analogy I like to make is the following: imagine inviting friends for dinner. You start the day by going to a grocery store: you take a basket and start picking up tomatoes, carrots, and other vegetables, and then when you are back home, you cook your meal. The same is true of cloud development today. Since the architecture is influenced by the application you are building, everything starts backwards. To come back to the previous analogy, before you know which microservices to shop for, you need to ask your visitors what they want for dinner. It turns out that writing cloud applications requires a lot of microservices. That brings yet another kind of complexity: how you deploy and manage that application in the cloud, because there are many moving parts. These technical challenges are the reason there are so many companies in the services market. They provide tooling, offering things like traceability and observability, fighting the symptoms rather than providing a remedy.

At Waylay IO, we figured out that most automation recipes are more or less the same, so the platform comes with the right set of microservices already deployed. Connecting the dots was then the next big challenge, so we've focused a lot of effort over the past seven years on solving that problem. More about that later.

Another category of problems comes not from technical issues, but from fear. Fear of losing control, fear of vendor lock-in, fear of weak security, unpredictable cost, etc. Fear of ownership, because you are delegating, in a way, your application and business operations to someone else. Many big companies are offloading their workloads into the cloud without knowing whether the total cost of ownership of the application will reflect their business goals. They will be charged per volume of consumption, and hence their cost might skyrocket while their business is still in the early stages. As the cost is driven by the use of cloud microservices, cloud-native architects have also become responsible for OPEX cloud cost.

RTInsights: Beyond development, what other challenges are there with respect to today’s data-centric applications?

Pizurica: It's interesting that you mention data-centric because that already implies something. Often it means that people are narrowing the OT/IT convergence challenge to the problem of aggregating different data sets, data pipelines, and ML/offline analytics on top. It turns out that once you have finally learned something from the data, you need to apply that knowledge, by improving your operations or creating new business models. But how? Connectivity, data collection, and AI/ML modeling are only the beginning of knowledge discovery. Applying that knowledge to your business is the ultimate goal.

That's another kind of problem. It is related to the fact that platforms are becoming a place where people meet, collaborate, and implement automation use cases. And with that said, there is something called Conway's law, which dictates that applications are built the way your organization is structured.

If I have developers, subject matter experts, IT folks, or machine learning guys, they’ll be working in isolation using their own tools. So, development and deployment are disconnected. The challenge is not so much about the variety of different sources, it is the variety of different organizations that need to collaborate. What we offer is that one place where this collaboration can happen. Starting from data ingestion to storage to visualization and automation, machine learning experts, IT folks, developers, and DevOps can collaborate and deploy applications jointly.

I will give you one example to illustrate the problem. One of our bigger industrial customers collected and stored a lot of data from thousands of machines. They had machine learning experts looking at this data and building different prediction and anomaly detection models, but the challenge now is how to deploy them in production. Different models might be applicable to different machine types. Also, different problems might require different case handling: some end customers would require integration with Salesforce, some would require integration with ticketing systems or CRM tools, etc. Suddenly it's not only a machine learning problem. It becomes a DevOps and IT problem, but these groups are not using the same tools to build applications end to end, so now they are all stuck and frustrated; there is a feeling that nothing is moving. The reason is that automation is so often seen as an afterthought.

People think of industrial IoT in a linear way, with four stages: connection, collection, storage, and finally analytics, that is, building machine learning models and reports. But the final phase (analytics) should be followed by applying that knowledge via automation. At that point, you realize there is one more thing you have not thought of, which is automation, and hence there is still the problem of applying that knowledge back to your business case. At Waylay, we are building the automation platform as the central place where people meet.

RTInsights: Are there additional issues if you need to develop apps for multi-cloud environments?

Pizurica: The first question is, is this necessary? The big three [cloud providers] will tell you multi-cloud is not necessary. The reason is that they provide data centers in different countries to solve the problem of regulatory requirements. But the world is not as global as we would like it to be. In some countries, for regulatory reasons, you must deploy your application in their cloud. In Asia, that might be Alibaba, or you might deploy it onsite in a private data center. You either need a multi-cloud strategy, or you will have to reimplement the same application for different regions.

Another aspect is, as I mentioned, that one of the fears is vendor lock-in. Even though most cloud providers have very similar offerings, they are different enough that you cannot replicate the same use case and build more or less the same application simply by swapping one cloud provider for another (using its microservices).

The big three have done a great job of building a robust and very scalable set of microservices. Therefore, only a few companies have the stomach, so to speak, to replicate that microservice and automation framework in a way that is truly multi-cloud. Only a few have the stamina, or even, I would say, the courage to try to achieve that.

RTInsights: Why not just use open-source solutions?

Pizurica: We also use open source. We are not trying to reinvent too many wheels. However, there is a difference between us doing that and end customers doing it themselves.

Why is that? Every customer can try to do the same thing we've done: use the open-source tools and then build the automation layer on top. And companies are doing that. But it is an enormous undertaking. We have been in business for seven years. We know how these open-source components should be orchestrated together.

Another point to consider is that open source doesn't mean free. We are also very well aware that some of these open-source communities, though not all of them, require sponsorship and support, and we adhere to that. We are also a big supporter of OpenFaaS, one of the frameworks we use as an underlying technology.
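To make the OpenFaaS mention concrete, here is a minimal sketch of what a function looks like in OpenFaaS's standard `python3` template: the platform routes each request body to a `handle` function and returns its result as the response. The echo behavior shown is purely illustrative, not anything Waylay-specific.

```python
# handler.py -- minimal OpenFaaS function (python3 template convention).
# The template invokes handle() with the raw request body and sends
# whatever it returns back to the caller.
import json

def handle(req):
    """Echo the request back inside a small JSON envelope."""
    return json.dumps({"received": req})
```

A function like this becomes one of the small, independently deployable snippets discussed above; the platform handles routing, scaling, and deployment around it.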

RTInsights: How do low code solutions help?

Pizurica: Low-code development promises that developers can use a platform to code at a very fast pace, with minimum setup effort and quick deployment. In theory, it should reduce the complexity of the application development process.

Now, if you then ask yourself, what is that low-code building block, that little Lego brick? You can think of it as a small snippet of code that is useful across multiple use cases. In the Lego analogy, I create that brick once and then use it over and over again to build a tower. And then I might build a bridge out of that same brick, still using the same snippet of code. That, to me, is the promise.

Coming back to your first question about the microservice and complexity…It is one thing to create Lego blocks and another to have a framework that easily lets you recycle or use these components in different use cases. That requires a very sophisticated orchestration platform.

So let me try to explain it another way. There is a term used in computability theory called Turing completeness. If somebody says "my new thing is Turing complete," that means that, in principle, it could be used to solve any computational problem. Software languages are Turing complete. We mentioned earlier that the basic building block of a low-code platform should be a small snippet of code which, like a Lego brick, is reusable across different use cases. When serverless hit the mainstream, it was widely accepted that serverless was the best candidate for that low-code Lego brick approach. And that brings me to the story of Turing-complete automation. If we are to use code snippets to implement our logic, we need an extremely powerful rules engine that can orchestrate these code snippets without resorting back to coding it all in a software language; otherwise, what's the point?
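A toy sketch can show what "orchestrating snippets with rules instead of code" means. This is not Waylay's engine, just a minimal illustration under invented names: snippets are plain callables, and a rule, expressed as data rather than code, wires a condition snippet to an action snippet.

```python
# A toy rules engine: logic lives in a data structure (the rule),
# not in hand-written control flow, and snippets stay reusable.
def run_rule(rule, snippets, context):
    """If the condition snippet returns True for this context,
    invoke the action snippet; otherwise do nothing."""
    if snippets[rule["if"]](context):
        return snippets[rule["then"]](context)
    return None

snippets = {
    "temp_high": lambda ctx: ctx["temp"] > 70,
    "open_ticket": lambda ctx: f"ticket opened for device {ctx['device']}",
}
rule = {"if": "temp_high", "then": "open_ticket"}
```

Changing the automation here means editing the rule dictionary, not rewriting code, which is the essence of the orchestration argument above; a production engine would of course add scheduling, state, and error handling.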

There are many low-code platforms, and they are very useful for solving one particular problem (marketing automation, work orders, etc.). For instance, if I use Salesforce, I would like to enable people who are not good coders, or who are not coders at all, to use a drag-and-drop interface to create a use case. That's also low code. But you cannot use that tool for any use case outside the Salesforce ecosystem; it is not meant, for instance, as a general marketing automation tool.

At Waylay, we have created an almost, if not fully, Turing-complete automation technology that can orchestrate these snippets of code. That means you don't have to code the lower layers. We want to liberate developers from getting bogged down in things that have nothing to do with the problems they ought to solve. The Waylay platform is a pre-made automation stack where API gateways, multi-tenancy, lambdas, databases, and all other services are embedded and pre-installed. Developers have nothing to set up, nothing to manage.

Salvatore Salamone

About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
