There is a distinct flow in the DevOps pipeline that lends itself to automation.
If you step back and take the long view, DevOps is an effective approach to digital transformation for enterprises that develop their own custom applications. That used to mean larger enterprises, but with infrastructure-as-a-service and platform-as-a-service, the tools for building applications and the infrastructure to run them are just a few clicks away. Enterprises of any size can now build custom applications, engage customers more effectively, and leverage DevOps to keep the process highly efficient.
From an architecture and design standpoint, DevOps means agile methodologies, small multidisciplinary project teams, executive sponsorship to help get beyond sticky issues in a business culture, and a focus on microservices, pipelining, and optimizing application behavior. DevOps starts with people and process.
A DevOps Model
DevOps can be segmented into at least 13 constructs, so before you start looking for tactical wins, think ahead: identify your portfolio of needs and approach change in a structured way that minimizes disruption and risk.
While many aspects of DevOps are tied to methodology, organization, architecture, and design, the current thinking in DevOps is straight-through processing (STP) across the lifecycle of the application. This is driving DevOps into the real-time realm. DevOps thinking is application-first, or application-centric: DevOps covers both pipeline activities (moving from design through development to deployment) and application management (managing the production application against its goals and objectives).
The opportunity, as well as the challenge, is that DevOps is largely a collection of iterative tasks when it comes to both the pipeline and application management. Continuity is provided by a reengineering feedback loop.
However, there is a distinct flow in the pipeline that lends itself to automation. Activities like builds, integrations, tests, and delivery are frequently scripted to automate work within an activity. Decision criteria (rules) can serve as gates that control next-best actions, providing a way to integrate between activities and to help address overall governance requirements.
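As a minimal sketch of the rules-as-gates idea, consider a function that decides the next-best action for a pipeline stage. The stage names, metric keys, and thresholds here are hypothetical, not taken from any particular CI/CD tool:

```python
# Hypothetical rules-based pipeline gate. Stage names, metric keys, and
# thresholds are illustrative assumptions, not from a specific CI/CD tool.

def next_action(stage: str, metrics: dict) -> str:
    """Decide the next-best action for a pipeline stage based on gate rules."""
    # Each stage advances only when its decision criteria are met.
    gates = {
        "build":   lambda m: m.get("compile_errors", 1) == 0,
        "test":    lambda m: m.get("pass_rate", 0.0) >= 0.95,
        "deliver": lambda m: m.get("security_findings", 1) == 0,
    }
    passed = gates[stage](metrics)
    # Failing a gate routes the work into the reengineering feedback loop.
    return "promote" if passed else "reengineer"

print(next_action("test", {"pass_rate": 0.98}))  # promote
print(next_action("test", {"pass_rate": 0.60}))  # reengineer
```

Encoding gates as explicit rules like this is what makes them usable both as integration points between activities and as auditable governance controls.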
That said, there is enough fragmentation within the DevOps market that a heterogeneous, abstracted, and integrated pipeline solution remains a work in progress. The good news is that vendors are actively working on this issue.
Real-Time Application Management
Adding application management into the mix creates a new and different set of opportunities and challenges. Application monitoring and management are key activities for production applications. This puts a premium on real-time continuous data collection in order to provide a foundation for application monitoring and management.
This is part of the agenda for application and network performance management tools (APM/NPM), but it now extends to all applications and infrastructure. Vendors now specialize in collecting log data and applying analytics to proactively spot problems before they happen, recommend potential solutions, and support root cause analysis. While most of these activities center on long-running applications, the same techniques can also be used to dynamically address the performance of scheduled applications.
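To make the log-analytics idea concrete, here is an illustrative sketch (not any vendor's product) that flags anomalous error rates in a log-derived time series using a simple z-score against a baseline window; the baseline size and threshold are assumptions:

```python
# Illustrative anomaly detection over a log-derived error-rate series.
# The baseline window and z-score threshold are assumptions for the sketch.
from statistics import mean, stdev

def anomalies(error_rates, threshold=3.0):
    """Return indices where the error rate deviates strongly from baseline."""
    baseline = error_rates[:10]  # assume the first 10 samples are "normal"
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, r in enumerate(error_rates)
            if sigma > 0 and abs(r - mu) / sigma > threshold]

rates = [0.01, 0.012, 0.011, 0.009, 0.01, 0.013, 0.011, 0.01, 0.012, 0.011,
         0.01, 0.25, 0.012]
print(anomalies(rates))  # [11]: the spike stands out against the baseline
```

Real tooling replaces the static baseline with rolling windows and richer models, but the core move is the same: continuous data collection feeding a statistical test that surfaces problems before users report them.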
When it comes to modern application development and deployment, microservices, containerization, and immutable infrastructure rule. The dynamic scalability of virtual machines and containers is a lightweight form of optimization that works in real time, spinning resources up or down to match workload demands. It therefore won't take long for more classic optimization techniques, which rely on defined objectives (typically cost or performance), to be layered on top to optimize application behavior across multi-cloud environments.
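The lightweight scaling described above can be sketched as a threshold policy. This is a minimal illustration assuming CPU utilization as the signal; the bounds and thresholds are hypothetical, not from any specific orchestrator:

```python
# Minimal threshold-based dynamic scaling sketch. CPU utilization as the
# signal, and all bounds/thresholds, are assumptions for illustration.

def desired_replicas(current: int, cpu_util: float,
                     low=0.30, high=0.70, min_r=1, max_r=10) -> int:
    """Scale out when utilization is high, scale in when it is low."""
    if cpu_util > high:
        return min(current + 1, max_r)  # spin resources up under load
    if cpu_util < low:
        return max(current - 1, min_r)  # spin resources down when idle
    return current  # within the comfort band: no change

print(desired_replicas(3, 0.85))  # 4: scale out
print(desired_replicas(3, 0.10))  # 2: scale in
```

The "more classic" optimization the paragraph anticipates would replace the fixed thresholds with an objective function (cost, latency) evaluated across multi-cloud placement options.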
Don’t Block the Process
We live in a very exciting time when it comes to IT. Real-time dimensionality is popping up everywhere, especially in DevOps. The goal is to allow pipeline and application management activities to proceed continuously in a non-blocking way, so that applications are never waiting for resources and resources are never waiting for applications.
This means that all next-best actions happen immediately, and rules that introduce latency exist only when it is to the enterprise's advantage. A reengineering activity closes the loop when exception handling is required, such as when a gate cannot be passed or an SLA cannot be achieved.
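A hedged sketch of this non-blocking, close-the-loop behavior: events that pass their gate and meet their SLA proceed immediately, while exceptions are routed to a reengineering queue rather than blocking the flow. The event shape and queue are hypothetical:

```python
# Sketch of non-blocking exception handling. The event fields and the
# reengineering queue are hypothetical constructs for illustration.
from collections import deque

reengineering_queue = deque()

def handle(event: dict) -> str:
    """Process a pipeline or production event without blocking the flow."""
    if event.get("gate_passed") and event.get("sla_met"):
        return "proceed"  # next-best action happens immediately
    # Close the loop: exceptions go to reengineering, not to a wait state.
    reengineering_queue.append(event)
    return "reengineer"

print(handle({"gate_passed": True, "sla_met": True}))   # proceed
print(handle({"gate_passed": True, "sla_met": False}))  # reengineer
```

The key design choice is that the main flow never stalls: failed items change lanes into the feedback loop instead of holding resources hostage.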
For more information on this topic, see my video blog: The DevOps Reference Model: the First Three Rules.