We’re in the midst of a monitoring revolution, which will probably continue to play out over the next decade as newer and better tools and methodologies emerge.
Once upon a time, monitoring was a pretty boring part of IT operations. You collected data, received alerts when metrics crossed a preset (and usually statically defined) threshold, then took action.
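To make that classic model concrete, here’s a minimal sketch (the metric names and thresholds are hypothetical, not from any particular tool) of what static-threshold alerting boils down to:

```python
def check_thresholds(metrics, thresholds):
    """Return an alert string for each metric above its static threshold."""
    return [
        f"ALERT: {name}={value} exceeds {thresholds[name]}"
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    ]

# Hypothetical readings and statically defined limits
alerts = check_thresholds(
    {"cpu_percent": 93.0, "mem_percent": 61.0},
    {"cpu_percent": 90.0, "mem_percent": 80.0},
)
print(alerts)  # only cpu_percent crossed its threshold
```

The thresholds never adapt to context; that rigidity is exactly what the newer approaches below try to move past.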
Over the past several years, however, several new trends have transformed the landscape surrounding monitoring. Let’s take a look at four such trends and what they mean for monitoring strategies and tools today.
From monitoring to observability
The biggest change is the shift from mere monitoring to observability. In many ways, this is a conceptual and cultural change rather than a technical one, although it also involves new types of tools.
There is some ambiguity regarding what, exactly, differentiates monitoring from observability. But in general, you could sum it up by saying that monitoring focuses on finding out what is happening with a system, whereas observability focuses on the why.
When you monitor, you collect data and generate alerts. When you observe, you analyze data to understand why alerts happen. Most folks would tell you that monitoring is one component of observability, but that observability goes much further.
Automated response with AIOps

For IT teams, collecting and interpreting data is only half the battle. Taking action in response to problems is the other, equally critical challenge.
AIOps – artificial intelligence for IT operations – helps teams respond more effectively by automating some of the workflows required both to evaluate and remediate problems. AIOps has its limitations – it can’t automatically solve every problem under the sun – but as a complement to monitoring, it can assess and fix many types of issues more quickly than humans could hope to.
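A real AIOps platform uses machine learning to decide which action fits which problem; the toy, rule-based sketch below (all alert types and actions are invented for illustration) only shows the shape of the idea – automate the fixes you can, escalate the rest:

```python
# Hypothetical remediation playbook mapping alert types to automated actions.
PLAYBOOK = {
    "disk_full": lambda host: f"pruned old logs on {host}",
    "service_down": lambda host: f"restarted failed service on {host}",
}

def remediate(alert):
    """Try an automated fix; escalate to a human when no rule matches."""
    action = PLAYBOOK.get(alert["type"])
    if action is None:
        return f"escalated {alert['type']} on {alert['host']} to on-call"
    return action(alert["host"])

print(remediate({"type": "disk_full", "host": "web-01"}))
print(remediate({"type": "kernel_panic", "host": "db-02"}))
```

Even this crude version captures the payoff: the common, well-understood failures get fixed in milliseconds, and humans only see the genuinely novel ones.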
Ancillary data sources
In the past, monitoring routines focused on pretty simple and unimaginative types of data. Mostly, it was metrics like CPU and memory consumption.
That has changed today. Not only has observability brought logs and traces to the fore – which, along with metrics, comprise the so-called three pillars of observability – but teams are now also tracking metrics from systems and processes that they may not think of as IT resources in the conventional sense. For example, they may track metrics from their continuous delivery pipelines to measure the performance of their software delivery operations, or use data about help desk response times and ticket types to contextualize data from technical systems.
The goal is to gain broader, more holistic insight into systems, as well as to align IT outcomes more closely with business outcomes. When you can understand not just what is happening in a technical sense (which you do with logs, traces, and metrics) but also in a business sense (which you do with data like how many support tickets your customers submit or how often an application update fails), you take technical data out of a silo and use it to drive what ultimately matters most – business success.
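As a small illustration of treating a delivery pipeline as a monitored system, the sketch below derives two common health signals – failure rate and average run time – from hypothetical pipeline run records (the field names and values are invented):

```python
from collections import Counter

# Hypothetical records of recent continuous delivery pipeline runs
runs = [
    {"pipeline": "deploy-prod", "status": "success", "duration_s": 312},
    {"pipeline": "deploy-prod", "status": "failed",  "duration_s": 488},
    {"pipeline": "deploy-prod", "status": "success", "duration_s": 295},
    {"pipeline": "deploy-prod", "status": "success", "duration_s": 301},
]

statuses = Counter(run["status"] for run in runs)
failure_rate = statuses["failed"] / len(runs)
avg_duration = sum(run["duration_s"] for run in runs) / len(runs)
print(f"change failure rate: {failure_rate:.0%}, avg run: {avg_duration:.0f}s")
```

Numbers like these sit alongside CPU and memory metrics on a dashboard, but they speak directly to how reliably the business can ship software.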
Standardized telemetry data
Getting telemetry data (meaning logs, traces, and metrics) into monitoring software can be a pain if every monitoring tool collects it in a different way and every system you want to monitor has to be instrumented separately for each tool’s bespoke process.
That’s why projects like OpenTelemetry are promoting open, community-defined tools for collecting and exporting data. When you use monitoring tools that support OpenTelemetry, and you configure your systems to support the OpenTelemetry standards, you can easily switch between monitoring tools — or use multiple tools at once — without having to re-instrument your systems for each specific tool.
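The decoupling this buys you can be shown with a toy sketch – note this is not the real OpenTelemetry API, just an invented stand-in for the pattern: application code is instrumented once against a neutral interface, and only the exporter changes when you switch monitoring backends.

```python
class ConsoleExporter:
    def export(self, span):
        return f"[console] {span['name']} took {span['duration_ms']}ms"

class VendorExporter:
    def export(self, span):
        # A real exporter would ship the span to the vendor's backend here.
        return f"[vendor] {span['name']} took {span['duration_ms']}ms"

def handle_request(exporter):
    """Instrumented once; works with any exporter that has .export()."""
    span = {"name": "handle-request", "duration_ms": 42}
    return exporter.export(span)

print(handle_request(ConsoleExporter()))
print(handle_request(VendorExporter()))  # swap backends, no re-instrumentation
```

Swapping `ConsoleExporter` for `VendorExporter` changes where the data goes without touching `handle_request` – which is precisely the portability the OpenTelemetry standards aim to provide.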
A monitoring transformation
In just a few years, the monitoring landscape has shifted from one defined by relatively simplistic metrics-collection routines to one where teams take a more holistic approach to understanding what is happening within the complex systems they need to manage. The standardization of data collection has helped, as has the automation enabled by AIOps.
And what we’re seeing today is likely just the beginning. The monitoring revolution is still underway, and it will keep playing out over the next decade as newer and better tools and methodologies emerge.