Learning to Trust in the Black Box with Observability

If your organization spends more time scratching its collective head about data quality or operational issues than it does asking unique and interesting questions, then you’re not just wasting time, you’re actively eroding your company’s future.

The problem isn’t just how much data your organization manages today or will likely manage in the future. Nor is it the stability or resiliency of the systems and applications that create, curate, and manage all that data. Observability can help.

Why? The problem is with the data itself and with a deceptively simple question: Can you trust it? For many organizations, the answer is a resounding no. According to a study by HFS Research, “75 percent of business executives do not have a high level of trust in their data, and 70 percent do not consider their data architecture to be world class.” That lack of trust compounds: if your people don’t trust your data, they won’t trust any of the insights it produces about their business, which affects everything from the reliability of results to application performance.

Whether your organization manages a few hundred gigabytes of data or multiple petabytes, you need to start building bundles of systems, tools, and philosophies that will help your people understand, manage, and improve the health of your data. One of those bundles is the growing world of data observability.

The state of data observability

In an interview with our editor-in-chief Salvatore Salamone, Rohit Choudhary, the founder & CEO at Acceldata.io, described data observability as an approach that “helps modern data enterprises deal with the complexity that so many different data systems bring into the enterprise. It gives them the ability to control the quality, reliability, and performance of their overall data and data systems.”

Evgeny Shulman, Co-Founder and CTO of Databand.ai, says: “Data observability goes deeper than monitoring by adding more context to system metrics, providing a deeper view of system operations, and indicating whether engineers need to step in and apply a fix… observability tells you that its current state is associated with critical failures, and you need to intervene.”
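
To make Shulman’s distinction concrete, here is a minimal sketch of the idea: a plain monitor reports a bare number, while an observability check carries enough context about the dataset and pipeline stage to say whether an engineer needs to intervene. Every name here (`PipelineMetric`, `needs_intervention`) is hypothetical, invented for illustration rather than drawn from any particular product.

```python
from dataclasses import dataclass


@dataclass
class PipelineMetric:
    """A raw metric plus the context that makes it actionable."""
    name: str                    # e.g., "rows_ingested"
    value: float                 # the number a plain monitor would report
    dataset: str                 # which dataset produced it
    stage: str                   # which pipeline stage emitted it
    expected_range: tuple = (0.0, float("inf"))  # healthy bounds for this stage


def needs_intervention(metric: PipelineMetric) -> bool:
    """The added context turns a bare number into a decision: is this
    value a failure state an engineer needs to act on?"""
    low, high = metric.expected_range
    return not (low <= metric.value <= high)


# A zero-row load is just "0" on a monitoring chart; with context,
# it's a critical failure in a specific dataset and stage.
rows = PipelineMetric(name="rows_ingested", value=0, dataset="orders",
                      stage="daily_load", expected_range=(10_000, 500_000))
if needs_intervention(rows):
    print(f"{rows.dataset}/{rows.stage}: {rows.name}={rows.value} "
          f"outside expected {rows.expected_range}; time to step in")
```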

According to Choudhary, the data observability world is a direct descendant of the rapid deployment of IT applications between 2000 and 2010, followed by a wave of more data-intensive applications between 2010 and 2015. Now that many applications have been collecting data for a decade or more, organizations need new operational and analytical techniques to understand the vastness of what they’ve already gathered—and prepare their pipelines for what’s next.

The industry around data observability is snowballing, too. There are already established players, like Monte Carlo and Bigeye, with a bevy of new startups like Cribl, Acceldata, Databand, Datafold, and Soda, some of which also have or support open source tools. They’re each trying to address the “black box” feeling of complex data pipelines and architectures capable of moving data but not capable of being monitored.

Overcoming the cultural and philosophical gotchas

While many organizations could simply get in touch with one of these data observability providers and spin up a new suite of tools, your goal isn’t to adopt new tools just because everyone else is. In our conversations with observability experts, we’ve uncovered a few misguided assumptions that have left other organizations able to look into the black box that is their data pipeline, but with no way to “translate” what they see for the rest of the organization.

  • There is no end goal to data. In the past, data was collected as a prerequisite for a specific question an organization wanted to ask. For example, marketing teams use tools like Google Analytics to understand the demographics of those who visit their website. But now, our capacity to derive new questions from old data is unprecedented, hidden beneath layers of data that need to be correlated. Your data observability practice should account for the fact that no dataset is “complete”—it’s just waiting for new questions to deliver new insights.
  • There is no end goal to the quality of your data. Choudhary says that in the past, data quality efforts “used to be like a centralized, once-a-year objective run by the CTO’s office.” That’s now completely changed as the trust in data quality erodes and the speed of analysis increases. Data quality is becoming a real-time concern, the kind of metric that executives might want to see splashed on a monitor in the office or in an easily accessible dashboard.
  • Bad data is everyone’s problem. And that’s not to lay all the blame on the DataOps or data science teams—it’s a reminder that these days, nearly everyone in an organization is running data-driven analysis, not just the data experts.
  • Downtime is more than inaccessible data. In observability, we’re used to thinking of downtime as anything that affects customers or end-users, but data downtime describes any time your data is “partial, erroneous, missing, or otherwise inaccurate.” The data might still be accessible, but its state is actively wearing away that already-fickle trust (a minimal check along these lines is sketched just after this list).
  • APM ≠ data observability. If you thought you could adapt your existing application performance monitoring (APM) solutions to data observability, you’d probably be disappointed. When it comes to data pipelines, you need to know more than whether an application is running. If a dataset doesn’t arrive when you expect it to, you need to trace back the lineage of that data to understand which step went wrong, which isn’t something endpoint monitoring tools are capable of (see the lineage sketch below).
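
To ground the “data downtime” bullet above, here is a minimal sketch of a check that treats reachable-but-wrong data as downtime: the table loads fine, but the check asks whether its contents are partial, missing values, or stale. The pandas input, the function name, and every threshold are assumptions for illustration, not any vendor’s API.

```python
from datetime import timedelta

import pandas as pd


def data_downtime_issues(df: pd.DataFrame, expected_min_rows: int,
                         timestamp_col: str,
                         max_null_fraction: float = 0.01,
                         max_staleness: timedelta = timedelta(hours=24)) -> list[str]:
    """Flag 'downtime' even when the data is accessible: partial loads,
    missing values, and stale records all erode trust."""
    issues = []
    if len(df) < expected_min_rows:                          # partial
        issues.append(f"partial: {len(df)} rows, expected >= {expected_min_rows}")
    worst_null = df.isna().mean().max()                      # missing/erroneous values
    if worst_null > max_null_fraction:
        issues.append(f"missing: worst column is {worst_null:.1%} null")
    newest = pd.to_datetime(df[timestamp_col], utc=True).max()
    if pd.Timestamp.now(tz="UTC") - newest > max_staleness:  # stale
        issues.append(f"stale: newest record at {newest}")
    return issues


# A tiny, obviously unhealthy table: too few rows, a null ID, old timestamps.
df = pd.DataFrame({"order_id": [1, 2, None],
                   "ts": ["2024-01-01", "2024-01-02", "2024-01-02"]})
print(data_downtime_issues(df, expected_min_rows=100, timestamp_col="ts"))
```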
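
And for the APM bullet: the lineage trace it calls for is, at its simplest, a walk backwards through a dependency graph to the first upstream step that failed. The graph, statuses, and function below are invented for illustration; real observability tools build this graph automatically from pipeline metadata rather than by hand.

```python
# dataset -> the upstream datasets it is built from (hypothetical pipeline)
UPSTREAM = {
    "revenue_report": ["orders_clean"],
    "orders_clean": ["orders_raw"],
    "orders_raw": [],
}
# last-run status per step, as an observability tool might record it
STATUS = {"revenue_report": "late", "orders_clean": "late", "orders_raw": "failed"}


def first_failed_upstream(dataset: str) -> str | None:
    """Walk lineage depth-first and return the earliest failing step."""
    for parent in UPSTREAM.get(dataset, []):
        root_cause = first_failed_upstream(parent)
        if root_cause:
            return root_cause
    return dataset if STATUS.get(dataset) == "failed" else None


# The report is late, but the trace shows the raw ingest is what broke.
print(first_failed_upstream("revenue_report"))  # -> orders_raw
```

An endpoint monitor would only report that `revenue_report` missed its deadline; the lineage walk is what points an engineer at the step worth fixing.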

In the end, the goal should be to let your data engineers shine. If your organization spends more time scratching its collective head about data quality or operational issues than it does asking unique and interesting questions, then you’re not just wasting time—you’re actively eroding your company’s future. A data engineering team that’s free to work on complex-but-valuable problems will drive more value from the same information than your competition can. To get there, it first has to rebuild trust in the data it observes.


About Joel Hans

Joel Hans is a copywriter and technical content creator for open source, B2B, and SaaS companies at Commit Copy, bringing experience in infrastructure monitoring, time-series databases, blockchain, streaming analytics, and more. Find him on Twitter @joelhans.
