The Role of Data Observability When Data Is Strategic


Going further than the data quality ecosystem, data observability is becoming a crucial component of data landscapes: it accelerates data issue resolution, facilitates communication and collaboration between data practitioners, reinforces data stacks, and increases organizations’ competitiveness.

Over the last few decades, companies have shifted from strategies that rely on data to strategies in which data itself is the core. Companies that were “data-driven” now aim to evolve into data companies, where data is an asset that can directly generate revenue. To achieve this, these organizations must undergo several transformations and strengthen their data operations to scale the value data generates. Increasingly, data observability plays a key role in that effort.

Scaling data teams

To support the scaling of data operations, organizations are building up data teams in which specialized roles have emerged. This transformation is comparable to how IT teams evolved in the 1950s, when companies created dedicated positions to maximize the value created by computing. We now find similar segmentation in modern data teams: data engineers focus on identifying, extracting, and transforming data; data scientists design and maintain the models that produce business recommendations; analysts build reports that give business stakeholders insights and visibility; and so on.

Consequently, roles such as data engineer sit one or several steps away from direct communication with final users and lose the connection with their requirements. At the same time, data scientists and analysts are distant from the data sources and lose sight of some technical aspects of collecting and transforming the data. Over time, as projects pile up and team sizes grow, silos appear: end-to-end visibility shrinks for the different stakeholders, knowledge fragments, and responsibility scatters.

See also: Data Observability: A Modern Solution For High Data Volume

The increasing number of data issues

Scaling the value generated from data has also increased the number of data issues these teams must deal with, for varying reasons. For instance, human errors, such as deleting a column and leaving the data incomplete, now have a much more significant impact, and their propagation is harder to control. Regulatory changes (e.g., biometric information can no longer be recorded at the CRM level) may also require modifications to how data is collected and processed, which becomes an important source of unanticipated problems.

In other cases, a specific business need (e.g., a redefinition of customer categories) might require changing how the data is configured. While the change might seem inconsequential at first, its impact on the other reports and models that rely on the same data source grows as the environment grows.

The combination of siloed data teams and a rising number of data issues has catastrophic consequences for organizations. While team members struggle and waste time working out where problems come from, who is responsible for them, and how to fix them, business stakeholders make wrong decisions, the customer experience suffers, and the organization loses revenue.

The shortcomings of data quality solutions

However, decades of using data quality solutions have shown that they provide only a partial answer to this challenge. Data quality solutions are designed to scan data at scheduled intervals and indicate whether it meets users’ requirements at certain stages of the data value chain, but they do not provide the contextual insight needed to understand where data issues come from or how the data team could handle them in a timely manner. So whenever a problem is detected, data teams are left to work out when, where, and how the issue happened, which can mean days of root-cause analysis and troubleshooting.
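To make the limitation concrete, here is a minimal sketch of the kind of scheduled check described above. The function name and data are hypothetical, not a specific product’s API: the check runs on a timer and returns only pass/fail, with no context about where the failure originated or who should act on it.

```python
# Hypothetical sketch of a classic scheduled data quality check: it yields
# only a boolean verdict, leaving root-cause analysis entirely to the team.
def nightly_null_check(rows: list[dict], column: str, max_null_ratio: float) -> bool:
    """Return True if the share of nulls in `column` stays under the limit."""
    nulls = sum(1 for r in rows if r.get(column) is None)
    ratio = nulls / len(rows) if rows else 0.0
    return ratio <= max_null_ratio

rows = [{"email": "a@x.io"}, {"email": None}, {"email": None}]
print(nightly_null_check(rows, "email", 0.1))  # False: 2/3 nulls exceed 10%
```

When this prints `False` the morning after a nightly run, the team knows only that something is wrong somewhere, which is exactly the gap data observability aims to close.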

In the long term, frustration arises within data teams and their productivity decreases. At the same time, business stakeholders, affected by the long time required to resolve data issues, lose trust in the data’s reliability, and the ROI of data quality solutions sinks, given the resources needed to set them up and maintain them.

See also: Making the Case for a Small Data Observability Strategy

3 ways data observability solves data issues

Data observability provides data teams with insight into where issues come from and who is responsible for them. It has three main characteristics.

  1. Real-time data analysis, so there is no lag between monitoring and usage. Data teams can identify issues as they happen, reducing time to detection and preventing data users from facing issues before the producers even know they exist. 
  2. Contextual information about data issues (e.g., application, owner) to accelerate resolution time.
  3. Continuous data validation, as data observability is part of the development lifecycle through to production. This perceptibly improves the trustworthiness of applications and prevents data incidents, decreasing the total cost of ownership.
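The three characteristics above can be sketched in a few lines. This is an illustrative toy, not any vendor’s API: all names (`Observation`, `validate`, the owner and application fields) are assumptions. The check runs at write time rather than on a schedule, and every observation carries the context needed to route a failure to the right person.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Observation:
    """A data check result paired with the context needed to resolve it."""
    dataset: str
    metric: str
    value: float
    threshold: float
    owner: str        # who to contact when the check fails (characteristic 2)
    application: str  # which pipeline produced the data (characteristic 2)
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def healthy(self) -> bool:
        return self.value >= self.threshold

def validate(rows: list[dict], required: str, owner: str, app: str) -> Observation:
    """Emit an observation as the data is produced, not on a nightly scan
    (characteristic 1), as part of the pipeline itself (characteristic 3)."""
    complete = sum(1 for r in rows if r.get(required) is not None)
    ratio = complete / len(rows) if rows else 0.0
    return Observation("orders", f"completeness:{required}", ratio, 0.99, owner, app)

obs = validate([{"customer_id": 1}, {"customer_id": None}], "customer_id",
               owner="data-eng@acme.example", app="orders-ingest")
print(obs.healthy)  # False: only 50% complete, below the 0.99 threshold
```

The point of the sketch is the payload: when `healthy` is `False`, the observation already names the dataset, the producing application, and the owner, so resolution starts immediately instead of with an investigation.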

In addition to facilitating data issue management across the data landscape, data observability also improves communication within data teams. It provides information that simplifies exchanges between data producers and users, who can clearly understand where the data comes from and how it is used down the data value chain. It also facilitates the definition of SLAs at a granular level, breaking silos and reinforcing a culture of accountability where roles and responsibilities are clearly defined.

Automatically updating the data catalog

In parallel, data observability can provide insights that complement data catalog capabilities. Especially in complex data environments, the adoption and maintenance of the data catalog are among the main challenges for data management departments and CDOs. The information automatically collected by a data observability platform can be continuously synchronized with the data catalog, lowering its maintenance costs and improving its accuracy, so users can be confident their decisions rest on reliable data.
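A minimal sketch of that synchronization, with an in-memory dictionary standing in for the catalog store (the field names and the `sync_observation` helper are illustrative, not a specific catalog’s API): observed facts such as freshness and schema overwrite stale entries, while curated fields are left untouched.

```python
# Hypothetical sketch: merging freshly observed metadata into a catalog
# entry so the catalog stays current without manual curation.
catalog = {
    "orders": {"owner": "data-eng", "last_profiled": "2023-01-01", "columns": []},
}

def sync_observation(catalog: dict, dataset: str, observed: dict) -> None:
    """Upsert observed facts into the catalog entry for `dataset`."""
    entry = catalog.setdefault(dataset, {})
    # Observed facts (schema, freshness) override stale values; curated
    # fields not present in `observed` (e.g., owner) are preserved.
    entry.update(observed)

sync_observation(catalog, "orders", {
    "last_profiled": "2023-06-15",
    "columns": ["order_id", "customer_id", "amount"],
})
print(catalog["orders"]["last_profiled"])  # 2023-06-15
```

Because the update flows from observed pipeline runs rather than from people remembering to edit entries, the catalog’s freshness tracks the pipelines themselves.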

In today’s data management landscape, data observability is critical for companies that put data at the core of their strategy. Going further than the data quality ecosystem, it is becoming a crucial component of data landscapes: it accelerates data issue resolution, facilitates communication and collaboration between data practitioners, reinforces data stacks, and increases organizations’ competitiveness.


About Andy Petrella

Andy Petrella is the author of the first O’Reilly book about Data Observability: “Fundamentals of Data Observability,” and CPO and founder of Kensu, a data observability solution provider that helps data teams trust what they deliver and create more value from data.
