A data management strategy that ensures data quality and governance will empower a business environment that can successfully achieve and even surpass business goals – from improving customer and employee experiences to increasing revenue and everything in between.
The old adage that a chain is only as strong as its weakest link applies to the effectiveness of a business strategy as it relates to data quality. A business strategy, in other words, is easily derailed by failing to give data quality its due. A good data management strategy can help.
Why? By ensuring data quality, a business guarantees that data is fit for its intended purpose and trusted by users in the context of existing business operations, analytics, and emerging business scenarios. In the case of customer data, a data-driven strategy that amplifies analytics and produces deeper customer insights will result in the type of personalized and highly relevant customer experiences that drive revenue and increase customer loyalty and lifetime value.
Conversely, an approach that skimps on data quality will likely introduce friction into a typical customer journey by engaging a customer with an irrelevant offer, message, or piece of content, or by failing to move with the customer in the cadence of a customer journey. Friction has consequences. For example, research from Experian found that the average company loses 12% of its revenue to poor data quality, not including millions in sunk costs from marketing spend wasted on unnecessary targets.
A Perfected Data Roadmap
Putting data quality at the heart of a business strategy encompasses several components: identity resolution, aggregations, enhancements, persistent keys, scale, and latency. Each is paramount to ensuring data is fit for purpose and provides both marketers and business users with a single, accurate source of truth about a customer.
With a single source of truth, every user has access to the same customer record: an updated, accurate profile created by connecting all types and sources of data (batch, streaming, structured, or unstructured) at high speed and scale. An effective data management technology will have a high-performing data integration capability that combines hundreds of data sources and billions of rows of data in a single project, reducing complexity and making integration efforts easier to maintain. Universal access through a single point of operational control, combined with real-time capabilities and machine learning models, ensures businesses have a clear roadmap for optimizing business goals.
The roadmap begins with data quality. Identity resolution is the process of analyzing, deduplicating, and relating customer records in a precise way to build out accurate, updated, and unified customer profiles, also known as a Customer 360 or a Golden Record. A Golden Record that is accessible to all business users is the key to engaging with an individual with contextual relevance across channels and interaction touchpoints.
Identity resolution is an ongoing process that continually enriches customer data with online and offline transactional and behavioral data, as well as other demographic overlays. It provides users with confidence they are engaging with the right individual, household, or other entity as the target moves through a customer journey. Done correctly, identity resolution finds, cleanses, matches, merges, and relates every disparate signal about a customer to produce the accurate, complete, and up-to-date view of the customer needed to deliver a personalized customer experience.
Identity resolution produces a full contact graph of an individual or entity that includes all known IDs, devices, addresses, emails, nicknames, etc. The ensuing single customer view, aka Golden Record, combines a contact graph with all transactions, aggregates, behavioral and preference data to provide users with everything there is to know about a customer.
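As a rough illustration of the find-match-merge loop described above, the sketch below deduplicates raw records into golden records. The matching rule (exact email or fuzzy name similarity) and the field names are invented for the example; production identity resolution uses far richer match logic.

```python
from difflib import SequenceMatcher

# Hypothetical raw records for the same person arriving from two source systems.
records = [
    {"name": "Jon Smith",  "email": "jon.smith@example.com", "phone": None},
    {"name": "John Smith", "email": "jon.smith@example.com", "phone": "555-0100"},
    {"name": "Jane Doe",   "email": "jane.doe@example.com",  "phone": None},
]

def is_match(a, b, threshold=0.85):
    """Illustrative match rule: exact email, or fuzzy name similarity."""
    if a["email"] and a["email"] == b["email"]:
        return True
    return SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio() >= threshold

def resolve(records):
    """Cluster matching records and merge each cluster into a golden record."""
    golden = []
    for rec in records:
        for g in golden:
            if is_match(rec, g):
                # Merge: fill in any fields the golden record is still missing.
                for k, v in rec.items():
                    g[k] = g[k] or v
                break
        else:
            golden.append(dict(rec))
    return golden

profiles = resolve(records)
# Two golden records survive: one merged "Jon/John Smith" profile, one "Jane Doe".
```

The merge step here simply fills gaps; a real pipeline would also apply survivorship rules (most recent value, most trusted source) when fields conflict.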
Consistency and Persistent Keys
Persistent key management is an integral data quality component that provides a contextual understanding of a customer, and a customer’s journey, over time. Persistent keys attach identifiers across multiple data sources and signals to an existing unique master record. Probabilistic matching, a subset of identity resolution, depends on persistent key management: if a new unique ID were created every time a new data element was introduced, or with every operational update such as a nightly batch run, there would be no way to reconcile signals across a multitude of data sources and fields.
Advanced identity resolution using persistent keys is what enables data to be sourced from every conceivable source and then reconciled to a master record. Ultimately, persistent key management allows business users to maintain a consistent view of the customer as the customer moves through various life stages, accounting for marriages, separations, relocations, the birth of children, job changes, new email addresses, new devices, and so on.
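A minimal sketch of the idea: a key store hands back the same master ID for the same entity on every update, rather than minting a new one. The match signature here (lower-cased email) is a deliberately simple stand-in for real identity-resolution logic.

```python
import uuid

# Hypothetical persistent key store: match signature -> stable master ID.
key_store = {}

def persistent_key(record):
    """Return the existing master ID for this entity, or mint one exactly once."""
    signature = record["email"].lower()
    if signature not in key_store:
        key_store[signature] = str(uuid.uuid4())
    return key_store[signature]

# A nightly batch update for the same customer reuses the same key,
# so downstream systems can reconcile the two signals to one master record.
k1 = persistent_key({"email": "Jon.Smith@example.com", "city": "Austin"})
k2 = persistent_key({"email": "jon.smith@example.com", "city": "Dallas"})
assert k1 == k2
```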
Data Quality at Ingestion
Another important factor in making sure that enterprise data is fit for business purpose is to complete data hygiene and data transformation tasks at the point raw data is ingested – confirming, for example, that even if two pieces of data are identically labeled, they indeed mean the same thing. Perfecting first-party data that resides within an organization should be completed prior to any subsequent data enrichment steps.
Data quality, in other words, should never be outsourced to a third party. Yet outsourcing it is a common approach among companies that have yet to recognize the tremendous value of the first-party data residing within their own organizations. They instead rely on third-party reference files for data enrichment. The issue, of course, is that matching customer data to a reference file that may be days, weeks, or months old will produce a record that is likely incomplete, inaccurate, or outdated, and thus impossible to trust for delivering a personalized, relevant experience in the precise cadence of a customer journey. Cleansing, normalizing, and standardizing raw data so it meets business needs in form and quality, before any subsequent enrichment, prevents inaccurate records from entering the downstream matching process.
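The cleansing-at-ingestion step might look like the sketch below, which standardizes name, email, and phone fields before any enrichment or matching runs. The field names and rules are illustrative, not a specific product’s schema.

```python
import re

def cleanse(raw):
    """Standardize a raw record at ingestion, before enrichment or matching."""
    rec = {}
    # Collapse repeated whitespace and normalize name casing.
    rec["name"] = " ".join(raw.get("name", "").split()).title()
    # Emails compare reliably only after trimming and lower-casing.
    rec["email"] = raw.get("email", "").strip().lower() or None
    # Keep digits only, then format 10-digit US-style numbers.
    digits = re.sub(r"\D", "", raw.get("phone", ""))
    rec["phone"] = (f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
                    if len(digits) == 10 else digits or None)
    return rec

cleaned = cleanse({"name": "  jon   SMITH ",
                   "email": " Jon.Smith@Example.COM ",
                   "phone": "512 555 0100"})
# -> {"name": "Jon Smith", "email": "jon.smith@example.com", "phone": "(512) 555-0100"}
```

Running the same normalization on every inbound source is what makes two identically labeled fields actually comparable downstream.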
A Data Governance Framework
Data governance should also not be overlooked as an important component of data management and data quality. Though the two terms are sometimes used interchangeably, there are important differences. If data quality, as we’ve seen, is about making sure that all data owned by an organization is complete, accurate, and ready for business use, data governance, by contrast, is about creating the framework and rules by which an organization will use the data.
The main purpose of data governance is to ensure the necessary data informs crucial business functions. It is a continuous process of assessing, often through a data steward, whether data that has been cleansed, matched, merged, and made ready for business use is truly fit for its intended purpose. Data governance rests on a steady supply of high-quality data, with frameworks for security, privacy, permissions, access, and other operational concerns.
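One way a data steward might automate part of that ongoing fitness-for-purpose assessment is a rules check like the sketch below. The rule set, field names, and 90% completeness threshold are all hypothetical.

```python
# Hypothetical governance rules: which fields a use case may access, and
# the minimum completeness required before data is declared fit for purpose.
RULES = {
    "marketing": {"allowed_fields": {"name", "email"}, "min_completeness": 0.9},
}

def fit_for_purpose(records, use_case):
    """Check completeness of the allowed fields for a given use case."""
    rule = RULES[use_case]
    fields = rule["allowed_fields"]
    filled = sum(1 for r in records for f in fields if r.get(f))
    completeness = filled / (len(records) * len(fields))
    return completeness >= rule["min_completeness"]

records = [
    {"name": "Jon Smith", "email": "jon.smith@example.com"},
    {"name": "Jane Doe",  "email": None},
]
# 3 of 4 allowed-field slots are filled: 0.75 completeness, below the 0.9 bar,
# so the steward's check flags this data as not yet fit for the marketing use case.
ok = fit_for_purpose(records, "marketing")
```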
A data management strategy that encompasses the elements described above with respect to data quality will empower a business environment that can successfully achieve and even surpass business goals – from improving customer and employee experiences to increasing revenue and everything in between.