Fast-track M&A Synergies Through AI-Grade Data Fitness


M&A is often viewed as a time for companies to change, but that energy should be extended to their data environments, as well. It’s an opportunity to commit to a set of data-quality, governance, and management practices that will lay the foundations for the modern, successful AI-driven business.

While M&A has seen a slower-than-expected start this year, Goldman Sachs recently reported that activity is picking up. However, with three-quarters of deals historically failing to deliver the anticipated synergies, and data environments growing more complex every day, businesses will face tremendous pressure to execute integrations without compromising deal ROI.

Unlike the past few years, these deals will happen against a rapidly evolving backdrop of Agentic AI and GenAI. In normal times, the success of AI and GenAI depends on feeding models large volumes of high-quality data. Yet the majority of projects today are stuck in the starting blocks, mainly because of a lack of the “right” data.

Combining companies’ operations through mergers or takeovers will only amplify this problem, with technology chiefs under pressure to process new data types and unlock fresh sources of information while inheriting a mix of unintegrated data silos, mismatched data naming and storage conventions, and misaligned data policies.

Overcoming these challenges while hitting ambitious business targets and satisfying new regulatory commitments means getting the combined operations’ data infrastructure match-fit for AI as well as core operations. The question is how to achieve that without an IT investment that dents the deal’s ROI.

Managing the “urge to merge”

Business and finance have traditionally been the engine of M&A, with technology and integration taking a back seat – to the detriment of many deals.

According to 2024’s The M&A Failure Trap, 75 percent of deals studied in the past 40 years failed to deliver on sales growth, achieve anticipated cost savings, or realize share price objectives. A key reason was the overriding business pressure to complete the deal—something authors Baruch Lev and Feng Gu call the “urge to merge.”

This should set alarm bells ringing in a world where technology is integral to business and AI is seen as a growth opportunity.

However, with an estimated 80 percent of AI projects considered failures, what lessons can be learned from them?

It helps to look at the scenario we’re seeing play out on AI projects:

  • Systems that don’t deliver the anticipated outcomes. Models have produced inaccurate, misleading, generalized, or biased results. They fail to spot patterns and identify trends that could help the business and improve customer engagement. Some systems have even made decisions that have lost companies money and damaged reputations.
  • AI systems that have exposed their owners to risk. Data that contains protected information – such as individuals’ personal details – or is owned by others has been used to train systems, potentially leading to compliance breaches and opening the door to cyber threats.

The common factor is the use of unknown, sensitive, or poor-quality data. Much of the data flowing through companies is “gray” – it’s dirty, lacking consistency and context. In other words, it’s out-of-date, inaccurate, incomplete, or compromised. Cleaning that data, making it consistent, and building a working understanding of it are the prerequisites for establishing quality and trust on a systematic basis for AI, and they take significant resources. Further, companies are struggling to feed AI systems with data at the volume or speed necessary.

As RAND puts it simply: many organizations lack the data and the infrastructure to manage AI projects. Addressing this for the combined operations should be a strategic consideration of M&A.

See also: The Imperative of Data Integration in Mergers and Acquisitions: A Strategic Blueprint

Out of the starting blocks

Getting AI to break out of isolated pockets of success and operate at scale means creating an environment where trust and confidence in data are a given and where data can be managed and processed reliably at scale. It also means achieving that without falling back on the kind of IT-driven approach that introduces costs, complexity, and delivery bottlenecks.

There’s a three-step approach to achieving this.

1) First, create a system that uses metadata to find, record, and classify data regardless of its location or type – whether it’s structured or unstructured, streaming or static. Establish this using tools and techniques that describe the data accurately, making it accessible to teams and turning it into a reusable business resource. Building the metadata-driven semantic layer right will speed and streamline the process of finding and working with data, regardless of the scale of the computing environment. Augment the approach with a hub-and-spoke model that gathers metadata from the newer tech stacks brought in by different acquisitions, or adopted as the organizations grow together, rather than rebuilding it anew.
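To make the idea concrete, here is a minimal, illustrative sketch of such a metadata catalog. It is not Pentaho-specific or tied to any product; all class and field names are hypothetical. It simply shows datasets of any type or location being registered as a “hub” entry with enough descriptive metadata to be found and classified later.

```python
# Minimal sketch of a metadata-driven catalog (hypothetical names throughout).
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class DatasetEntry:
    name: str                                    # business-friendly name
    location: str                                # URI, table, topic, or bucket path
    kind: Literal["structured", "unstructured"]  # broad data type
    mode: Literal["streaming", "static"]         # how the data arrives
    owner: str                                   # accountable team or steward
    tags: list[str] = field(default_factory=list)
    classification: str = "unclassified"         # e.g. public / internal / restricted

class MetadataCatalog:
    """Central 'hub' that spokes (per-acquisition stacks) publish entries into."""
    def __init__(self) -> None:
        self._entries: dict[str, DatasetEntry] = {}

    def register(self, entry: DatasetEntry) -> None:
        self._entries[entry.name] = entry

    def find(self, tag: str) -> list[DatasetEntry]:
        return [e for e in self._entries.values() if tag in e.tags]

# Example: a spoke from an acquired company registers its CRM export.
catalog = MetadataCatalog()
catalog.register(DatasetEntry(
    name="acquired_co_crm_contacts",
    location="s3://acquired-co/crm/contacts/",
    kind="structured",
    mode="static",
    owner="revenue-ops",
    tags=["customer", "crm"],
    classification="restricted",
))
print([e.location for e in catalog.find("customer")])
```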

2) To ensure the new environment is business-ready, simplify the experience for consumers, suppliers, and data governance owners alike. For consumers, provide business-vocabulary-driven standard terminology, measures, and metrics that make finding the right data simple. For suppliers, provide a no-code/low-code environment for maintaining data freshness and quality, with both continuously monitored. For data governance, make sure the ability to apply policies and standards covering life cycle, access, and regulatory requirements is built in and automated to the greatest extent possible.
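The monitoring side of this step can be expressed as simple, automated checks. The sketch below assumes two hypothetical policy thresholds (a 24-hour refresh window and 95 percent field completeness) purely for illustration; the point is that suppliers declare expectations once and the platform evaluates them continuously, rather than governance teams checking by hand.

```python
# Illustrative freshness and quality checks (hypothetical thresholds and names).
from datetime import datetime, timedelta, timezone

def check_freshness(last_updated: datetime, max_age_hours: int) -> bool:
    """Flag datasets that have gone stale beyond the agreed refresh interval."""
    return datetime.now(timezone.utc) - last_updated <= timedelta(hours=max_age_hours)

def check_completeness(rows: list[dict], required_fields: list[str]) -> float:
    """Return the fraction of rows that carry every required field."""
    if not rows:
        return 0.0
    complete = sum(all(r.get(f) not in (None, "") for f in required_fields) for r in rows)
    return complete / len(rows)

# Example policy: contact records must be under 24 hours old and >= 95% complete.
rows = [{"email": "a@x.com", "country": "US"}, {"email": "", "country": "DE"}]
fresh = check_freshness(datetime.now(timezone.utc) - timedelta(hours=3), max_age_hours=24)
completeness = check_completeness(rows, required_fields=["email", "country"])
print(fresh, completeness >= 0.95)
```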

3) The final step is to deliver data with both a business-user-friendly experience and governance guardrails. This includes the ability to deliver data to its destination, when allowed under the governance policies, without complex ETL/ELT job creation that depends on skilled data engineers. Gates ensure data is masked when needed and refreshed at the required intervals, and that new data is easy to add as more diverse technologies come on board.
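A minimal sketch of such a delivery gate is shown below, again with hypothetical field names and policy sets rather than any product’s actual API: data is released only to destinations the policy allows, and protected fields are tokenized on the way out, so no bespoke ETL/ELT job is needed for each request.

```python
# Illustrative delivery guardrail (hypothetical policy sets and field names).
import hashlib

PROTECTED_FIELDS = {"email", "phone"}          # assumed data classification
ALLOWED_DESTINATIONS = {"analytics_sandbox"}   # assumed governance policy

def mask(value: str) -> str:
    """Replace a protected value with a stable, non-reversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def deliver(rows: list[dict], destination: str) -> list[dict]:
    """Release rows only to allowed destinations, masking protected fields."""
    if destination not in ALLOWED_DESTINATIONS:
        raise PermissionError(f"Policy blocks delivery to {destination!r}")
    return [
        {k: (mask(str(v)) if k in PROTECTED_FIELDS else v) for k, v in row.items()}
        for row in rows
    ]

# Example: personal details are tokenized before reaching the sandbox.
print(deliver([{"email": "a@x.com", "spend": 120}], "analytics_sandbox"))
```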

See also: How to Use APIs to Help Navigate Corporate Mergers

A last word on M&A and data fitness

M&A is often viewed as a time for companies to change, but that energy can—and should—be extended to their data environments as well. It’s an opportunity to commit to a set of data-quality, governance, and management practices that will lay the foundations for the modern, successful AI-driven business envisaged by the deal architects.


About Kunju Kashalikar

Kunju Kashalikar is the Vice President of Product at Pentaho.
