
Informatica World 2025’s “Get Ready, Get Set, Go AI” theme reflects how enterprises are accelerating AI initiatives even as their data management practices lag in quality, integration, tooling, metadata, and catalogs. Informatica seeks to stay a step ahead of them.
The main message of last year’s Informatica World was that although attendees and the enterprises they represent are ready for AI, their data is not. The agenda was packed with advice, customer examples, and product features on how to get data ready: mitigate risks, avoid poor results, and apply good data hygiene practices from other areas of data management and analytics, such as metadata management, closely managed data pipelines, and attention to security and governance. Advanced sessions discussed emerging use cases for generative AI and the integration of real-time data at scale for Retrieval-Augmented Generation (RAG).
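For readers newer to the pattern, RAG simply means retrieving relevant records at query time and placing them in the model’s prompt. The sketch below is a deliberately minimal, hypothetical illustration (the bag-of-words “embedding” and all function names are this author’s inventions, not any Informatica API); production systems use learned embeddings and vector databases.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Naive bag-of-words "embedding"; real RAG systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend the retrieved context so the LLM answers from fresh data.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoice records are refreshed every five minutes from the ERP feed.",
    "The cafeteria menu rotates weekly.",
]
print(build_prompt("How fresh are the invoice records?", docs))
```

The real-time angle discussed at the conference is exactly the `docs` collection above: if the retrieval store is stale, the generated answer is too.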
In only a year, Informatica and its customers have leapfrogged forward, jumping over steps that, while still necessary for quality AI results, can now be abridged or addressed less exhaustively before projects move forward. At this year’s conference, Informatica CEO Amit Walia emphasized that by now companies are exploring every type of AI, simultaneously working on machine learning, generative AI, and agentic AI. Note the purposely vague “working on,” which reflects that these AI initiatives are at varying stages, from experimentation to piloting and deployment. While an organization focuses on training an LLM on its own data and on internal adoption of ChatGPT-like capabilities, it probably has engineers designing AI agents to support or automate workflows at the same time.
Peer Advice on AI Readiness
In 2024, Informatica’s experts led the conversation around getting data ready for AI. In 2025, there were customers who could describe the process and advise their peers. During the first keynote, Informatica’s Amy McNee, SVP of Global Solutions Architecture, established the context for a panel discussion on data readiness for AI with a revealing statistic: 92% of those surveyed by Informatica are accelerating AI even as they realize that they’re not ready. Kathy Chou, SVP of SaaS Engineering at Nutanix; Edmond Tarée, VP at Rabobank; and Sachin Sontakke, Head of Data & Analytics, R&D, PCM, TechOps at Gilead, responded with the steps they took to prepare their data for their leading AI initiatives.
Their data-readiness paths had a common starting point: a defined project goal, which kept them from taking on the impossible task of making all their enterprise data AI-ready. Kathy Chou described how Nutanix, for example, realized that its data was very fragmented, so the team elected to focus on its customer data in Salesforce. After extensive data cleaning and implementing practices to keep the data clean, they had enough confidence in their customer master database of record to use it for 2026 planning. The next phase of the initiative is to use agentic AI in Salesforce, with the goal of reducing a three-month process to about two weeks.
Edmond Tarée advised that companies take a “but first” approach to selecting the priority AI project: Let business priorities decide. He reminded us that “Data is not a technical thing; it represents your business.” Technical choices are secondary. The overarching objective is to do what is necessary to have the business user trust the data. Data management processes are the foundation of that trust, whether the data is used for AI or not.
Gilead’s Sachin Sontakke elaborated on the pitfalls of leading with technology. “For business, tech doesn’t matter; what you do with tech matters more.” For example, GenAI can serve a range of use cases and can take different technical approaches. His advice: “Focus on the solution and use the right technology.” What are the implications for organizations caught between choosing a project that advances the business enough to realize ROI and trying to out-innovate the competition? Much has been reported on companies struggling to reach ROI on AI or churning through failed pilots. Developing AI programs that are grounded in the right business indicators and that don’t allow out-of-scope distractions is good guidance.
The conference offered many opportunities to hear from customers, and not just from early adopters who could speak to Informatica’s newest capabilities. Customer strategy and implementation talks ranged from ETL and data integration use cases to metadata management and catalog implementations, presenting best practices and providing technical detail to support peers at the same stage. By providing content on both established and recent products, Informatica showed an understanding of how enterprises adopt technology, including the fact that some are early adopters in one area yet are still getting the basics right in another. The interesting thing about artificial intelligence is that it requires every layer of the technology stack to have at least the basics in place before that stack can withstand the demands AI puts on it.
See also: Why Training LLMs on Company-Owned Data Is the Key to Scaling AI
Innovating at Breakneck Speed
The huge role time plays in understanding the artificial intelligence trajectory was evident at Informatica World 2025, as at many conferences this year. Last year’s shiny new object was barely mentioned this year: curiosity about RAG has given way to an appetite for agentic AI. Already, large software providers are selling product-specific agents, and development and data platforms (including Informatica) are offering drag-and-drop capabilities for creating and provisioning AI agents. Informatica, which as a universal data integrator has always played the role of Switzerland, looked around and saw that while enterprise makers were building a handful of agents for their workflows, many disconnected efforts were happening in parallel within a single organization or for a single application. Add running off-the-shelf agents, often on the same data, and you can see the potential for unmanageable complexity and business risk: think of conflicting data-access policies and undocumented (and therefore unauditable) data movement or processing.
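The conflicting-policy risk is easy to make concrete. In this hypothetical sketch (the agent names, table, and `conflicting_policies` helper are invented for illustration, not drawn from any vendor product), two independently built agents read the same customer record under different access rules, and a simple governance check surfaces the gap:

```python
# A single customer record that multiple agents operate on.
customer_record = {"name": "Ada", "email": "ada@x.com", "ssn": "123-45-6789"}

# Each agent's policy: the set of columns it is allowed to read.
policies = {
    "marketing_agent": {"name", "email"},
    "vendor_agent": {"name", "email", "ssn"},  # off-the-shelf, overly broad
}

def conflicting_policies(policies: dict[str, set[str]]) -> set[str]:
    # Columns that some agents may read and others may not -- a governance gap
    # that no single agent's builder would notice on their own.
    all_cols = set().union(*policies.values())
    common = set.intersection(*policies.values())
    return all_cols - common

print(conflicting_policies(policies))  # {'ssn'}
```

Multiply this by dozens of agents from multiple vendors and the case for a shared governance and metadata layer becomes obvious.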
Between May 2024 and now, Informatica has developed an AI Engineering Service for creating AI agents and managing a complex, heterogeneous AI agent landscape. The agent environment includes the capabilities of Informatica’s Intelligent Data Management Cloud (IDMC) product, such as governance and metadata management. The agent development process integrates CLAIRE Co-Pilot (CLAIRE is Informatica’s AI engine, launched in 2017; CLAIRE Co-Pilot was previewed in 2024) so that developers and data engineers can use natural language to build purpose-built agents. Many of IDMC’s internal processes have been exposed as pre-built agents that developers can use as-is or as a starting point for customization (again using CLAIRE Co-Pilot). Among the CLAIRE AI Agents slated for preview in the fall are a Data Quality Agent, a Data Discovery Agent, a Data Exploration Agent, and a Data Lineage Agent. While some hope a single agent can execute an entire workflow, the reality is that the best results come from combining agents into a workflow, likely because each component of a workflow has different requirements.
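The combine-agents-into-a-workflow idea can be sketched in a few lines. This is a hypothetical orchestration pattern, not Informatica’s implementation; the `Agent` class, the three step functions, and the state-passing convention are all assumptions made for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    run: Callable[[dict], dict]  # reads shared state, returns its own updates

def profile_data(state: dict) -> dict:
    # Specialized step 1: measure quality before touching the data.
    nulls = sum(1 for r in state["rows"] if r.get("email") is None)
    return {"null_emails": nulls}

def clean_data(state: dict) -> dict:
    # Specialized step 2: drop rows that fail the quality rule.
    return {"rows": [r for r in state["rows"] if r.get("email") is not None]}

def summarize(state: dict) -> dict:
    # Specialized step 3: report on what the earlier agents produced.
    return {"report": f"clean rows: {len(state['rows'])}, dropped: {state['null_emails']}"}

def run_workflow(agents: list[Agent], state: dict) -> dict:
    # Each agent sees the accumulated state and merges in its contribution.
    for agent in agents:
        state = {**state, **agent.run(state)}
    return state

workflow = [Agent("profiler", profile_data),
            Agent("cleaner", clean_data),
            Agent("reporter", summarize)]
result = run_workflow(workflow, {"rows": [{"email": "a@x.com"}, {"email": None}]})
print(result["report"])  # clean rows: 1, dropped: 1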
Salesforce is one of the leaders in offering AI agents and in getting its customers to purchase them, so it is a likely source of heterogeneity in an agent landscape. AI agents that specialize in one application’s domain are very powerful, but that is not how businesses use their data, whether to operate or to generate insights. I immediately assumed that Informatica’s ability to bridge agentic AI domains and let users integrate Salesforce agents with their own was a main consideration in Salesforce’s acquisition of Informatica. The press releases, however, highlight the importance of Informatica’s metadata management and catalog capabilities. While those are indeed critical to giving users more power to maintain and access quality data, they solidify a data management foundation rather than point to the next wave of AI adoption. Still, the focus on metadata does not preclude offering a more comprehensive agentic AI solution.
See also: MCP: Enabling the Next Phase of Enterprise AI
Pervasive AI
Given the rate of AI innovation, the utopian idea of pervasive artificial intelligence might not be as far off as we think (or fear). There are many examples of technology builders integrating AI capabilities wherever they can or wherever it makes sense. Having at least one AI component is now table stakes on any new product’s datasheet, and many older products are being relaunched with new AI enablement.
Informatica’s CLAIRE is a good example of an early AI engine that started as an internal component, benefiting users by speeding certain operations or improving outcomes. Now, CLAIRE and CLAIRE GPT are layered into every aspect of IDMC and are not just enablers but are exposed as tools for users, such as through the AI Engineering Service.
If a software provider’s AI experience foreshadows what enterprises will do with artificial intelligence, then we can anticipate that they will find ways to fold artificial intelligence into every aspect of their business. How long will that take? Next summer’s technology conferences will be a good indicator.