IBM this week confronted the hurdles inhibiting adoption of artificial intelligence at a Data and AI Forum event. Industry analysts such as Gartner estimate that despite all the promise of AI, only 14% of organizations have deployed AI in a production environment, up from just 4% a year ago. The AI challenges organizations are wrestling with span everything from the integrity of the data employed to drive AI models to a lack of skills.
Rob Thomas, general manager for data and AI at IBM, told conference attendees that data, skills and a lack of trust in AI models are the three biggest inhibitors of AI adoption today.
However, Thomas concedes there are other factors in play, not the least of which is that business leaders often don’t know what data is needed to drive an AI model. Even when they do know what data is required, all too often it’s inaccessible because it’s locked up in one silo or another, he adds.
Finally, Thomas says that organizations that have not embraced open source software to build custom applications are not likely to succeed in an AI space that is dominated by open source tooling.
Of course, AI projects also require significant budget allocations, and many business leaders are still downright fearful of potential AI outcomes. Those fears involve everything from concerns over how customer experiences might be adversely affected should an AI model sub-optimally automate a process at scale, to the inevitable internal politics that arise when people and processes are made redundant by AI.
The issue business and IT leaders need to come to terms with is that while it’s unlikely AI will replace people any time soon, people who know how to wield AI will replace people who don’t, says Thomas.
As a result of all these issues, AI has yet to become a high enough priority for many organizations, says Thomas. In fact, he says, concerns about a lack of budget dollars for AI projects simply reflect business leaders deciding to prioritize other initiatives.
Despite all those issues, IBM continues to plow ahead with its Watson Anywhere initiative. Updates to that effort announced this week include a new Drift Detection capability added to the IBM OpenScale AI governance platform that detects whether an AI model drifts from its original parameters.
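IBM doesn’t detail how OpenScale’s Drift Detection works under the hood, but the general idea behind drift detection can be sketched with a population-stability check that compares the distribution of a feature (or model score) in training data against live production data. The following is a minimal, hypothetical illustration of that technique, not IBM’s implementation:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Scores above roughly 0.2 are commonly treated as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty buckets so the log term stays finite.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]
    return sum((a - e) * math.log(a / e)
               for e, a in zip(frac(expected), frac(actual)))

# Identical distributions score near zero; shifted data scores much higher.
train_scores = [i / 100 for i in range(100)]
shifted = [0.5 + i / 200 for i in range(100)]
print(psi(train_scores, train_scores))
print(psi(train_scores, shifted))
```

A monitoring job would run a check like this on a schedule and raise an alert when the index crosses a chosen threshold, prompting retraining or review of the model.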
IBM is also moving to address data management issues by updating Cloud Pak for Data, a suite of data virtualization and management tools based on a microservices architecture that makes them simpler to deploy on-premises or in a public cloud. Capabilities that now come standard with Cloud Pak for Data include a Db2 Event Store capable of storing and analyzing more than 250 billion events per day in real time, and Watson Machine Learning, equipped with AutoAI, a tool that leverages AI to automate the process of building an AI model.
Thomas claims IBM is the only vendor to have applied data virtualization capabilities at scale to drive AI applications. That approach eliminates the need to copy data across multiple repositories and, Thomas says, is 500 times faster than traditional federation.
“Data virtualization is the secret sauce,” says Thomas.
In fact, Thomas notes that without a solid information architecture in place, AI simply isn’t achievable.
For AI projects to ultimately succeed, Thomas says CEOs will need to force the AI issue by taking a stand within their organizations. Less clear right now, however, is to what degree CEOs understand exactly what it is they are being asked to drive from the top down once it becomes clear a bottom-up approach to AI adoption isn’t working.