Obstacles to enterprise AI adoption include lack of internal skills and poor communications between data scientists, business staff, and AI experts.
Enterprise AI adoption obstacles can be daunting. According to an IBM survey, while 95% of companies surveyed “believe AI is the key to competitive advantage,” only 5% have “extensively implemented AI.” This large disconnect between the desire to stay on the cutting edge and the actual adoption of AI technologies calls for a different approach to integrating AI into the enterprise.
The obstacles standing in the way of more widespread adoption of AI, and how to overcome them, were the topic of a talk at the recent virtual ODSC East conference by Mark Weber, Applied Research Scientist at the MIT-IBM Watson AI Lab.
The first, perhaps most obvious, obstacle is the need for data science and AI talent. And, as Weber points out, it’s not just about finding individuals with the right skills but building a team with the right “diversity of skills” to make a company’s AI efforts a success.
Once a company has recruited one or more highly skilled data science teams, what happens next? One obstacle, discussed in another talk at ODSC East by Daniel Gray, VP Solutions Engineering at AtScale, is the friction, even stigma, between existing, traditional business intelligence (BI) teams and data science / AI teams. Communication between them may be limited, whether because of cultural differences or, in large companies with offices in different locations and time zones, simple physical separation.
A further obstacle to AI adoption is a kind of cognitive dissonance: company executives have a strong desire to build up their AI capabilities, yet at the same time they see more and more examples in the media of AI failing or producing unsatisfactory results.
Along these lines, Weber, from IBM, distinguishes between “Narrow AI” and “Broad AI.” He offers a benign but instructive example: an AI algorithm trained to analyze images and identify objects, such as chairs. The algorithm works well when chairs are in their regular, upright form, but what happens when a chair has been overturned and is lying on the floor? The same algorithm no longer identifies such objects as chairs with high accuracy.
This is an example of “narrow AI.” In order for AI to be more successful and more widely trusted by organizations, AI algorithms need to become more “broad.” By this, Weber means AI that is more “robust, transferable, explainable, scalable.”
Overcoming obstacles to AI adoption
Weber points out that companies that invest in organizational learning are much better “able to achieve significant financial benefit with AI.” Such organizational learning can include workshops and “technical office hours.”
Gray, from AtScale, describes how traditional BI teams and data science teams can work together better, using tools that let the data science team see, in real time, any new metrics or KPIs the BI team comes up with and incorporate them into their models. Similarly, any insight from the data science team can be sent immediately over to the BI team, where any new metric or model feature can be vetted with the business.
Another, more unusual, approach to removing obstacles is a company-wide Kaggle-style data science competition. In another presentation at ODSC East, Scott Garlin, Anton Aboukhalil, and Noah Jensen, data scientists at Liberty Mutual, described the annual “data science challenge” that they have been running for the past few years. Before starting this competition, data science managers at Liberty Mutual tried more traditional ways of upskilling their employees and teaching data science best practices, but found that these approaches were falling short.
This led them to create the data science challenge, which is open to everyone in the company, not just data scientists. Opening the competition up gives non-data-scientists exposure to what data scientists are doing and can help build trust across different teams. Another important element of the competition is that it is not enough to have the best-performing model: the winning team must also adhere to data science best practices defined by the competition administrators, which helps overcome the earlier problems Liberty Mutual had in instilling these practices in its employees.
One additional ODSC East presentation related to overcoming obstacles to AI adoption was given by Richard Sheng, Global Director, Data Science & AI Platform, at Z-Tech. Sheng advocates managing data science efforts as products. Applying product management principles to data science means first clearly describing the problem and making sure everyone is starting out on the same page. The team then builds a solution incrementally, starting simple and adding features and complexity as needed, all the while keeping the relevant stakeholders in the loop.
Overall lessons for AI adoption
A common theme running through the various solutions discussed above is communication. There needs to be continuous communication between the data science / AI teams and the rest of the organization. An open process where other teams can see in real-time how the data science team is progressing with their models can help build trust in the AI models and allow for important feedback and iteration on those models.
If non-data-scientists can see how the process evolves and are given the opportunity to understand what can and cannot be done with different machine learning approaches, it may help break down some of the cultural resistance to these new algorithms and make everyone in the organization more comfortable with them.