Sponsored by IBM
Center for Continuous Intelligence

Overcoming the Barriers to Successfully Scaling AI


AI is becoming a technological norm for advancing and modernizing existing digital infrastructure; now the real question is, “What’s standing in my way?”

Have you ever wondered what it would be like to reach into the haystack and actually find that needle on the first try? Or, while sifting through the sand on a beach, to find buried treasure? It’s not that crazy to dream of finding these hidden gems (who wouldn’t love encountering buried pirate gold?), but it is crazy to expect to find them every time without the proper tools or know-how.

See also: Operationalize AI in Real Time with Streaming Analytics

These activities were often written off because they traditionally represented repetitive, high-effort opportunities with a low probability of success. So some creative thinkers went and created metal detectors and absorption spectrometry, giving the folks doing the searching a cohesive strategy for deciding which gold or needle to go find first.

The solutions to the haystack and gold-mining questions offer a decent comparison to how companies are hoping to solve their big data problems with AI. Some creative thinkers created algorithms, based on probabilistic determination, that aggregate multiple variables and sources of information to find insights in data. However, some common barriers have prevented wider adoption, which is why, despite a majority of CIOs/CTOs hinting at some broader AI strategy, only about 20-30% of major corporations have included AI across multiple business functions.

Why AI, why now?

The promise of AI is immense, and the opportunity is now. With a projected revenue market of $250B by 2025 and $2T contributed to global GDP by 2030, AI adoption is taking place at an accelerated pace. Companies are beginning to see the benefits of early-stage testing and solution deployments, which has helped inspire new avenues for expansion and stem the tide of cultural impediments to using this technology (more on that later).

To date, successful deployment of AI has come from identifying challenges to the core business: tasks that waste time, create confusion or open questions, or can be made easier for employees and end-users through automation. Currently, companies are using AI to tackle the low-hanging fruit, which has often resulted in cost savings and enhanced customer satisfaction. Attention to usability for NLP, Machine Learning, Speech Recognition, and Computer Vision model builders, as well as more accessible tooling, is making it easier for anyone to start deploying AI into existing business workflows. Additionally, massive investments in open source technologies, and an emphasis on open-community collaboration, are accelerating AI evolution to advance model output, accuracy, and extensibility. AI is becoming a technological norm for advancing and modernizing existing digital infrastructure; now the real question is, “What’s standing in my way?”

How do I successfully scale AI for my business?

In the course of AI adoption and successful implementation – finding the needle in the haystack – there are a number of challenges that prevent companies from scaling their efforts and investment. Several activities need to take place before AI infusion can come to fruition, and more during model creation.

Before you start, you’ll need to address the following areas:

Culture

Oftentimes, this is the most difficult challenge for a company to overcome, because AI is still perceived by many as a malevolent force threatening humankind. Movies like The Terminator and I, Robot exaggerate the capabilities of AI, and because the most common deployments of AI to date have been designed to cut costs, there is still a general fear and mistrust of the technology. Employees want assurance that a technology that can expedite a lot of repetitive task management/processing will actually benefit them in the long run; assurance that the skills they have learned over the years can be adapted to their evolving roles, and ultimately that AI will make their jobs better. This, in part, comes from experimentation and small deployments that help internal processes, but more importantly, it comes from trusted results – e.g., “is this model making a non-biased decision?” Culture and ethics are an increasingly important barrier that companies will need to overcome in order to successfully adopt and expand AI solutions; “without trust, there is no reason to continue.”

Access to Data

This is crucial for any machine learning model you want to create. Having access to the right data to address the business problem matters because those data will be used to define features, tune parameters, and evaluate model performance. This task includes sourcing data from multiple locations, ensuring these data come from reliable sources, and making sure there are avenues for ongoing data collection to support further model refinement (see Data Management). Machine learning does not have the answers to your questions right away; models need to be trained on industry-domain knowledge and split into training vs. test subsets in order to prevent drift or overfitting. This is a crucial first step to getting anywhere with AI.
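To make the sourcing steps concrete, here is a minimal sketch in Python with pandas. The file names and columns are hypothetical placeholders, and a real pipeline would add validation at each stage; the point is simply combining multiple sources, applying basic reliability checks, and saving a snapshot so ongoing collection can refine the model later:

    import pandas as pd

    # Hypothetical sources: a CRM export and a data-lake extract.
    crm = pd.read_csv("crm_export.csv")
    lake = pd.read_parquet("lake_extract.parquet")

    # Combine, then apply basic reliability checks before any modeling.
    combined = pd.concat([crm, lake], ignore_index=True)
    combined = combined.drop_duplicates(subset="customer_id")      # one row per entity
    combined = combined.dropna(subset=["customer_id", "outcome"])  # required fields present

    # Persist a versioned snapshot so further collection can refine the model later.
    combined.to_parquet("training_snapshot_v1.parquet", index=False)

The training/test split itself is sketched under the AI Methodology below.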

Data Management

Once it is known where data are coming from, there need to be guiding principles to govern the data in a cohesive manner so that nothing is lost, important information is aggregated in the right places, and any gaps become visible. A knowledge catalog is a great tool that many companies employ to integrate with existing backend data systems and CRMs (a toy sketch of the idea follows the list below). The main benefits of having a knowledge catalog include:

  • A single repository that allows users to quickly find, clean, and use the data they need. Additional preparation steps may be needed (e.g., reducing variables, getting documents into the right format, etc.) to make sure everything is ready for use with the AI system.
  • Management of data lakes, or massive compilations of structured and unstructured data.
  • For more detail, here is a great blog on the 5 Reasons to Have a Data Catalog.
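At its core, a knowledge catalog is a shared registry of dataset metadata. The following is a deliberately toy Python sketch of that idea – the fields and helper names are hypothetical, and commercial catalogs (IBM Watson Knowledge Catalog, for example) do vastly more – but it shows how a single registry makes gaps easy to spot:

    from dataclasses import dataclass, field

    @dataclass
    class DatasetEntry:
        name: str
        source: str                  # where the data live (warehouse, CRM, lake)
        owner: str = "unknown"       # who is accountable for quality
        tags: list = field(default_factory=list)
        last_refreshed: str = "unknown"

    catalog = {}

    def register(entry):
        """Add a dataset to the single shared repository."""
        catalog[entry.name] = entry

    def find_gaps():
        """Surface entries missing basic governance metadata."""
        return [e.name for e in catalog.values()
                if e.owner == "unknown" or e.last_refreshed == "unknown"]

    register(DatasetEntry(name="crm_contacts", source="CRM export",
                          owner="sales-ops", last_refreshed="2020-01-15"))
    register(DatasetEntry(name="web_clickstream", source="data lake"))
    print(find_gaps())  # -> ['web_clickstream']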

Appropriate Tools and Skills

This tends to be one of the biggest challenges for companies looking to adopt AI today; as AI becomes commoditized, companies are not sure where to start in sourcing the most appropriate tool to solve their business need. This often leads to a lot of stalled, incomplete POCs, due to dissatisfaction with the results and an unclear strategy for human capital management.

  • Tools – technological experimentation across a variety of sources, testing usability, accuracy of results, and applicability to the use case, is crucial to developing a meaningful, scalable solution. Companies like IBM, Microsoft, and Google are working to improve the overall user experience and make it easy for anyone in the organization (regardless of development skills) to build with AI. Without testing the right tools and deploying solutions in POCs, there is no way to set a precedent for future development. Start small, fail fast.
  • Skills – there are a lot of very skilled engineers and developers who can quickly adapt their knowledge of core computing languages (e.g., Python and Java) to solve ML problems; they just need a helping hand to do so. There are a couple of ways to approach this:
    • Upskill existing employees; cheaper, but may require a time investment to reach skill parity with the market.
    • Hire externally; more expensive, but brings the right skills from the outset.

Deploying AI Best Practices

Understanding AI is one thing; understanding AI implementation best practices is another. With the activities above complete, companies then need to frame the machine learning model’s goals and apply a steadfast framework of recurring steps to ensure they’re ready to start. Contrary to popular belief, AI models do not have 100% of the answers right away; the models need to be trained and tuned to ensure accurate, trusted results. Similar to CRISP-DM, an AI methodology employs an iterative process of testing and evaluation. The methodology listed here incorporates elements from both before and after AI development has started.

AI Methodology:

  1. Understand the business problem; what are you trying to achieve by implementing AI into existing workflows/applications? Assess the implications of the business problem and areas for expansion.
  2. Identify the appropriate tools and environment (Cloud, On-Prem, Hybrid Cloud) that would be best suited to host and solve the business problem.
  3. Gather data, understand the data, clean the data; CRISP-DM frameworks.
  4. Split the data into training and test sets; 60% training, 20% initial test, 20% iterative test.
  5. Begin ingesting data and run k-fold experiments to assess the performance of models against ground truth (steps 4 and 5 are sketched in code after this list).
  6. Iterate and expand the data test set.
  7. Tune model parameters; watch for overfitting, bias, drift, etc.
  8. Identify core gaps, scale to production volumes, consistently iterate.
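As a hedged illustration of steps 4 and 5, here is a minimal Python sketch using scikit-learn with stand-in synthetic data (a real project would substitute the governed data discussed above):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_score

    # Stand-in synthetic data; substitute the curated snapshot from your catalog.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)

    # Step 4: 60% training, 20% initial test, 20% iterative test.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=0.6, random_state=42)
    X_test_initial, X_test_iter, y_test_initial, y_test_iter = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=42)

    # Step 5: k-fold experiments on the training data against ground truth.
    model = LogisticRegression()
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

    # Steps 6-7: tune, then check the held-out sets for overfitting or drift.
    model.fit(X_train, y_train)
    print(f"initial test accuracy: {model.score(X_test_initial, y_test_initial):.3f}")

The point is not the specific model but the discipline: the held-out test sets are only touched after the k-fold results have stabilized.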

When a company has successfully overcome the aforementioned obstacles, it should be ready to start deploying AI models. However, the POCs and experimentation mentioned in the last section are only a small piece of the picture; what companies really want is to scale their AI projects to make a serious impact on their P&L, strategy, and technological advancement. In order to do so, there are a few things to consider:

Start Small and Frame the Business Problem Appropriately

As mentioned in the AI methodology above, this is the most fundamental step in not only starting an AI project but also expanding it. Taking on too many components, considering too many user stories, or misinterpreting the features of the solution can lead to dispersion of investment and misallocation of resources. Ask a lot of questions, consolidate common themes, and make user stories – these will influence the design and development criteria for the AI model. Activities like design thinking will help provide context and set the standard for the project.

Bias

One of the hottest topics in AI these days is bias: bias can adversely affect both the company that designed an AI model and the end-user interfacing with it. As a result, in order to scale any AI project, it’s crucial to ensure bias is mitigated and traced to prevent undesired outcomes. The trick is understanding where bias can come from – it is not only a runtime challenge; bias can arise as a result of:

  • A poor job of framing the problem; clearly defining the core concepts of both the problem and the solution can remove unwanted or malicious behavior from the algorithm. Framing also shapes how feature attributes in the underlying machine learning model generate answers from ingested data.
  • Unrepresentative data; without an exhaustive training data set, or with data coming from a single source, machine learning models will be trained ineffectively and will very quickly start demonstrating the effects of bias. Making sure there is proper access to data and data governance will help to eliminate this challenge (a simple check is sketched below).
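One lightweight way to trace bias is to compare a model’s decisions across groups. The following is a minimal, hypothetical Python sketch – the DataFrame and column names are assumptions, and dedicated toolkits such as IBM’s AI Fairness 360 go much further:

    import pandas as pd

    # Hypothetical scored output: one row per applicant, with the model's decision.
    scored = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   1,   0],
    })

    # Positive-decision rate per group; large gaps are a bias warning sign.
    rates = scored.groupby("group")["approved"].mean()
    print(rates)  # A: 0.75, B: 0.25

    # Disparate impact ratio: a common rule of thumb flags values below 0.8.
    print(f"disparate impact ratio: {rates.min() / rates.max():.2f}")  # 0.33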

Confidence in the Investment

Without a clear strategy or immediate tangible results, it is easy for companies to lose faith. Partially due to the undue expectations set around AI performance, and partially due to what can be a hefty initial investment, companies need to exhibit patience with AI model creation and deployment. True insights and ROI are not delivered overnight, and it is important to explore multiple areas where AI could help before investing. Once the choice is made, it’s important to trust the process and to expand on the little wins – especially the ones that weren’t initially obvious.

This article is meant to serve as a starting guide to best practices and to thinking about some of the prohibitive issues that prevent AI adoption. There are a lot of other great resources available (e.g., CRISP-DM, coursework, etc.) that will help get the project you have in waiting off the ground. AI will help businesses find that needle in the haystack; to do so, implementing the right foundational development principles is key to ensuring an investment can scale from POC to production.

Zachariah Eslami

About Zachariah Eslami

Zachariah Eslami is currently Director of Product Management - AI/ML at AtScale. He is passionate about the application of data science and AI to help solve key challenges with data, and he works with customers to help them develop and realize value from AI models. Prior to his time at AtScale, Zach worked as a Speech ASR/NLP Product Manager at Rev.ai, managed a team of solution engineers at IBM, and helped to productize the IBM Watson AI core application platform. Zach currently lives in Austin, Texas; in his free time, he enjoys playing tennis, photography, and playing piano.
