Using AI to Determine What Predictive Models Will Actually Solve Your Problems


Thoroughly exploring data with AI before a predictive model is built helps ensure that all the important factors in complex datasets are tracked.

Predictive modeling generally involves gathering the available data and evaluating which of the human behaviors, atmospheric conditions, or consumer trends in that record are likely to recur. This works well for tasks like automated online shopping recommendations, weather forecasting, and truck route and schedule optimization.

But the present isn’t always an echo of the past. At times of disruption, it’s much harder to see what may be around the corner. This is when predictive models are most likely to get things wrong.

Sometimes, unanticipated outcomes are harmless, like the climate-change-related uptick in the number of MLB home runs. Other times, they’re devastating, like the failure to assess the extreme risks of subprime mortgages, which helped trigger the 2008 financial crisis and the Great Recession.

AI is now being used to improve predictive modeling by detecting and making sense of every characteristic in a scenario before a model gets designed. This “model precognition” lets data analysts understand all the factors at play in huge datasets. By detecting factors that aren’t intuitive or haven’t been significant in the past, analysts uncover insights data teams can use to build predictive models that track what’s actually important—and to iterate swiftly so the models stay accurate.
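The core idea, scanning every dimension for relevance rather than hand-picking a few, can be sketched in a few lines. The toy Python example below is an illustration of the concept, not Virtualitics’ actual algorithm: the synthetic dataset and the correlation-ratio scoring are assumptions for demonstration. It scans 200 columns for both linear and non-linear associations with an outcome and surfaces the two hidden drivers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 columns, only two of which actually drive the target.
n, p = 5000, 200
X = rng.normal(size=(n, p))
# Hidden drivers: column 7 (linear) and column 42 (non-monotonic),
# the kind of factor a hand-picked analysis might miss.
y = 2.0 * X[:, 7] + np.abs(X[:, 42]) + rng.normal(scale=0.5, size=n)

def correlation_ratio(x, y, bins=20):
    """Share of y's variance explained across quantile bins of x.
    Catches non-linear links that plain Pearson correlation misses."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    between = sum(
        (y[idx == b].size / y.size) * (y[idx == b].mean() - y.mean()) ** 2
        for b in range(bins) if np.any(idx == b)
    )
    return between / y.var()

scores = np.array([correlation_ratio(X[:, j], y) for j in range(p)])
top = np.argsort(scores)[::-1][:5]
print(top)  # columns 7 and 42 rank at the top
```

A plain Pearson-correlation scan would find the linear driver but score the non-monotonic one near zero—exactly the kind of non-intuitive factor an exhaustive exploration pass is meant to catch.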

Here’s how this AI technique, Intelligent Exploration, is helping data teams better see the future—and build predictive models that avoid performance degradation over time.

When history doesn’t repeat, predictive models can short-circuit

As part of the US Department of Defense community in 2020, I was on a team working to mitigate some of the profound disruptions caused by COVID. We were tasked with examining how our supply chain predictive models were failing us and how to improve them going forward. There turned out to be two main shortcomings.

One was that supply chain models were frequently built around the concept of Just-In-Time (JIT) inventory management. JIT has been an effective tool for reducing waste and increasing productivity, but a flaw came to light during COVID: our JIT models insufficiently accounted for fragility. Lean pipelines were great for efficiency, but the resulting limited inventories made supply chains prone to breaking in unpredictable ways.

Toilet paper supply is a simple illustration of this. It’s a bulky and relatively inexpensive commodity, so no retailer was accustomed to keeping a lot of it in stock. Plus, its consumption patterns are typically so stable that there is little need to overstock it. Then COVID caused a huge hiccup. Demand wasn’t stable at all, shelves emptied from panic buying, and stores had to start rationing rolls because production couldn’t keep up.
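That fragility can be made concrete with a toy inventory simulation. All the numbers below are illustrative assumptions, not real supply-chain data: demand averages 100 units a day, a panic-buying shock doubles it for three weeks, and replenishment is capped at the long-run average because production can’t surge. A lean, JIT-style buffer sails through normal demand but racks up stockout days the moment the shock hits, while a deeper buffer absorbs the same shock.

```python
import random

random.seed(7)

def stockout_days(days_of_stock, horizon=180, shock_start=60, shock_len=21):
    """Toy single-item inventory in daily steps. Returns the number of
    days on which demand could not be fully met."""
    avg = 100.0                      # average daily demand, in units
    target = avg * days_of_stock     # inventory held, in days of demand
    on_hand = target
    stockouts = 0
    for d in range(horizon):
        on_hand = min(target, on_hand + avg)  # replenishment capped at avg
        mult = 2.0 if shock_start <= d < shock_start + shock_len else 1.0
        demand = avg * mult * random.uniform(0.9, 1.1)
        if demand > on_hand:
            stockouts += 1
            on_hand = 0.0
        else:
            on_hand -= demand
    return stockouts

lean = stockout_days(days_of_stock=3)   # JIT-style: ~3 days of stock
deep = stockout_days(days_of_stock=30)  # a month of buffer
print(lean, deep)  # lean buffer fails repeatedly; deep buffer never does
```

The point isn’t that every retailer should hold a month of toilet paper; it’s that a model optimizing only for efficiency never sees the fragility term at all.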

Directionality was another assumption our predictive models represented poorly. N95 masks were one of the primary strategic shortages we encountered during the early days of the pandemic. Our models had been built on China being the largest supplier of surgical masks, which it historically was. But suddenly, this exporter became a significant importer of masks, including from the US and Europe, breaking the predictive models we were relying on to ensure a sufficient mask supply.


AI-guided data exploration: a leap forward for predictive models

Modeling can be time-consuming and expensive, which can make it difficult to abandon a predictive model that is no longer working as it should. AI-enabled data exploration can help mitigate this by ensuring the right things are modeled in the first place. It can shed light on all the characteristics in a supply chain (or maintenance program, manufacturing line, or finance ecosystem) before you generate a predictive model.

How? Intelligent Exploration uses AI to look at all the important dimensions of an organizational problem or system so users can see and assess known knowns, unknown unknowns, and everything in between. Here’s a simple breakdown of how this is evolving the predictive model development process.

How Intelligent Exploration is improving predictive modeling

1. Before AI-driven data exploration: The analyst gets handed a huge dataset and must figure out where to start and how to apply all the data. Analysis is often limited to a handful of factors.
   With Intelligent Exploration: Out-of-the-box AI routines quickly explore complex data sources with all potentially relevant dimensions included.

2. Before: The data team spins through cycles of queries in search of direction, then forms a hypothesis or problem to be solved to make the work manageable.
   With Intelligent Exploration: The analysis considers dozens of dimensions within the data at once. AI surfaces what’s important, so insights are rooted in data, not hypotheses, and the analyst gets immediate direction.

3. Before: Findings from a best-guess analysis form the basis for building a predictive model. Bias may be baked in, and relationships with significant business impact may be overlooked.
   With Intelligent Exploration: Findings reveal drivers, anomalies, relationships, and patterns within the data. The analyst gets recommendations that target the highest-value problems, and the data team builds a predictive model addressing these factors.

4. Before: The predictive model addresses the initial question, but high-value factors go unexamined.
   With Intelligent Exploration: The predictive model targets the key problems at hand and all impactful drivers, including unexpected ones.

More ways AI data exploration is advancing modeling

AI-driven data exploration works at scale. Hundreds of columns of data can be analyzed at once. This makes it far more likely that insights, areas of opportunity, risks, and recommendations will closely reflect reality.

Plus, findings are delivered in simple language and intuitive formats like 3D visualizations that reveal the interactions among all this complex data. This transparency is important because AI isn’t magical or perfect. Analysts, subject matter experts, and data teams can more easily spot anything that seems illogical, investigating it before a predictive model gets built.

Intelligent Exploration also enables a virtuous cycle of improvement in predictive modeling. The solutions with the highest potential payoff are identified first. As things change, new areas of improvement can be discovered and added to the model.

Predictive models that solve problems in a changing world

In a model that looks at a large system like a global supply chain, even small errors in calculation can have a butterfly effect—a cascade of consequences that are extremely hard to foresee. Thoroughly exploring data with AI before a predictive model gets built is a way to be sure all the important factors in complex datasets will be tracked. Models can then point to better decisions, policies, and initiatives.

Further, AI data exploration detects signals you may not be looking for, such as anomalies that indicate a shift is underway. It can provide advance warning that it’s time to update your predictive model. So, predictions stay on course even when events depart from the norm.
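One common, simple way to get that advance warning is a distribution-drift check such as the Population Stability Index (PSI). The sketch below is a generic illustration with synthetic data, not a description of any particular product’s internals: it compares incoming feature values against the baseline the model was trained on and flags when the distribution has shifted enough to warrant revisiting the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a new
    sample of the same feature. Rough rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant shift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    def frac(x):
        # Assign each value to a baseline quantile bin, then count shares.
        idx = np.clip(np.searchsorted(edges[1:-1], x, side="right"), 0, bins - 1)
        return np.bincount(idx, minlength=bins) / len(x)
    e = np.clip(frac(expected), 1e-6, None)
    a = np.clip(frac(actual), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

baseline = rng.normal(0.0, 1.0, 10_000)  # what the model was trained on
steady = rng.normal(0.0, 1.0, 2_000)     # new data, same regime
shifted = rng.normal(1.0, 1.3, 2_000)    # new data after a regime change

print(round(psi(baseline, steady), 3))   # small: model still valid
print(round(psi(baseline, shifted), 3))  # large: time to revisit the model
```

Run per feature on each new batch of data, a check like this turns “events departing from the norm” from a surprise into a monitored signal.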


About Kyle Rice

Kyle Rice is the Federal CTO for Virtualitics, a data analytics company. Virtualitics’ patented technology is based on more than ten years of research at the California Institute of Technology and has been tested, proven, and leveraged by the federal government and large enterprises.
