Michael Perrone, AI Partnership Director for IBM, discusses the development of the Scenario Planning Advisor (SPA). SPA is a risk management tool from IBM Research that leverages rules and machine learning to “reason about the future”.
Adrian Bowles I recently had a chance to sit down with Michael Perrone, from IBM’s TJ Watson Research Center in Yorktown Heights, New York. Here are some highlights from our wide-ranging conversation on AI.
Michael Perrone My role at IBM is AI Partnership Program Director, and my responsibility is to build partnerships between the research group and clients and partners that have needs. In particular, we’re focused on how we can bring AI planning into risk management, and so we’ve developed a tool called the Scenario Planning Advisor that supports risk-management planning and helps people plan for the future.
AB Okay, Scenario Planning Advisor. Sounds great. When I started in AI, we really looked at understanding, reasoning, learning, and planning. That was a big part of it. Lately, I see people looking at the first three, but not so much at planning as a core part of AI. Tell me what you’re doing with planning with the, was it S.P.A.?
MP Yes, the Scenario Planning Advisor. SPA. One way to think about the way AI planning is being used today is that it’s very pervasive, right?
Whenever I get in my car, I want to get directions. I’ll get a map, and I’ll, in a fraction of a second, have directions from point A to point B. That technology is so advanced that it’s essentially free, right?
But we’re taking that technology and extending it to the next level. You’ve got, in the travel problem, the question of how to go from point A to point B. What we’re doing is saying, “Well, what if you don’t know where you’re going?” Right? It’s a very funny problem, but it’s a real problem in many, many businesses, many industries, any large organization that deals with risk and has to understand, what are the challenges down the road. They don’t know.
It’s like that map problem, except you don’t know where you’re going. You just have all these opportunities, all these possibilities that are coming. We take AI planning technology and extend it to what’s called AI plan recognition, where you try to understand what plan is unfolding in real time.
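Plan recognition, as Perrone describes it, can be pictured as ranking a library of candidate plans by how well a stream of observed actions matches each one. The sketch below is purely illustrative, with invented plan and action names; it is not SPA’s actual implementation.

```python
# Toy plan recognition: score each candidate plan by how many observed
# actions appear, in order, as a subsequence of the plan's steps.

def subsequence_matches(observed, plan):
    """Count how many observed actions match the plan, in order."""
    matched, i = 0, 0
    for action in observed:
        while i < len(plan) and plan[i] != action:
            i += 1
        if i < len(plan):
            matched += 1
            i += 1
    return matched

def rank_plans(observed, plans):
    """Rank candidate plans by the fraction of observations they explain."""
    scores = {name: subsequence_matches(observed, steps) / len(observed)
              for name, steps in plans.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical plan library and observation stream.
plans = {
    "tighten_policy": ["inflation_rises", "rates_rise", "spending_falls"],
    "stimulus": ["recession", "rates_fall", "spending_rises"],
}
observed = ["inflation_rises", "rates_rise"]
print(rank_plans(observed, plans))  # "tighten_policy" ranks first
```

As more actions are observed, the ranking sharpens, which is the “plan evolving in real time” idea in miniature.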
AB Can you give me an example of how you would use this?
MP Well, imagine you have drivers in the real world. You have inflation, so as inflation goes up, the Federal Reserve might raise interest rates. They might not, but that’s a choice that they have, and that would be a driver that causes something else to happen in the real world. So, maybe businesses adjust and they lower their spending because of higher interest rates. Then, because of that, maybe they hire fewer people, so something like increased inflation might lead to something far down the road. We would like to be able to model these kinds of things, and this is actually what we’re doing now, so that an organization can plan and see how the events today can roll out and cause impact on their businesses in the future.
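The causal chain in this example, inflation leading to rate hikes, lower spending, and slower hiring, can be modeled as a directed graph of drivers. A minimal sketch, with made-up driver names (SPA’s actual models are richer than this):

```python
# Hypothetical driver graph for the inflation example; an edge means
# "may lead to". Walking the graph surfaces downstream effects.
from collections import deque

DRIVERS = {
    "inflation_rises": ["fed_raises_rates"],
    "fed_raises_rates": ["business_spending_falls"],
    "business_spending_falls": ["hiring_slows"],
    "hiring_slows": [],
}

def downstream_effects(start):
    """Breadth-first walk: everything that may follow from `start`."""
    seen, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        for effect in DRIVERS.get(node, []):
            if effect not in seen:
                seen.append(effect)
                queue.append(effect)
    return seen

print(downstream_effects("inflation_rises"))
# ['fed_raises_rates', 'business_spending_falls', 'hiring_slows']
```

The breadth-first order is useful here because effects come out in order of causal distance from the triggering event.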
Now, we are already using this within IBM. We have a chief risk officer, and his responsibility is to manage the risks that face the corporation as a whole. There are very well-established risks that many organizations deal with, but there are also emerging risks and risks that haven’t fully presented themselves, and because of that, people really don’t know how to manage them. The trick is to make people aware of those risks and help them focus on them before they impact the business. And, really, this is what we’re trying to do with the SPA tool.
AB When you use the term “driver” for something like inflation, do you have a predefined set of drivers that you’re looking at? Or are you adding new ones? How does that work?
MP That’s a very interesting question. This is really where science comes in. So we do create models by hand, and we’re also creating models automatically using natural language processing and a lot of the machine learning technologies that are being developed today. I think today’s technology is very much dependent on human knowledge and subject-matter experts to capture those thoughts and those drivers and those relationships that drive the real world.
We very consciously started this approach without using those advanced technologies like machine learning, because machine learning, as tremendously powerful as it is, and we’ve seen its effects all over the world, has an interesting constraint: you need data to train it. If you don’t have data, you can’t train it. So we very consciously avoided using data initially, because we were hoping to capture black swan events. Black swan events are things that have never happened before, so if they’ve never happened, there’s no data. You can’t train a neural net to learn those kinds of things.
So we took a different approach, where we build models that are more like logical reasoning models, capturing that knowledge, and especially capturing it from multiple sources. You may know about one topic. I may know about another. Other people may know about other things, so if I ask you, “Hey, what’s going to happen tomorrow?” you’ll have a point of view, and I might have a different point of view, and 10 other people might have 11 other points of view, right? By combining all of those things, we can capture reasoning about these futures in ways that no individual could alone. So we can come up with hypotheses that are novel and haven’t been thought of before.
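One way to picture combining partial knowledge from multiple experts is forward chaining over the pooled union of their rules: each expert contributes a few if-then links, and chaining them derives conclusions no single expert stated. A toy sketch, with all rule content invented for illustration:

```python
# Forward chaining over rules pooled from several hypothetical experts.
# Each rule is (set of antecedents, consequent). No single expert's
# rules reach the final conclusion; the pooled set does.

EXPERT_A = [({"trade_dispute"}, "tariffs_rise")]
EXPERT_B = [({"tariffs_rise"}, "supply_costs_rise")]
EXPERT_C = [({"supply_costs_rise"}, "margins_shrink")]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

pooled = EXPERT_A + EXPERT_B + EXPERT_C
print(forward_chain({"trade_dispute"}, pooled))
# the pooled rules derive "margins_shrink"; expert A's rules alone do not
```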
AB You talked about this stream of information. Are we talking, in most or many cases, or in any cases, about real-time data? Or streaming data? What are you analyzing?
MP We use a variety of data sources. We’re not constrained in any particular way, but currently we can take news feeds and things like that. We can take Twitter feeds, social media, and analyze them to understand what people are talking about, what’s relevant, what’s important to a particular domain.
So if I’m talking about risk management for IBM, then I’m going to be looking at a certain set of documents. And if I’m with the federal government, I might be looking at a different set of documents.
It’s going to vary from domain to domain. But we can, using IBM Streams, ingest amazing amounts of data, as needed, and process and prepare that material for the user’s review.
What we do is we develop these topic models, which allow a user to specify what they care about, but they don’t have to say, “I care about A, B, and C.” They just say, “I care about this list of things. I don’t know what’s important today. You tell me.”
So we scour through these sources of data looking for various clusterings of these topics so that we can understand when a group of documents are talking about something that’s relevant to a particular user, even if they don’t know that these three or four or 10 topics are occurring at the same time. So this helps them focus on emerging things that they would otherwise not be aware of.
AB It sounds like you’re going up and down in terms of the level of abstraction.
MP Yes, true, and in fact, we’re pulling in so many additional pieces, like what’s called word embedding, which is a really hot topic these days, semantic understanding, phrase embedding, sentence embedding, all these machine learning techniques, recursive neural networks and things like that, to try to help find what’s relevant, and to help build these models that are going to allow reasoning about the future.
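The embedding techniques mentioned here map words, phrases, or sentences to vectors so that semantic similarity becomes geometric closeness, usually measured by cosine similarity. A toy illustration with hand-made 3-dimensional vectors (real embeddings are learned from corpora and have hundreds of dimensions):

```python
import math

# Hand-made toy vectors, purely for illustration; real word embeddings
# (word2vec, GloVe, etc.) are learned and far higher-dimensional.
VECS = {
    "inflation": [0.9, 0.1, 0.0],
    "prices":    [0.8, 0.2, 0.1],
    "football":  [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, ~0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words end up closer than unrelated ones.
assert cosine(VECS["inflation"], VECS["prices"]) > cosine(VECS["inflation"], VECS["football"])
```

This closeness-in-vector-space property is what lets a system judge that a document about “prices” is relevant to a user watching “inflation” even when the exact word never appears.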
AB Excellent. I think reasoning about the future is probably where we’re going to have to leave it, but that’s a great phrase.
MP Thank you very much, Adrian.