In this RTInsights Real-Time Talk podcast, RTInsights editor Joe McKendrick talks with Michelle Zhou, co-founder and CEO of Juji, about expanding the use of artificial intelligence (AI) by making it more accessible using AI itself. The conversation covers how she built on her work with IBM Watson to focus on the use of no-code, reusable cognitive AI platforms to democratize AI assistants/chatbots and bridge the growing AI divide.
Joe McKendrick: Hello, this is Joe McKendrick, and welcome to RTInsights’ Continuous Intelligence podcasts, the next in our series. I am really thrilled to be joined today by Michelle Zhou, the CEO and co-founder of Juji. You’re a leading voice, thinker, and doer in the artificial intelligence field, and we look forward to learning a bit about what’s happening and how we can move forward with things.
And for starters, Michelle, why don’t you tell us a bit about your journey? I know you’ve been with IBM Watson and you’ve done a lot of work in that area as well. So you’ve been in the AI field for a number of years, since things really started getting rolling. Tell us a little bit about your journey. How did you get to where you are today?
Michelle Zhou: Sure. Thank you, Joe, for having me. I’m Michelle Zhou, and I actually started my AI journey when I was a graduate student at Columbia University, doing my Ph.D. there. One of the things I have always been fascinated by is how you can use a machine to help people do something they don’t like to do, or are not good at doing. So my thesis was on creating an AI assistant to help people create information graphics, because not everybody is a designer. Not everybody can design beautiful information graphics, but everybody wants to understand the data, wants to interpret the data. So during my Ph.D. study I created AI to look at the data, analyze the data, and automatically create visual explanations of it. That has actually been used by doctors and nurses to understand patient data, and by network analysts to understand their networking data and networking performance.
So from there, when I graduated, I joined IBM Watson Research Center. Before that, the system I was working on wasn’t interactive, which means that if you have a set of data, a user’s tasks, and their visual preferences, you automatically generate the graphics. But what if the users, once they see the visual illustration, have further questions? What if they want to see different parts of the data? So I started a project called conversational AI for data analytics. That was almost 15 or 20 years ago now, so long ago. The conversational interface is there to help people use natural language to inquire about the data.
So for example, people might ask, “Oh, could you show me the product?” Let’s say we’re buying insurance products: “Could you show me the home insurance products for houses under a million dollars,” or something like that. “What if I buy it with car insurance?” You can see this is context-sensitive; it lets people inquire about the data, and maybe walk through the data, purely in natural language. The system takes the user’s inquiries, parses them, understands what the user is asking for, and automatically comes up with the data in the right form to explain to people: here is the data you inquired about. So that’s the one.
What’s interesting is that in this kind of project, we only cared about a user’s preferences about the data and their preferences about the presentation, but not individual differences. When I say individual differences, I mean, for example: what’s your personality like? What’s your cognitive style, whether you like a more story-like data story or a more fact-based, number-driven data story? We didn’t take that into consideration.
So then I started another project at IBM that became IBM Watson Personality Insights, which means we wanted to use user behavioral data, like communication data, to better understand individual differences. For example, are you an extrovert or an introvert? Are you very collaborative, or are you more of a lone learner? Then I started Juji as a startup with my co-founder, who happens to be both a computer scientist and a psychologist, and who co-invented IBM Watson Personality Insights with me.
So we started this company. We wanted to create, really, a new generation of AI assistants. We call them cognitive assistants. Basically, they should interact with people and help organizations augment their workforce by automating different types of tasks, especially pretty time-consuming, labor-intensive tasks that humans really don’t like to do. For example, having a conversation with strangers, which not everybody wants to do, or nudging people to do something they don’t like to do, such as finishing homework, doing exercise, or checking their health status every day. We should leave that to the AI assistant. So that’s where we are today.
Joe McKendrick: That’s really fascinating. As a consumer, I use a Google Assistant here at the office and I have Alexa at home. But it sounds like what you’re working on is more advanced than, I guess, the relatively simple questions or requests for songs or whatever that a consumer would use now with a personal assistant.
Michelle Zhou: Correct. I’m glad you mentioned that contrast. The ones people normally use, like Alexa or Google Home, are more what we call user-driven interaction: the user will say, “Could you tell me what the temperature outside is like?” or, “Could you help me find the song I like?” It’s more user-driven, and the systems are very passive. In our case, we want to support truly interactive AI assistants. So it’s not just user-driven; it can be driven by both the machine and the user. Here’s a very simple example.
So let’s say you go to a university’s website, and you want to look for an online program to apply to. The assistant can actually give you a tour of the online program. Or take Zoom, which we just talked about, because not everybody knows Zoom very well. Let’s say an assistant gives you a tour, but along the way, during the tour, you can ask any question. For example, the Zoom assistant will tell you, “Hey, you can start here, test your voice, look at your picture.” And you ask, “Oh, I don’t like to show my background. What should I do?” In this case, the assistant says, “Oh, you can do this: you can change your background or maybe blur your background,” and then continue the tour. It’s almost like what we’re doing right now. It’s a true conversation.
So that’s what Juji has really developed. That’s why we call them cognitive AI assistants; this is what’s called cognitive intelligence. Unlike regular AI, cognitive intelligence means not just having language skills, as you have experienced with Alexa or Google Home; these assistants also have what we call advanced human skills, especially soft skills. One example of a soft skill is what we call active listening. That means the AI assistants not only understand what users are saying, but actually acknowledge their emotions, paraphrase what they say, and summarize what they say, being very attentive as well as very attuned to what the user cares about, and thereby build a trustworthy and empathetic conversation. In this case, think about how you really converse with a person; you can have almost a very personable relationship.
Joe McKendrick: We hear about this, for example, with call centers or contact centers: we call in and we get a virtual assistant. And you hear nowadays that they can sense if a customer is angry or frustrated, for example. They’ll either cut them over to a live operator, or I guess attempt to address the frustration. And it sounds like you’re building on that type of application, right?
Michelle Zhou: Right. Actually, we have already gone beyond that. The first skill, what we call active listening, means sensing the user’s sentiment and emotions and being able to rephrase and paraphrase them. The next example, and I’m glad you’re onto this one, we call “reading between the lines.” What it means is, think about the way you speak with a psychologist: psychologists always try to understand what’s beyond what you just said. What are your unspoken needs and wants? What is your emotional signature? It’s not just about the sentiment or emotion shown in the moment; it’s what the signature looks like. That’s why we call it reading between the lines.
So for example, our AI assistant dynamically analyzes a user’s conversational text and tries to detect what we call individual differences. Individual differences include what your passions or interests are and what you’re good at. Some people are very good at logical reasoning; some people are very good at storytelling. And there’s how people handle life’s challenges: some people under pressure are very calm, and some people under pressure can be a little bit off. Because you understand the underlying individual differences, the unique characteristics of each user, the assistant can better help each user.
To give you the example we talked about earlier: a prospective student who’s looking for an online program may be worried about the financial burden, because for an online program you need to pay tuition. If the assistant detects such unspoken needs and wants, it can really guide them. Especially for a person who is very worried and also wants to be very independent, it can say, “Hey, you know what? We have lots of financial aid programs. We have scholarships, so we can help you in your journey of getting a degree or maybe advancing your career.” So you can see this being truly personalized: if another person has the same worry but is much more methodical, much more careful, let’s say, then you would use a different way of presenting the information.
It might say, “Now I’m going to present you with different types of financial aid options. You can choose the one that best suits your lifestyle, or maybe your work style.” So you can see you can truly personalize, even for people whose surface needs are the same, wanting to find a program to enroll in, because underneath they have their own psychological needs and wants.
Joe McKendrick: It almost sounds like the cognitive AI assistants also adopt their own personality, their own set of behaviors to adapt to the…right?
Michelle Zhou: Actually, we did a lot of work, a lot of study, on that. They’re not adopting one yet; we’re still in research, because the research has shown conflicting results. Some research, including our own, shows that people like to interact with AIs that have a similar personality: if I’m very extroverted, I like to interact with an AI with an extrovert personality. But some research actually contradicts that, showing that people like to interact with an AI that has the opposite, what we call complementary, personality: if I’m very chatty, I prefer an AI that’s not very chatty, that’s more reserved. That’s why we haven’t put it into production yet; we’re still trying to figure out which way users prefer. It requires a little bit more research in that particular regard.
Joe McKendrick: Yeah. You talk about the democratization of AI, which is really a great concept. Do you foresee AI being part of smaller-footprint devices, say our smartphones? Will people interact through smartphones, with some AI on them? Or maybe devices that are integrated into other systems? Is that something you’re looking at as well?
Michelle Zhou: Yes. Correct. Actually, you touched upon one aspect of democratizing AI. Think back to probably the 1970s and what we called democratizing computing. Before that, IBM had these mainframe computers, or maybe smaller computers, and people really couldn’t afford to buy them, because they were too expensive. Not just that; the second part is that not many people could use that kind of computer, because they couldn’t program. They didn’t know the programming languages, so they couldn’t really use them. With the advent of personal computers, the PCs and the Macs, that really democratized computing: almost everyone, every company, can now afford to buy a computer, and everyone with very little knowledge, who is not a programmer or an expert in computer science, can operate a computer.
We have a very similar idea for democratizing AI. First, we should see AI run on any type of device, including the smartphone; we’re already doing that. The second part, beyond that, is that we want to enable anyone, literally everyone, as long as they can do PowerPoint and spreadsheets, to set up, deploy, and manage a custom AI assistant, like I just said, with all of the cognitive intelligence, on their own. No coding; they don’t need AI expertise. They don’t need training data, because we have already done the training, so they can quickly customize it, deploy it, and manage it. That’s what we really mean by democratizing AI: they can just adopt it, quickly customize it, and use it for their benefit.
Joe McKendrick: Wow. That sounds pretty exciting. So someone like me, a person who doesn’t have a technical background, could begin to set up these types of applications for customers, then?

Michelle Zhou: You should, yeah.
Michelle Zhou: Joe, do you do PowerPoint? You know PowerPoint, you know spreadsheets. We made the barriers to entry really, really low, literally meaning that if people can do PowerPoint and spreadsheets, they should be able to use our platform to create a very powerful AI assistant that is customized to their context and their task. Most of our users are, for example, recruitment specialists, marketing managers, product managers, and user researchers. That means they’re definitely not computer scientists. They don’t know how to program, and they don’t need to know how to program. They are basically general knowledge workers, and they are able to set up a very powerful AI assistant on our platform.
Joe McKendrick: That’s wonderful. And how do you see … As you go forward with this, will there be a Juji-branded product that customers will be able to download or buy? Or will you be working behind the scenes with other application providers to build that? What will we be seeing in the near future from you folks?
Michelle Zhou: Okay. I think both. One way is that clients already come to us and just use our platform to create a custom AI; they deploy it, and we host it. The other is that we also partner with other companies, so they become our channel partners, and their customers use their technology in conjunction with ours to create an AI assistant. For example, with voice: at Juji, we don’t do voice. So someone who specializes in voice recognition and text-to-speech (TTS) can combine that with our technologies to create a very smart cognitive AI assistant with a voice as well, or even a face. We can combine all those technologies together. So we serve as the conversational AI engine, the cognitive engine, if you will, for those potential partners. In the meantime, people who just want a text-based AI assistant can come directly to our platform and use it.
Joe McKendrick: Are you close to passing the Turing Test, where someone may not be able to distinguish the AI from a human?
Michelle Zhou: Somebody said that. But we don’t know whether we should use that as the standard to test quality. The reason, and I’m not sure if you’ve heard about this, is that back in the 1970s there was a chatbot built by a professor that passed the Turing Test, the first one to do so. It passed because it imitated a patient with a psychological disorder, so nobody knew what it was talking about. So I’m not sure whether that’s a good criterion or not.
I think our criteria are more concrete. Can the AI actually help you finish your task? Can the AI really deliver user satisfaction? I think that’s more practical and also more measurable from a business point of view. If we help, let’s say, a university serve its prospective students, existing students, or even graduates, has this AI actually helped? What’s the outcome of the help? Do they have more enrollments? Yes, they have actually seen that. Do they have a higher success rate and retention rate for students? Yes. Do more of their graduates and alumni come back and continue their education? That’s very concrete. I would call it success outcomes, or the usefulness of the AI, versus just passing a test.
Joe McKendrick: And one of the concerns with AI is always the data, the amount of data that’s needed; with big data, for example, you need large datasets to train models, and so forth. How do you see that happening? Does what you’re working on require large datasets?
Michelle Zhou: Precisely. This is a great question. That’s why I was talking about democratizing AI, because many organizations don’t have that kind of data. They don’t even have that kind of AI yet. That’s why we, as a platform company, have been generating our own data, collecting our own data. So yes, our models are trained on massive amounts of data, but because we have already done that training, we can just let other people reuse it. It’s almost like a transfer of intelligence.
I was on another call just last week, and people were asking me about this. I said the beauty of what we’re working on, what Juji is doing, is this: it’s not that you have to teach the AI everything from scratch, skills like reading between the lines. We actually embed that intelligence, so when you adopt the AI, it comes born with that intelligence. We call it built-in intelligence. So we can really transfer intelligence from one assistant to another.
Another example: we’re working with universities to help their recruitment programs. You get lots of prospective students’ questions there, and that data can actually be used for other universities as well. When I say the data, I don’t mean the answers per se, but the questions: how students ask and phrase a question. On top of that, we automatically generate more training data, so the universities don’t need to do it. When they come to us and say, “Hey, we don’t have the data,” we tell them not to worry about it. We already have it, so you can jumpstart your AI assistant. That’s exactly what I meant about democratizing AI: you package the intelligence, you pre-build the intelligence, so other people can adopt it and reuse it instantly.
Joe McKendrick: Just like scientific research, you can build on existing research and keep improving things. Right?
Michelle Zhou: Yeah. It’s almost like a kid growing up. The kid starts out with good intelligence, and once this kid has gained more intelligence, you don’t just keep it; you transfer this kid’s intelligence to another kid, so the other kid doesn’t need to learn from scratch. That’s very powerful.
Joe McKendrick: It is. Absolutely. And Michelle, what do you see happening over the next 5 to 10 years? What are you looking forward to seeing happening? What will the world look like by 2025 or even 2030, especially with your technology?
Michelle Zhou: I do think we’re heading toward a more optimistic version of the scenario in the movie Her. You remember the movie Her?
Joe McKendrick: Yes. Yes. Great movie.
Michelle Zhou: Right? This means that your AI will perhaps know more about you than you know yourself, and your AI will know what you want before you even know what you want. For example, you need to save money; the AI will already know in advance that you need to save money. Or you need a new degree in order to be more employable; the AI will probably know that before you do. That’s what I see: a true personal assistant, a personal companion, in this case an AI companion, who can really understand who you are and what your needs and wants are, and help you in the best way to benefit you. That’s why we also get to the topic of responsible AI, because with that level of understanding, if we don’t enforce responsible AI and this technology falls into bad people’s hands, it could be abused and could have bad consequences. That’s why we also instill this sense of responsible AI, which means we want to make sure AI helps people in the best way to benefit them.
Joe McKendrick: Wonderful. Wonderful. The work you’re doing in that area is absolutely moving AI in a positive direction to benefit people, and we really appreciate you sharing it with us on our podcast today. Again, I’m speaking with Michelle Zhou. She’s the CEO and co-founder of Juji. Thank you very much, Michelle, for joining us today. We really enjoyed having you on.
Michelle Zhou: Thank you, Joe, for having me. Thank you. Bye.