Responsible AI Institute’s Seth Dobrin discusses the mission, the challenges, and the opportunities for creating Responsible AI.
What is responsible AI? Is it something that we'll know when we see it? Or, conversely, will we only recognize it when we discover that an AI causes harm rather than the good that was intended?
There is a groundswell of demand for clarification of and protection from these AI systems that are more and more involved in automating processes that used to require human involvement to make decisions. Organizations such as ISO, governments, and the non-profit, community-based Responsible AI Institute (RAII) are responding with certifications, frameworks for affirming compliance, and guidance on how to create and operate systems that rely on AI responsibly.
So, what is responsible AI? A widely accepted definition of responsible AI is AI that does not harm human health, wealth, or livelihood.
CloudDataInsights recently had the chance to interview Seth Dobrin, president of RAII, ahead of the AI Summit New York on the mission, the challenges, and the opportunities to bring together experts from academic research, government agencies, standards bodies, and enterprises in a drive toward creating Responsible AI.
This year, co-located with AI Summit, the RAISE event (in-person and virtual on December 7th, 5:30 to 7:30 p.m. EST) will recognize organizations and individuals who are at the forefront of this effort.
Note: The interview was edited and condensed for clarity.
Q1: The mission of the Responsible AI Institute, as stated on its website, seems very clear and obvious. Are you finding that you have to go into detail or explain what responsible AI is and why we have a responsibility to uphold it?
For the most part, no one wants to create irresponsible AI. Most people want to create responsible AI, so inherently, that's not the problem. However, some organizations struggle to make a business case for it, and it gets deprioritized among all of the other things, especially in difficult times, like where we've been for the last few years.
That said, difficult times also tend to push organizations harder to do things like this. So we do have to spend a bit of time with some companies and organizations, helping them understand why it's important, why they should do it now, and that there are business benefits from it. We also explain how the RAI Institute is different from the for-profit organizations in this space: we are the only independent non-profit that's approved to certify against published standards.
Q2: The community-based model is central to your organization. You have the innovators, the policy makers, and the businesses that are trying to conform to responsible AI standards. Do you actively have to bridge the conversation between the researchers and the enterprises?
We’re a fully member-driven organization, and community is a big part of what we do. What the RAI Institute brings forward is not just the very deep expertise of the people we have employed, but it’s also the experience and opinion of the community.
The community includes academics, governments, standards bodies, and individuals who are part of corporations. Since we align to international best practices and standards, we spend a lot of time with policy makers and regulators, helping them understand those best practices and how they can validate that companies and organizations are, in fact, aligned to those regulations.
Q3: Once a Responsible AI framework is adopted, an AI audit takes only weeks, which is a long way from the early days when AI was unexplainable to a certain degree, or simply a black-box phenomenon. What is the technology lift or investment that's required of companies that want to certify their responsible AI?
Read the rest on CloudDataInsights.com