
6Q4: Responsible AI Institute’s Seth Dobrin, on Verifying that Your App Does No Harm


Responsible AI Institute’s Seth Dobrin discusses the mission, the challenges, and the opportunities for creating Responsible AI.

Dec 2, 2022

What is responsible AI? Is it something we’ll know when we see it? Or, conversely, something we recognize only when we discover that AI has caused harm rather than the good that was intended?

There is a groundswell of demand for clarification of and protection from AI systems, which are increasingly involved in automating processes that once required human involvement to make decisions. Organizations such as ISO, governments, and the non-profit, community-based Responsible AI Institute (RAII) are responding with certifications, frameworks for affirming compliance, and guidance on how to create and operate AI-reliant systems responsibly.

So, what is responsible AI? A widely accepted definition of responsible AI is AI that does not harm human health, wealth, or livelihood.

CloudDataInsights recently had the chance to interview Seth Dobrin, president of RAII, ahead of the AI Summit New York on the mission, the challenges, and the opportunities to bring together experts from academic research, government agencies, standards bodies, and enterprises in a drive toward creating Responsible AI. 

This year, co-located with AI Summit, the RAISE event (in-person and virtual on December 7th, 5:30 to 7:30 p.m. EST) will recognize organizations and individuals who are at the forefront of this effort.

Note: The interview was edited and condensed for clarity.

Q1: The mission of the Responsible AI Institute, as stated on its website, seems very clear and obvious. Are you finding that you have to go into detail or explain what responsible AI is and why we have a responsibility to uphold it?

For the most part, no one wants to create irresponsible AI. Most people want to create responsible AI, so inherently, that’s not the problem. However, some organizations struggle to make a business case for it, and it gets deprioritized among all of the other things, especially in difficult times, like those we’ve been in for the last few years.

That said, difficult times tend to push organizations harder to do things like this. So we do have to spend a bit of time with some companies, some organizations, helping them understand why it’s important, why they should do it now, that there are business benefits from it, and also how the RAI Institute is different from other, for-profit organizations in this space. We are the only independent non-profit in this space that’s approved to certify against published standards.

Q2: The community-based model is central to your organization. You have the innovators, the policy makers, and the businesses that are trying to conform to responsible AI standards. Do you actively have to bridge the conversation between the researchers and the enterprises?

We’re a fully member-driven organization, and community is a big part of what we do. What the RAI Institute brings forward is not just the very deep expertise of the people we have employed, but it’s also the experience and opinion of the community.

The community includes academics, governments, standards bodies, and individuals who are part of corporations. Since we align to international best practices and standards, we spend a lot of time with policy makers and regulators, helping them understand the best practices and how they can validate that companies and organizations are, in fact, aligned to those regulations.

Q3: Once a Responsible AI framework is adopted, an AI audit takes only weeks, which is a long way from the early days when AI was unexplainable to a certain degree, or simply a black box. What is the technology lift or investment required of companies that want to certify their responsible AI?

Read the rest on CloudDataInsights.com

Elisabeth Strenger

Elisabeth Strenger is a senior technology writer at CDInsights.ai.
