Responsible Generative AI Consortium Established

The Responsible AI Institute has launched a new consortium dedicated to developing generative AI tech safely.

The Responsible AI Institute has announced the formation of the Responsible Generative AI Consortium, aimed at accelerating the development of responsible AI practices in generative systems through collaborative learning, policy advocacy, and testbeds that trial these systems in real-world contexts.

Generative AI is responsible for the huge surge in interest in AI, with mentions of AI on earnings calls skyrocketing in the first quarter of 2023. OpenAI’s ChatGPT is the face of this research, but thousands of projects are getting the green light under the ‘generative AI’ label, even some that have little to do with the technology.

Much of the internal functionality of these generative systems remains unknown, with OpenAI and others becoming more secretive about the algorithms and models that power these applications. This is where the Responsible Generative AI Consortium comes in: a collection of corporations, technology providers, and academic experts brought together to improve understanding, development standards, and safety.

“We are in the middle of rapid advancements and adoption of generative AI and navigating the responsible AI landscape is proving to be a formidable challenge for all,” said Manoj Saxena, founder and chairman at Responsible AI Institute. “Now, more than ever, we need to work together to make AI safe and aligned with human values. The creation of our consortium, with its practical testbeds and GenAI Safety Ratings, is a vital step towards our mission of helping AI practitioners to build, buy and supply safe and trusted AI systems.”

The first consortium in the series will focus on healthcare, which is expected to be a vital industry for generative AI. Researchers are already using generative systems to advance drug discovery and open new lines of research, as the systems may cut down on the years spent searching for the right proteins and structures. On a more personal level, healthcare providers may use chatbots as the first point of access to a hospital or doctor, letting a patient record their symptoms and receive advice on the next steps.

A wide range of experts is involved in the consortium, including the NHS, Harvard Business School, the Turing Institute, and the University of Cambridge. These organizations will work in collaboration with the Responsible AI Institute to develop a ratings system, similar to a FICO score, that evaluates metrics such as bias detection, hallucinations, privacy protection, transparency, and accountability.
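To make the FICO-score analogy concrete, here is a minimal sketch of how a composite safety rating might be computed from per-metric scores. The metric names, weights, and 300–850 scale below are illustrative assumptions, not the consortium’s published GenAI Safety Ratings methodology.

```python
# Hypothetical sketch of a FICO-style composite safety rating.
# Metric names, weights, and the 300-850 scale are illustrative
# assumptions, not the consortium's actual GenAI Safety Ratings.

METRIC_WEIGHTS = {
    "bias_detection": 0.25,
    "hallucination_resistance": 0.25,
    "privacy_protection": 0.20,
    "transparency": 0.15,
    "accountability": 0.15,
}

def safety_rating(scores: dict[str, float]) -> int:
    """Combine per-metric scores (each 0.0-1.0, higher is safer)
    into a single FICO-style rating on a 300-850 scale."""
    if set(scores) != set(METRIC_WEIGHTS):
        raise ValueError("scores must cover exactly the rated metrics")
    weighted = sum(METRIC_WEIGHTS[m] * scores[m] for m in METRIC_WEIGHTS)
    return round(300 + weighted * 550)

# Example: a system that protects privacy well but hallucinates often.
print(safety_rating({
    "bias_detection": 0.8,
    "hallucination_resistance": 0.4,
    "privacy_protection": 0.9,
    "transparency": 0.7,
    "accountability": 0.6,
}))  # -> 671, a mid-range rating
```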

“The responsible use of AI in the healthcare industry has immense potential to improve human well-being, accelerate scientific discoveries and transform healthcare as we know it for patients and providers,” said Dr. Hatim Abdulhussein, national clinical lead for AI and Digital Workforce at NHS England. “It is vital to work collaboratively across disciplines, and engaging with this Consortium is necessary to understand the guardrails to support the development of safe and ethical AI that will build confidence in this technology for both the healthcare workforce and patients.”

While generative AI looks to be the biggest technology trend of 2023, it is not yet clear what lasting effect it will have on the industry as a whole. Some research projects have ambitious aims, but we are still in the early stages of figuring out what these large language models, coupled with best-in-class algorithms, can achieve when applied to real-world problems. This consortium should shed more light on that potential while reducing the likelihood of abuse.

David Curry

About David Curry

David is a technology writer with several years’ experience covering all aspects of IoT, from technology to networks to security.
