Tech trade body ITI has published a guide on transparency for AI systems, aimed at helping policymakers with artificial intelligence regulations.
Global tech trade association ITI has published its Policy Principles for Enabling Transparency of AI Systems, a guide to help inform decisions made by policymakers in the field of artificial intelligence.
The guide discusses how artificial intelligence developers, distributors, and operators can be transparent with users about when an AI system is active, and breaks down the different meanings of terms such as transparency, explainability, interpretability, and disclosure.
It also explains to policymakers the reasons for being transparent with users, and which AI systems require the operator to inform users beforehand. High-risk systems, which can have real-life effects on the end user, are prioritized in the guide, as transparency is more necessary for them than for non-risk or minimal-risk systems.
“Transparency of AI systems has rightfully been a prime focus for policymakers in the U.S. and across the globe,” said ITI’s President and CEO Jason Oxman. “Regulations must effectively mitigate risk for users while preserving innovation of AI technologies and encouraging their uptake. ITI’s Policy Principles for Enabling Transparency of AI Systems offer a clear guide for policymakers to learn about and facilitate greater transparency of AI systems.”
Many legislative bodies, including the European Union, are preparing or have published documentation on AI regulation. It is expected that within the next five years, systems considered too high risk, such as affect recognition and government biometric identification, will be banned in certain parts of the world. High-risk systems, such as those that assess mortgage applications or calculate credit scores, will receive more intense scrutiny from national and international bodies.
Publishing this guide may give policymakers across the spectrum advance guidance on how to navigate this new maze of regulations.