Governing the Use of AI

When it comes to governing the use of AI, compliance starts from the top. It is vital that AI policies are not developed in a vacuum and that company leadership is involved in their development, so that the effort supports the strategic direction of the organization itself.

Until fairly recently, we were not really focused on the use of AI. Yes, it already existed, and many of us used it in our daily lives without really thinking about it. Apple's Siri, which relies on NLP and machine learning, falls under the category of narrow AI. The GPS in your auto, while not AI in and of itself, uses AI to determine the best way to get from point A to point B. Then came the release of ChatGPT to the general public, and the controversy over the use of AI began in earnest.

Even in this short period of time, we have seen AI used more frequently, from 'deep fakes' to an attorney using AI to develop a legal argument that was presented in court. (Note, however, that the cases cited by the AI tool were fabricated, and the attorney failed to verify them.) I will leave it to the legal pundits and ethicists to argue the finer points of the use of AI in business and schools, but understand that this is one of those cases where it is going to be very difficult, if not impossible, to put the genie back in its bottle.

How, then, are we supposed to govern the use of AI in business?

This needs to be done in much the same way that any other asset or tool is governed. We already have rules associated with the use of data: Data Governance 'regulates' who has access to specific data points, where and how that data can be changed or used, and how it is defined. We have licenses to determine who has access to specific tools and applications. We provide badges, IDs, and passwords to safeguard both our physical and digital workspaces. The best approach, in this case, is to think of AI in the same way we would think of any other tool or asset that our user community has access to.
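To make that parallel concrete, here is a minimal sketch of what such a control might look like in practice, assuming a hypothetical internal registry of approved AI tools and the roles licensed to use them. The tool names, roles, and function below are purely illustrative, not a reference to any real system.

```python
# Hypothetical sketch only: treat generative AI tools like any other governed asset.
# The registry, role names, and tool names are illustrative assumptions.

APPROVED_AI_TOOLS = {
    # tool name -> roles licensed to use it
    "chatgpt": {"marketing", "engineering"},
    "internal-summarizer": {"legal", "engineering"},
}

def may_use_tool(user_role: str, tool: str) -> bool:
    """Return True only if the tool is approved and the user's role is licensed for it."""
    licensed_roles = APPROVED_AI_TOOLS.get(tool)
    return licensed_roles is not None and user_role in licensed_roles

# An unapproved tool is denied, just as an unlicensed application would be.
print(may_use_tool("marketing", "chatgpt"))          # True
print(may_use_tool("marketing", "some-new-ai-app"))  # False -> route to an approval workflow
```

The point is not the code itself but the pattern: access to an AI tool is granted, tracked, and revoked the same way a software license or a building badge would be.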

See also: Data Governance Concerns in the Age of AI

Policies to help govern the use of AI

I started thinking about a series of policies that I might put into place in order to govern the use of AI, but then I thought that I might go one step further and ask ChatGPT itself what it thought a standard set of rules might be. I prompted with, “Set out a workplace policy for corporate use of generative AI tools.” What follows is ChatGPT’s response:

Purpose: The purpose of this policy is to ensure responsible and ethical use of generative AI tools in the workplace. Generative AI tools refer to software programs that use machine learning algorithms to generate new content, such as text, images, and videos.

Scope: This policy applies to all employees, contractors, and third-party vendors who use generative AI tools on behalf of the company. 

Guidelines for Use:

  1. All generative AI tools used by employees must be approved by the IT department or other designated approval authority before use.
  2. Generative AI tools should only be used for legitimate business purposes, such as content creation, data analysis, and process automation.
  3. Employees should not use generative AI tools to create fake news, defamatory content, or other forms of content that violate ethical or legal standards.
  4. Employees should not use generative AI tools to replicate or modify existing content without permission from the original author or owner.
  5. Employees should take reasonable steps to ensure the accuracy and reliability of the output generated by generative AI tools, such as testing the output against known data and using multiple sources to verify information.
  6. Employees should report any issues or concerns related to the use of generative AI tools to their manager or IT department immediately.

Data Protection:

  1. Employees must protect any personal or confidential information generated or used by generative AI tools in accordance with the company’s data protection policies and relevant laws and regulations.
  2. Employees should not use generative AI tools to generate or process sensitive data, such as health records, financial data, or personally identifiable information, without the proper authorization and security measures in place.
  3. Employees must comply with all data privacy and security measures put in place by the IT department or other designated authority.

Training and Awareness:

  1. Employees should receive adequate training and guidance on the proper use of generative AI tools.
  2. The company should provide regular awareness campaigns to educate employees about the ethical and legal considerations of using generative AI tools.
  3. The company should also provide guidelines and best practices for using generative AI tools in a responsible and ethical manner.

Non-Compliance:

  1. Non-compliance with this policy may result in disciplinary action, up to and including termination of employment or contractual agreement.
  2. Any violations of this policy should be reported to the appropriate authority, such as the employee’s manager or the IT department.

This policy is intended to provide guidance for responsible and ethical use of generative AI tools in the workplace. It should be reviewed and updated periodically to ensure it remains relevant and effective.
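As an aside, the same exercise can be repeated or adapted programmatically rather than through the chat interface. A minimal sketch using the OpenAI Python SDK is shown below; the model name is my assumption, and the client reads an OPENAI_API_KEY environment variable, so treat it as an illustration rather than a prescribed setup.

```python
# Illustrative sketch: issue the same policy-drafting prompt through the OpenAI API.
# Assumes the `openai` Python package (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model could be substituted
    messages=[
        {
            "role": "user",
            "content": "Set out a workplace policy for corporate use of generative AI tools.",
        }
    ],
)

print(response.choices[0].message.content)
```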

See also: Why AI Needs Us More Than We Need AI

What’s next? 

All in all, not a bad start. Many of the areas I would have considered are included, defined in sufficient detail and in plain enough language for the user community to follow. The next question is: does it go far enough?

There are some items here that, perhaps, are not definitive enough. The last paragraph indicates that the policy should be reviewed and updated periodically. A true policy should state that it is to be reviewed and updated every X months, quarters, or years, or at least as often as other corporate policies.

There are other areas that I might add to such a policy. If you are using the tool to generate text, reports, and so on, then give credit (or blame) where credit is due. Part of the policy might be to disclose that the tool was used to create text or imagery, or to analyze data, for a specific report or use. You might also want a statement regarding 'trust, but verify.' As with the attorney mentioned earlier, if you are going to rely on something generated by AI, then do the research and make sure it is right, or take the blame rather than pass it on to the tool.

Tool access should also be addressed: both who has access to the tool internally and whether, and how, access is extended to those outside of the organization.

Of course, compliance starts from the top. It is vital that AI policies are not developed in a vacuum and that company leadership is also involved with the development of the policy to ensure that it supports the strategic direction – both business and ethical – of the organization itself.

Note also that the US Government has already defined a number of policies for the use of AI, as have major organizations such as YouTube, Amazon, and Microsoft. Perhaps this is something that your organization should be considering as well.

About Aaron Gavzy

Aaron Gavzy is a Lead Data and Business Strategist focusing on Global Data, Analytics, AI, and Advisory Services. He has over 35 years of demonstrated experience in the development and delivery of innovative strategic directives for solving business and tactical issues across a variety of industries. He is a recognized thought leader in the areas of Data Strategy/Governance/Privacy as well as Analytics and Organizational Change. A speaker and author who has led consulting practices at large multi-national consulting firms, he has also served as the CIO of a Health Care firm and the CFO of an Advertising Agency.
