Artificial intelligence (AI) may soon be ubiquitous in next-generation, real-time technology, replacing human intelligence for mundane tasks, but it needs monitoring.
Artificial Intelligence (AI) is in the news all the time. Depending upon the commentator, it is either the next step in the evolution of computing or a great danger. The reality, of course, is likely to depend upon the specific AI in question.
Even though this technology is relatively new on the scene, quite a number of catchphrases are already in use, including AI, machine intelligence, machine learning, and deep learning. Each of these phrases typically refers to automating tasks that are believed to require human intelligence. This type of technology has emerged to help systems serve people's needs better at a lower overall price.
Is there more than one type of AI?
AI can be seen as an outgrowth of earlier forms of computing that were designed to detect outside events and respond to them systematically. As with most forms of computing, there are many different approaches. They break down into two camps: AI built on a clear understanding of the inputs the program will receive and what it should do with them, and AI that must learn from the input and determine for itself what to do.
Some forms of AI focus on a single, narrow task and are useless outside that limited area. Their responses are typically guided by statistical analysis done by developers or data scientists. Other forms of AI are intended for much broader use. Intelligent assistants, such as Apple's Siri or Amazon's Alexa, listen to spoken or written commands, determine the user's intent, and then take action.
See also: AI brings out the real-time side of B2B
Most of these forms rely on the automation detecting patterns in the data, learning from those patterns, and then taking specific actions. In these cases, the automation examines massive data sets to discover patterns. These systems use software modeled on neural networks that, in effect, program themselves. Such projects are often developed iteratively: developers measure the program's outputs against a pre-determined set of metrics, keep the configurations that appear to satisfy the metrics, and discard those that don't. This process continues until the results work well enough to address the requirements at hand.
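That keep-or-discard loop can be illustrated with a minimal sketch. Here the "training and measurement" step is a toy scoring function standing in for a real model evaluation, and the configuration fields (`learning_rate`, `depth`) and threshold are illustrative assumptions, not details from any specific system:

```python
import random

def train_and_score(config):
    # Stand-in for training a model and measuring its output against
    # a pre-determined metric; a real project would train and evaluate here.
    return 1.0 - abs(config["learning_rate"] - 0.1) - abs(config["depth"] - 4) * 0.05

def iterative_search(n_rounds=50, threshold=0.7, seed=0):
    rng = random.Random(seed)
    kept = []
    for _ in range(n_rounds):
        # Propose a candidate configuration.
        config = {
            "learning_rate": rng.uniform(0.001, 0.5),
            "depth": rng.randint(1, 10),
        }
        score = train_and_score(config)
        # Configurations that satisfy the metric are kept; the rest are discarded.
        if score >= threshold:
            kept.append((score, config))
    # Best-scoring configurations first.
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return kept

best = iterative_search()
```

The point of the sketch is the selection pressure: nothing in the loop explains *why* a kept configuration works, which is exactly the opacity discussed below.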
Since the programs learn for themselves from the data sets provided, even their developers don't know how the programs work internally. Let's focus on this case.
Can we know how AI chooses?
In order to gain a better understanding of how an AI program made a choice, we need to know a bit more about how it was developed.
Was it created using a custom tool developed for this specific project, or was it crafted with a commercial, off-the-shelf tool? Many suppliers have developed their own proprietary tools, so enterprises may not be able to learn how the resulting AI programs were created or how they operate.
Since this type of AI program develops its own understanding of the patterns in the data, what’s included in or excluded from these data sets is key to understanding how they work. As with the tools used to develop AI programs, some data sets, such as those used by pharmaceutical companies, are proprietary.
See also: How AI is transforming retail
But why should we care about which tools were used to construct an AI program, what data sets were used to train that program or any of the details concerning how that program was built? When everything works as expected, that understanding may not be necessary. When something doesn’t work, say when an autonomous vehicle crashes, these factors become important.
In the case of autonomous vehicles or weapons, who is accountable when something doesn't work as expected? The manufacturer, the tools developer, the data set supplier, or the enterprise that licensed the technology? Increasingly, governmental and regulatory bodies are demanding access to this information to assess responsibility.
Increasing levels of regulation?
Microsoft points out that everyday citizens might be hurt or find themselves at a disadvantage by AI that doesn't function as intended. Commentators such as Oren Etzioni, writing in The New York Times, believe that "an AI system must be subject to the full gamut of laws that apply to its human operator."
It is clear that regulations and laws are going to appear soon to govern this technology. The European Union’s General Data Protection Regulation (GDPR) is likely to require financial services companies to disclose how decisions impacting consumers were made.
What is increasingly clear is that development and security management teams within enterprises need to know what data sets were used to train AI, how those data sets changed over time, and what the goals of the training process were.
Most enterprise IT organizations already rely on auditing tools to track changes made to their own applications. They are going to expect similar tools to be made available to help them manage AI. Now is the time to start thinking about how such tools should work, who should be responsible for managing this information within the enterprise, and how it should be reported.
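As a thought experiment, one simple building block for such an auditing tool is a record that fingerprints the training data and captures the training goal at a point in time. This is a minimal sketch under assumed requirements; the field names and the schema are hypothetical, not drawn from any standard or product:

```python
import datetime
import hashlib
import json

def fingerprint_dataset(rows):
    # Hash a canonical serialization of the training data so a later
    # audit can verify exactly which data the model was trained on.
    canonical = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def audit_record(dataset_rows, training_goal, tool_name):
    # Hypothetical audit entry: which data, toward what goal, built with
    # which tool, and when. Fields are illustrative only.
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "dataset_sha256": fingerprint_dataset(dataset_rows),
        "training_goal": training_goal,
        "tool": tool_name,
    }

entry = audit_record(
    [{"customer_id": 1, "approved": True}],
    training_goal="predict loan approval",
    tool_name="in-house-trainer-v2",  # assumed name, for illustration
)
```

Because the fingerprint changes whenever the data set changes, a log of such records answers the two questions raised above: what data the AI was trained on, and how that data changed over time.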