Why AI Needs Us More Than We Need AI

With the rapid adoption of AI, it is imperative that companies regularly evaluate their technologies and processes to ensure they are collecting data and training and testing their algorithms ethically and responsibly.

Generative AI (GenAI) has taken our world by storm. From companies integrating the technology into their products, services, and operations to everyday Americans experimenting with these tools for personal use, there is no escaping its broad reach and impact. This widespread adoption has offered a snapshot of the technology's potential business benefits as it optimizes efficiencies, drives down costs, and boosts productivity. People have also seen firsthand how a GenAI chatbot can make their lives easier, helping them prepare for a job interview, answer homework questions, or simply brainstorm.

The exponential rise of GenAI, however, has also come with valid concerns that not all algorithms are being developed and maintained to produce safe, ethical, and accurate outcomes. While bias in AI models is rarely introduced out of malice, it can creep in when the datasets used to train them aren't large enough or representative of our diverse population. When this happens, it can have far-reaching negative impacts on society, especially for underrepresented and minority groups.

The Domino Effect of Bad Data

Bad data is poor-quality data: data that is inaccurate, biased, or irrelevant to the algorithm it powers. So, what exactly are the impacts of 'bad' data?

  • Product Impacts: Algorithms will underperform and produce inaccurate results. A recent survey my company ran found that many Americans have already encountered AI solutions built on bad data: more than two in five (43%) believe bias within an AI algorithm caused them to be served the "wrong content." When bad data seeps into an algorithm, the AI-powered product can no longer be trusted or useful, because users cannot tell which outputs are correct.
  • Organizational Impacts: AI-powered solutions that do not produce accurate results can seriously damage brand reputation and trust, and with them a company's bottom line. Trust is key with consumers, and once a brand loses a customer's trust, it is hard to win back: Gartner found that 81% of customers refuse to do business with or buy from a brand they don't trust.
  • Societal Impacts: AI use cases are growing in popularity across all industries because of their many business benefits, from efficiency to innovation. AI has the power to touch seemingly every aspect of our lives, from bank loan applications to recruiting decisions to health diagnoses, which means biased or inaccurate algorithms can be detrimental to everyone.

The fact remains that GenAI is a powerful technology that will greatly benefit society if developed responsibly. So, what is the solution? A human-in-the-loop approach.

Humans Are Critical in Ensuring Good Data

Contrary to what is often in the headlines, humans are, and will always be, central to ensuring the datasets these algorithms are trained on are accurate and ethical. This is because humans can do something AI cannot: review data with empathy and context.

AI is only as smart as the data it is fed. It relies on terabytes of data to identify a trend, but it lacks the cognitive ability humans use to address a situation encountered for the first time. Humans are needed to train and test algorithms because they can draw on contextual knowledge to quickly identify incorrect outputs, while AI has no internal checks-and-balances system. Notably, our survey found that 49% of Americans don't think AI algorithms can operate successfully without a human as part of the testing process.

'Human-in-the-loop' combines machine learning with active learning: a diverse set of humans, reflective of society to reduce the potential for bias, is involved in the training and testing stages of developing an algorithm. This practice of uniting machine intelligence with human guidance creates a continuous feedback loop that enables the algorithm to produce more accurate results with each iteration. For a successful human-in-the-loop strategy, here are four important stages where humans are needed to produce high-quality results.

  • Data Collection and Creation: To build GenAI algorithms, you need copious amounts of data. Humans play a vital role in the data collection and creation process by ensuring the data that will be fed into the system is high-quality. For data creation, people also play an active role in actually developing the data that informs these algorithms. For example, someone may be tasked with creating data for AI-powered self-driving cars and, to accomplish this, record and transcribe themselves driving specific routes.
  • Data Annotation: Data annotation is the process of labeling information to help machines understand what they are 'seeing.' During this process, it is crucial to have a diverse community of human data annotators labeling the original data, both input data and the corresponding output, to set a strong foundation for the system (a sketch of what a labeled record might look like follows this list).
  • Training: Once the data is correctly labeled, a team of AI specialists feeds the datasets into the system to start training the algorithm. With this data, the algorithm will begin to identify patterns and insights within the dataset so that, when later presented with new data, it can make accurate decisions.
  • Testing and Quality Assurance: During this stage, humans focus on correcting results where the algorithm is not confident in a judgment, is outputting incorrect results, or has uncovered an outlier, a data point significantly different from the others in the dataset. This process is known as active learning and plays a crucial role in mitigating the inaccurate results a machine produces (a minimal code sketch of such a loop follows below).
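
To make the annotation stage concrete, here is a hedged sketch of what a single labeled record might look like. Every field name and value here is an illustrative assumption, not a real labeling platform's schema:

```python
# Illustrative shape of one annotation record. All field names are
# hypothetical; real labeling platforms define their own schemas.
annotation = {
    "input": "dashcam_frame_00412.jpg",                     # original input data
    "label": {"objects": ["traffic_light", "pedestrian"]},  # human-applied output
    "annotator_id": "a-1042",        # supports auditing a diverse annotator pool
    "review_status": "verified",     # second-pass QA before the record is used
}
```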
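And to illustrate the testing-and-QA stage, below is a minimal active-learning sketch in Python, assuming scikit-learn and NumPy. The toy data, the uncertainty-sampling rule, and the `ask_human` function are all illustrative assumptions; in a real pipeline, `ask_human` would route each example to a human annotation tool rather than answer automatically.

```python
# Minimal human-in-the-loop active-learning sketch (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool standing in for unlabeled production data.
X_pool = rng.normal(size=(500, 5))

def ask_human(index):
    # Hypothetical placeholder: in a real workflow this routes the example
    # to a human annotator; here a simple rule answers so the demo runs.
    return int(X_pool[index, 0] + X_pool[index, 1] > 0)

# Seed set: a small batch of human-labeled examples to start from.
labeled = {i: ask_human(i) for i in range(20)}
model = LogisticRegression(max_iter=1000)

for round_num in range(5):
    idx = np.array(list(labeled))
    model.fit(X_pool[idx], np.array([labeled[i] for i in idx]))

    # Uncertainty sampling: confidence = top predicted class probability.
    confidence = model.predict_proba(X_pool).max(axis=1)
    confidence[idx] = 1.0  # skip examples humans have already labeled

    # Route the 10 least-confident examples to a human reviewer, then
    # fold the verified labels back in: the continuous feedback loop.
    for i in np.argsort(confidence)[:10]:
        labeled[int(i)] = ask_human(int(i))

    print(f"round {round_num}: {len(labeled)} human-labeled examples")
```

The design point mirrors the feedback loop described above: the machine handles the cases it is confident about, and humans review exactly the cases it cannot decide.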

With AI's presence in our world growing rapidly and comprehensive regulations not yet in place, it is imperative that companies regularly evaluate their technologies and processes to ensure they are collecting data and training and testing their algorithms ethically and responsibly, and that any partners they work with share that commitment.

Leaders should also take a humanity-in-the-loop approach to their AI development. Humanity-in-the-loop is less a tactical approach than a dedicated effort to ensure AI is being created for the global good: mitigating bias, minimizing hallucination risk, and making AI inclusive and accessible to all so as not to widen the digital divide. Leaders employing a humanity-in-the-loop approach take a 10,000-foot view of the end-to-end AI continuum and are conscious of their personal role and responsibility to operate ethically and to make values-based, inclusive decisions, a goal we all should get behind.

About Sarha Mavrakis

Sarha Mavrakis is the Global AI Director at TELUS International, where she supports its AI Data Solutions function. She has deep expertise in helping organizations build high-quality AI datasets and in guiding leaders through the ethical and responsible use of AI.
