Generative AI: When Reality is Uncertain, Authenticity is Key

As enterprises seek to unlock the opportunities of synthetic reality created by generative AI, building trust will be critical.

AI is already a necessity for businesses looking to make sense of huge volumes of data and drive better business outcomes. But as enterprises push AI into more collaborative and creative areas, they are, purposely or not, becoming architects of the "unreal" world. Synthetic realness, where generative AI output convincingly reflects the physical world, is everywhere: in the chatbots customers use to connect with brands and in the deepfakes used as disinformation. And as related technologies proliferate, the lines between reality and unreality will continue to blur.

For example, many people became aware of the power of deepfakes when CBS News' "60 Minutes" and other news outlets reported on some very realistic-looking videos purportedly showing the actor Tom Cruise playing guitar. While the videos fooled many, including Justin Bieber, they did not actually feature the actor; they were the creation of a Belgian visual effects artist.

Unreal data, real benefits

As discussed in the Accenture Technology Vision 2022 report, advances in generative AI are driving the creation and use of synthetic data: datasets that are incredibly realistic but, ultimately, not rooted in reality. That is not to say synthetic data is inherently deceptive; on the contrary, unreal data can be shared freely, maintaining statistical usefulness while protecting confidentiality and privacy.

With the onset of the unreal, the simplistic formula of "real = good, fake = bad" unravels. Every case where AI is used to deceive is matched by a positive case where its very unreality is a benefit. For instance, synthetic data can be deliberately manipulated to increase diversity in datasets, thereby helping to counter the historical biases that undermine many real-world datasets.
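
To make the mechanics concrete, here is a minimal sketch, not a method from the report, of both ideas: synthetic records are sampled from statistics estimated on real data, so no real record ever leaves the data owner's hands, and an underrepresented class is oversampled to counter the imbalance. The simple Gaussian model is a stand-in for the stronger generative models used in practice, and all names and numbers are hypothetical.

```python
# Minimal sketch: share synthetic records instead of real ones, and
# rebalance classes while doing so. Illustrative only.
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy "real" dataset: two numeric features per record, with class B
# underrepresented (a stand-in for a historically biased dataset).
real_class_a = rng.normal(loc=[50.0, 1.0], scale=[10.0, 0.2], size=(900, 2))
real_class_b = rng.normal(loc=[65.0, 1.4], scale=[8.0, 0.3], size=(100, 2))

def synthesize(real: np.ndarray, n: int) -> np.ndarray:
    """Fit a Gaussian to the real records and sample n synthetic ones.

    Only the estimated mean and covariance are used downstream, so the
    synthetic rows are statistically similar but contain no real record.
    """
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n)

# Share a balanced synthetic dataset: same statistical shape, equal classes.
synthetic = np.vstack([
    synthesize(real_class_a, 500),
    synthesize(real_class_b, 500),  # oversampled to counter the imbalance
])

print("real class ratio:      900:100")
print("synthetic class ratio: 500:500")
print("class A mean, real vs. synthetic:",
      real_class_a.mean(axis=0).round(2), synthetic[:500].mean(axis=0).round(2))
```

Note that naive statistical sampling like this is not by itself a formal privacy guarantee; production systems pair generative models with techniques such as differential privacy before sharing data.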

A question of trust

As enterprises seek to unlock the opportunities of synthetic reality, building trust will be critical: currently, 99% of executives surveyed report concern over deepfakes and/or disinformation attacks. This is likely because malicious deepfakes and disinformation have already damaged the reputation of synthetic AI, and as a result, confidence in broader AI applications is shaky too. False news online tends to diffuse faster and further than the truth, in part because social media algorithms prioritize engagement. Combine that built-in amplification with the convincing output of generative AI, and you have a real problem on your hands.

The biggest threat to enterprises realizing the advantages of the synthetic world is the actors who use it maliciously. Phishing attacks are a case in point. Businesses rely in no small part on employee training and awareness to avoid falling victim to phishing, something that will become much harder when a threat actor can train an AI model on a CEO's emails to generate text that sounds precisely like them. A scammer could even convincingly replicate a business's brand, with the right tone, images, and social media presence. If customers are duped, they will blame the business. In the world of the unreal, enterprises' entire reputations are at stake.

The power of authenticity

How can enterprises use generative AI in a way that overcomes these challenges?

There are, and will be, profound questions to answer as synthetic AI becomes more widespread. Just take the controversy surrounding "Roadrunner," a documentary about the late Anthony Bourdain. It uses an AI voice, trained on recordings of Bourdain, to read passages that he had written but never spoken aloud. The filmmakers could not obtain his consent, and they did not disclose the use of an AI voice in the movie. Was this ethical?

These sorts of questions will differ by company, use case, and regulation. However, for businesses that are building up their unreal capabilities, it's time to start asking them. And it is our belief that, at least in part, the answers lie in authenticity. While synthetic realness can sow distrust and discord, it also has the power to improve human relationships. One Yale study found that the performance of a group composed of human and robot members improved when the robots were given human traits such as wit and imperfection.

The approach

Being real should not be the overriding goal; sometimes the unreal delivers beneficial outcomes. Companies should instead make authenticity their North Star: being genuine in a way that others can attest to. That means using generative AI in a way that takes heed of provenance, policy, people, and purpose. By abiding by these four tenets, businesses will gain confidence in their decisions to trust others, and will use generative AI in such a way that others trust them:

  • Establishing provenance will be critical as businesses increasingly contend with deepfakes and disinformation, as will enabling others to establish provenance when they interact with your business. Distributed ledger technology (DLT) will be a key technology in this respect. For example, Project Origin is tackling the spread of disinformation by using DLT to establish provenance from publishing to presentation (a minimal sketch of hash-based provenance follows this list).
  • Organizations must take stock of the policies that they’re required to comply with. For instance, the EU has drafted legislation to regulate “trustworthy AI” with the purpose of protecting the rights of citizens. Much of this space is yet to be defined, so where there isn’t guidance, businesses will need to define their own policies based on industry, products, customers, and values. If they’re proactive in sharing what works and what doesn’t, those businesses can (and likely will) be involved in shaping the future of the unreal world.
  • When it comes to people, having clear governance structures in place is imperative. For example, which committees are drafting internal policies? Which departments are using synthetic data or content in the company, and who will be held accountable if privacy is compromised or customers feel duped? Who will be the point person responsible if your company falls prey to a deepfake or disinformation attack?
  • Finally, businesses must define the purpose behind the use of synthetic content, its advantage over non-synthetic content, and the key metrics that can attest to it. For example, if the purpose of using synthetic data in a model is to insert counter-bias, thereby improving model outcomes, then it could be an authentic use of generative AI.
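
The sketch below is a deliberately simplified, hypothetical illustration of the provenance idea referenced in the first bullet, not Project Origin's actual protocol: a publisher records a SHA-256 fingerprint of its content at publish time, and anyone presenting the content later can check it against that record. A real deployment would replace the in-memory list with a distributed ledger and cryptographically signed manifests.

```python
# Hypothetical sketch of hash-based provenance from publishing to
# presentation. Illustrative only; not Project Origin's protocol.
import hashlib
import json
import time

LEDGER: list[dict] = []  # stand-in for a tamper-evident distributed ledger

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest identifying this exact content."""
    return hashlib.sha256(content).hexdigest()

def publish(publisher: str, content: bytes) -> dict:
    """At publish time, record who published what, and when."""
    entry = {
        "publisher": publisher,
        "sha256": fingerprint(content),
        "published_at": time.time(),
    }
    LEDGER.append(entry)
    return entry

def verify(content: bytes) -> dict | None:
    """At presentation time, look the content up by its hash."""
    digest = fingerprint(content)
    for entry in LEDGER:
        if entry["sha256"] == digest:
            return entry  # provenance established
    return None  # unknown, altered, or fabricated content

video = b"original broadcast bytes"   # placeholder for real media
publish("ExampleNewsNetwork", video)  # hypothetical publisher name

print(json.dumps(verify(video), indent=2))  # matches the ledger entry
print(verify(b"deepfaked bytes"))           # None: no provenance
```

Hashing alone only proves the content is unchanged; the value of a distributed ledger is that no single party can quietly rewrite the record of who published what, and when.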

Synthetic reality is coming, whether we like it or not. The challenge now is to maximize its benefits and minimize its harms. Authenticity is central to meeting this aim: it provides a map through this complex arena and helps companies use generative AI in a way that adds value. It alone has the power to unlock new attitudes toward AI and new experiences with it, unleashing the benefits of the unreal world in full.

About Michael Biltz

Michael Biltz, Managing Director of Accenture Technology Vision, leads Accenture's annual technology visioning process. Through the Technology Vision, Michael defines Accenture's perspective on the future of technology, looking beyond current conversations about IoT, social, cloud, mobility, and big data to focus on how technology will impact the way we work and live.
