
A Lot of Excitement About ChatGPT, But Be Wary



The problem with the content that’s produced by ChatGPT (or other generative AI tools) is it possesses no warranties or accountability whatsoever.

Written By
Joe McKendrick
Mar 21, 2023

By now, you’ve probably heard enough gushing reviews of ChatGPT and other generative AI models. They undoubtedly represent a major step toward the democratization of AI: you no longer need a Ph.D. in data or computer science, or a multi-million-dollar IT budget, to explore the technology’s possibilities. At the same time, some industry experts are cautioning against placing too much trust in these tools.

On the plus side, generative AI services are “taking assistive technology to a new level, reducing application development time, and bringing powerful capabilities to nontechnical users,” states a report from McKinsey.

“This latest class of generative AI systems has emerged from foundation models – large-scale, deep learning models trained on massive, broad, unstructured data sets (such as text and images) that cover many topics. Developers can adapt the models for a wide range of use cases, with little fine-tuning required for each task. For example, GPT-3.5, the foundation model underlying ChatGPT, has also been used to translate text, and scientists used an earlier version of GPT to create novel protein sequences. In this way, the power of these capabilities is accessible to all, including developers who lack specialized machine learning skills and, in some cases, people with no technical background. Using foundation models can also reduce the time for developing new AI applications to a level rarely possible before.”

AI users – especially those employing it for enterprise decision-making – need to tread cautiously into this new world, however. There is as yet no accountability for the content and code ChatGPT generates, according to Andy Thurai, analyst with Constellation Research. In an interview with The Cube, he said: “ChatGPT is a new shiny object, but the problem is that most of the content that’s produced, either by ChatGPT or others, possesses no warranties or accountability whatsoever.”

See also: Reports of the AI-Assisted Death of Prose are Greatly Exaggerated

The legalities of AI-generated content are murky at this time, with uncertainty about the copyright protections of machine-generated code, and even more uncertainty about who takes responsibility for machine-generated content that leads to harm. Platforms such as ChatGPT also generate software code, which opens up another can of legal worms. “It allows you to produce code, but the problem with that is that while the models are not exactly stolen, they’re created using the GitHub code, and they’re getting sued for that,” Thurai said.

The bottom line, Thurai says, is to feel free to employ ChatGPT for personal use, but not for commercial purposes. “You use it either to train or to learn, but in my view it’s not ready for enterprise grade yet.”

See also: Recommender Systems: Why the Future is Real-Time Machine Learning

Plus, the McKinsey authors urge that any output from generative AI needs to be checked and double-checked. “ChatGPT, for example, sometimes hallucinates, meaning it confidently generates entirely inaccurate information in response to a user question and has no built-in mechanism to signal this to the user or challenge the result,” they state. “For example, we have observed instances when the tool was asked to create a short bio and it generated several incorrect facts for the person, such as listing the wrong educational institution. Filters are not yet effective enough to catch inappropriate content.”

The McKinsey report urges executives and managers to assemble a cross-functional team, including data science practitioners, legal experts, and functional business leaders, to think through basic questions, such as selecting targeted use cases and establishing legal and community standards.

Joe McKendrick

Joe McKendrick is RTInsights Industry Editor and an industry analyst focusing on artificial intelligence, digital, cloud, and Big Data topics. His work also appears in Forbes and Harvard Business Review. Over the last three years, he served as co-chair for the AI Summit in New York, as well as on the organizing committee for IEEE's International Conferences on Edge Computing. Follow him on Twitter @joemckendrick.
