Reports of the AI-Assisted Death of Prose are Greatly Exaggerated


Thanks to ChatGPT, the genie is out of the bottle with AI-assisted content development. But businesses and individuals would be wise to proceed with caution.

Yahoo! Finance recently touted an industry expert’s prediction that 90% of online content could be generated by AI by 2025. Don’t count on it.

I put proclamations like this into the same category as the paperless office (which never arrived), the long-predicted death of the mainframe, and experts predicting for 20+ years the end of Moore’s Law.

Let’s look at each of these, one by one, before focusing on AI-generated content and why such content is not going to be as pervasive as many predict it will be.

The advent of email was supposed to eliminate vast amounts of paper. Letters would no longer be typed and mailed, and the paper-based intra-office memos once circulated in those envelopes with line after line of names for routing content to the next person would disappear. Documents and forms of all types would be scanned or created digitally and efficiently routed in optimized workflows.

Yeah, right. Have you created a will, applied for a driver’s license, or been to a doctor’s office lately? For Pete’s sake, it took until late last summer for the U.S. Department of State to begin a pilot project to finally accept passport renewals online. I just went for my annual eye exam and had to fill out six forms, for a doctor I have seen every year for the last six. And once I completed the forms, the office assistant made a paper photocopy of my insurance card and driver’s license.

What about mainframes? Many of the most competitive industries, including financial services, retail, travel and hospitality, and healthcare, still rely on mainframes. Seventy-one percent of Fortune 500 companies use IBM Z systems, which handle 90% of all credit card transactions. And as recently as 2020, 44 of the top 50 banks in the world used IBM Z systems.

With respect to Moore’s Law…I went to the annual SC (supercomputing) Conference for about five years in a row, starting in 2003. At one of those events, the end of Moore’s Law came up, and one of the speakers said he had a corollary: The number of computer scientists saying Moore’s Law is dead doubles every two years. That was 20 years ago. 


ChatGPT: Proceed with caution

With ChatGPT letting the cat out of the bag, there is no doubt that AI-assisted content development will have its uses. However, there are already indications that ChatGPT and similar technologies will not be as widely embraced as many think, and these concerns are being raised just months after ChatGPT’s introduction. Among the emerging issues likely to limit the use of AI-generated content:

Subject expertise is still needed: Businesses cannot just run off tons of content and post it online without it being vetted. There needs to be in-house expertise to review the content for accuracy. CNET just did an experiment publishing about 75 articles. In discussing the effort, Connie Guglielmo, the site’s Editor-in-Chief, noted that they tried “AI assist on basic explainers around financial services topics like What Is Compound Interest? and How to Cash a Check Without a Bank Account.” And even when used on such elementary topics, she noted that to maintain the site’s reputation as a fact-based, unbiased source of news and advice, each article was “reviewed, fact-checked, and edited by an editor with topical expertise before we hit publish.” 

The old garbage in, garbage out adage applies: The Register recently published an incredibly damning article about blindly accepting the output of AI-generation tools. It noted that consideration should be given to the quality of the existing online content used to generate new content.

The article brought up the old Internet of Sh…T discussion: “The problem is that these AI-generated articles have to get their information from somewhere in enough volume to suitably churn out new info clones cloaked in slightly more eloquent language. And where do AI training algorithms get all of this? From the IoS, of course.”

Pushback against using the tools has already started: Schools and universities are greatly concerned that students will hand in reports and papers created with generative tools. New York City public schools have already restricted access to ChatGPT on school networks and devices. (Certainly, such efforts do not block a student from using the tool on their own network and device.) And qualms about ChatGPT’s use have been raised in some universities.

The main concerns are about the negative impact such tools have on student learning and the accuracy of the content generated. The same issues apply to business use of the technology. Within this context, many believe such tools could and should be used as just one of many resources a writer relies on to produce content.

Baffle them with BS: One of the most common complaints (or perhaps critiques) about AI-generated content is that the output is full of effusive prose where more concise text would do. There is often repetition within an article. And many have raised the issue that the output presents information in an authoritative tone based on nothing; some are calling this the confidence trick.

IP, copyright, and plagiarism issues: If you generate something using ChatGPT and post it, can someone else use it without permission? What if the content used to generate text is protected? Where is the line between aggregating and summarizing existing content and plagiarism? Many such issues have yet to be addressed, and they must be.

Concern about an increase in AI plagiarism drove Edward Tian, a senior at Princeton University, to build an app to detect whether a text is written by ChatGPT. As NPR reported, many teachers have reached out to Tian since he released his bot.

A final word

ChatGPT has only been available to the public for a short time, and its impact is profound. With more generative AI tools in the works, there is no question the technology will be widely used.

However, any individual or business making use of such technology had better know the issues before opening the spigot and putting content generation on autopilot.

Salvatore Salamone

About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
