

The Truth About ChatGPT


ChatGPT is an artificial intelligence chatbot developed by OpenAI and released in November 2022. The name “ChatGPT” combines “Chat,” referring to its chatbot functionality, and “GPT,” which stands for Generative Pre-trained Transformer, a type of large language model. (source: Wikipedia)

I promise, this was not written by A.I. You will not get to the end and read, “Har! I fooled you into thinking this was written by a person!” It was funny the first time, but fool me once, shame on you; fool me twice, you’re not going to fool me again. No, this is an internet hot-take blog with a click-bait headline that is uniquely human, but the fact that you thought, even for a moment, that it could have been written by artificial intelligence is an interesting reflection on the current state of the technology. (This certainly hits differently today.) It also brings us to the first way that A.I. content creation like ChatGPT falls short:

Transparency, or ‘Who really wrote this thing?’

Truth be told, this shortfall is on us, the humans. After all, ChatGPT is just a tool. It’s not autonomously creating content. It requires input from a person. So you have to ask yourself: How am I going to use this tool?

In the business world, trying to pass A.I.-generated work off as your own could be problematic. Get caught and you either have to fess up or insist that it was actually your work. Fun side note: as an editor, I’ve seen a big increase in the submission of A.I.-generated content. (It’s not hard to spot, but we’ll get to that in a minute.) Every time I’ve asked a writer whether they used A.I. to generate the content, they’ve strongly denied it–sometimes overly so.

In one case, I recommended that the writer re-submit the story with a different angle and actual quotes from their product managers or executive management. The response was that they could, but that it would take several months to create. Yep! It would. Actual content creation takes time. That’s totally acceptable. What’s not acceptable is the clutching of personal-integrity pearls when someone calls you out on using A.I. to create content.

Of course, completely transparent use of A.I. content creation cuts both ways, as Vanderbilt publicly learned.

Again, how you apply this tool and how you communicate its use is up to you. Maybe the question is: Whose trust do you risk losing when applying this tool in a specific instance? 

Speaking of trust, that brings us to the second way A.I. falls flat.

ChatGPT is a liar

A.I. is just that: artificial. It can do amazing things. It can also do really stupid things. For some reason, there’s a tendency to equate A.I. with infallibility. That couldn’t be further from reality. The neural network models that enable A.I. are trained on content culled from the internet. In practical use, trouble shows up when A.I.-generated content cites a specific source and provides a link to the citation.

In my experience, the A.I. will cite what sounds like a primary source–the Federal Motor Carrier Safety Administration, for example–for a specific number, like the number of vehicle accidents involving trucks, but the citation actually links to a secondary source that’s reporting incorrect information or putting it into an incorrect context that the A.I. overlooks. So now you have incorrect data being passed off as primary-source data. That’s a problem.

Salacious headline aside, A.I. content generation platforms aren’t lying on purpose. They’re being trained on and pulling information from an imperfect internet. In some ways, they may even be more honest about–or at least ignorant of–their own biases when creating content. Remember: A.I. isn’t infallible.

It’s technically wrong

A.I.-generated content can make a mess of technical information. For example, if you have it write content about a truck’s rear axle ratio–even from your own previously published content–it can easily confuse the transmission ratio for the axle ratio at such a depth that, at first (or even second or third) glance, it seems right. Only when you start tearing apart the info and running the axle ratio numbers do you discover its mistake.
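If you’re curious what “running the numbers” looks like, here’s a minimal sketch of that kind of sanity check, using the standard road-speed rule of thumb: engine RPM ≈ mph × transmission ratio × axle ratio × 336 ÷ tire diameter in inches. Every figure below is a hypothetical stand-in chosen for illustration, not a spec from any real truck.

```python
# A minimal sketch of the sanity check that exposes a confused ratio.
# All numbers are hypothetical, chosen only for illustration.
# Rule of thumb: engine RPM = mph x trans ratio x axle ratio x 336 / tire dia (in.)
# (336 comes from 5280 ft/mi x 12 in/ft / 60 min/hr / pi)

def engine_rpm(mph: float, trans_ratio: float, axle_ratio: float,
               tire_diameter_in: float) -> float:
    """Estimate engine RPM at a given road speed."""
    return mph * trans_ratio * axle_ratio * 336 / tire_diameter_in

TIRE = 41.0       # hypothetical drive-tire diameter, inches
TOP_GEAR = 0.73   # hypothetical overdrive transmission ratio
AXLE = 3.55       # hypothetical rear axle ratio

# Correct: the axle ratio in the axle slot.
print(engine_rpm(65, TOP_GEAR, AXLE, TIRE))      # ~1,380 RPM: a plausible cruise

# The A.I.'s mix-up: the transmission ratio where the axle ratio belongs.
print(engine_rpm(65, TOP_GEAR, TOP_GEAR, TIRE))  # ~284 RPM: below idle, impossible
```

A cruise RPM in the low-to-mid thousands passes the smell test; one well below idle means a ratio got swapped somewhere along the way.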

If you get caught in the “A.I. is fast and knowledgeable, so it must also be correct” trap mentioned above, you could potentially publish extremely incorrect information about truck axle ratios. Woe be to the editors who mistakenly publish that information. #OddlySpecific.

The big question

The point of all this isn’t to rage against the A.I. machine. A.I. will be part of our professional and personal lives. The technology will mature and become even more intelligent. It’s important to recognize both its limitations and its potential in order to find the best applications for it. But even more so, it’s an opportunity to contemplate: What is the value of being human?

A quick aside: It was once thought that an A.I. could never beat a human at Go, a 2,500-year-old game that is said to be the most complex game ever created. Beating a top human player was a long-standing benchmark in A.I. research.

AlphaGo did it in 2016. 

Then a funny thing happened. The A.I. started to teach the world’s best Go players how to be even better through play. It opened up new ideas and possibilities. It became a sparring partner for Go masters and is regularly used to analyze games by players of all levels.

Maybe we can also learn something from A.I. in content creation. Instead of asking “What will A.I. replace?” the question we should be asking is: What can we do together?


There is an art to telling a story. Babcox is here to help our customers tell their story. Use our marketing expertise to help you create quality assets across all channels and platforms. We’ll connect you with your target audience anytime, anywhere. Let’s talk.


Jason Morgan is the Director of Content for Fleet Equipment and Tire Review.

He’s also a self-proclaimed digiphile. So much so that he often gives himself the Voight-Kampff Test. He always helps the tortoise.

Contact Jason at [email protected]
