
“You are not fooling anyone; I know you used ChatGPT!”

BY SIMONE J. SMITH

I must admit, when ChatGPT was introduced to the world in late 2022, I knew that the way we interact with text would change, and I was not impressed at all. I knew that I could no longer trust everything I read. I would always have to question: “Was this even written by a human?” “How can we be sure that what we’re reading is the product of human thought and not simply words strung together by an algorithm?”

You can ask AI programs like ChatGPT to write something—anything—and within seconds, it delivers. For many people, this is troubling. Most people I speak with will share their unease about artificial intelligence, with a common sentiment that people don’t want what they consume to be “thoughtlessly” generated by machines. Yet, despite the side eye, AI has quickly been adopted by many for its ability to generate realistic text—sometimes for the better, but often in ways that raise ethical concerns.

So, how do these AI systems work? I am sure many of you have been hearing the term “large language models” (LLMs): deep-learning algorithms trained on massive data sets, specifically sets of text. When you ask ChatGPT to write something, it doesn’t “think” in the way you might assume; rather, it breaks down the question, identifies key elements, and predicts the most appropriate sequence of words to respond with, based on its understanding of word relationships. The more powerful the model, the better it is at understanding context and providing responses that feel natural.
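To make the idea of next-word prediction a little more concrete, here is a toy sketch in Python. It is my own illustration, nowhere near the scale or sophistication of a real LLM: it simply counts which word tends to follow which in a small sample of text, then “predicts” the most frequent follower.

```python
# Toy illustration of next-word prediction (not how ChatGPT is actually built):
# count which word follows which in some sample text, then pick
# the most frequent follower as the "prediction".
from collections import Counter, defaultdict

sample_text = (
    "the city is a vibrant mosaic of cultures "
    "the city is a beacon of hope "
    "the city is an integral cornerstone of the country"
)

followers = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "city", the most common follower in the sample
print(predict_next("is"))   # prints "a"
```

A real model does something far richer, of course, weighing whole passages of context rather than a single preceding word, but the basic move is the same: predict what most plausibly comes next.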

Yes, these models have become more sophisticated, but as an editor and writer, I can share some tell-tale signs of AI-generated text. Some systems use custom instructions to refine responses and mask the artificial nature of the text, but no matter how advanced the AI, its responses are ultimately shaped by its training. This means there are often patterns or nuances that reveal the text came from a machine, not the spectacular human mind.

Chatbots have been trained to look for the relationships between words, and they tend to use certain words and phrases more often than a person would. There is no definitive list of words and phrases that serve as red flags, but I have used ChatGPT enough that I have started to pick up on them. I am going to share some of them with you, and hopefully, by the end of this, my fellow writers and readers will share some of their own observations with me.

ChatGPT frequently uses the word “delve,” especially during transitions in writing. (e.g., “Let’s delve into its meaning.”) Similarly, you may see repeated uses of words like “emerge,” “relentless,” and “groundbreaking.” In particular, when ChatGPT is describing a collection of something, it will often call it a “mosaic” or a “tapestry.” (e.g., “Trinidad’s cultural landscape is a vibrant mosaic.”)

The city it’s writing about is often “integral,” “vibrant,” and a “cornerstone” of the country it’s in. Also, if I see the word “beacon” one more time, I think I am going to lose my mind.
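For the curious, here is a rough sketch of how you might scan a piece of writing for these favourites. The word list is drawn from my own observations above, and counting words is by no means a definitive detector, just a quick way to spot the habit.

```python
# Rough sketch: count how often some "ChatGPT favourite" words appear in a text.
# The word list reflects my own observations; a high count is a hint, not proof.
import re

FLAG_WORDS = [
    "delve", "emerge", "relentless", "groundbreaking",
    "mosaic", "tapestry", "integral", "vibrant", "cornerstone", "beacon",
]

def count_flag_words(text):
    """Return a dict mapping each flagged word to how many times it appears in `text`."""
    lowered = text.lower()
    return {
        word: len(re.findall(r"\b" + word + r"\b", lowered))
        for word in FLAG_WORDS
    }

sample = "Let's delve into why this vibrant city is a beacon and a cornerstone of the region."
print(count_flag_words(sample))  # delve, vibrant, beacon, and cornerstone each appear once
```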

As technology continues to evolve, so do the tools designed to detect it. AI detectors like ZeroGPT are becoming increasingly sophisticated, capable of identifying patterns and styles that suggest human or AI authorship. This means it is more important than ever to be transparent about your use of AI, and to develop the skills that distinguish your unique voice from machine-generated content.

The rise of large language models in writing has me thinking more critically about what we are consuming as a society. As AI continues to evolve, so must our ability to discern between human creativity and machine-generated content.
