Determining AI or Human: Five Clues to Discern Chatbot Authorship

In recent years, the internet has been inundated with text generated by AI systems. As these models advance, their ability to mimic human conversation improves. Simultaneously, our techniques for identifying such text have been evolving, and there has been a lively online discussion about the nuances involved.

Historically, ChatGPT has often added emphasis to its sentences with frequent em dashes, interjecting asides as if longer, more intense sentences carried greater weight. To some readers the habit feels dated and mechanical, yet it is perfectly natural for a system trained on a dataset full of such punctuation.

In response to feedback about ChatGPT's fondness for em dashes, and to the broader push to adapt models to individual user preferences, OpenAI CEO Sam Altman said last month that ChatGPT would stop peppering its responses with dashes when asked to. Many users likely welcomed the change, although it makes life harder for anyone who needs to distinguish human from AI-generated text.

Fortunately, numerous online tools exist to help with this task. Such sites let users paste in text to be scanned for telltale signs of AI authorship. They are not infallible, but they are generally reliable at catching the obvious cases.

Five Indicators of AI-Generated Text

If you would rather not rely on yet another online tool, or simply want to sharpen your own detection skills, certain linguistic signals can help. Here are five telltale signs of AI-generated text:

First, human authors often build their arguments around three examples, following the old adage that once is an accident, twice may be a coincidence, and three times is a pattern. AI models tend to apply this rule of three to excess, packing their text with triplets. When ChatGPT was asked, purely as a rhetorical exercise, to argue that the Earth is flat, it leaned on exactly this kind of triple structure.

Second, chatbots like to reinforce a claim by first stating what something is not, the 'it isn't just X, it's Y' construction. Consider an exchange in which ChatGPT was asked to write a marketing message for tourism to Mars: 'Mars isn't just a planet; it's your next unforgettable adventure.' Such a phrase rarely occurs to a human writer spontaneously.

Third, another hallmark is uniform sentence length, which makes paragraphs feel rigid. Human writers usually vary sentence length to keep their prose dynamic. Reading a passage aloud can reveal whether it sounds unnaturally mechanical, a possible sign of AI origin.

Fourth, when AI text does vary its rhythm, it often does so with brief, seemingly spontaneous questions such as 'And honestly?', which interrupt the flow without adding anything. Describing the Rockies, ChatGPT quipped: 'Wildlife? Oh, they're just casually judging your snack choices.' A human writer would more likely have written: 'The wildlife watches your snack choices.'

Lastly, AI tends to reach for vague language and qualifiers such as 'This could mean…' or 'perhaps…'. The intent is to sound balanced, but the result is often simply indecisive.
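For readers who enjoy tinkering, the clues above can be roughly approximated in code. The following Python sketch is a toy illustration only: the function name, phrase list, and regular expressions are assumptions made for demonstration, and a handful of pattern matches is nowhere near a reliable detector.

```python
import re
import statistics

# Hedging qualifiers of the kind discussed above (illustrative, not exhaustive).
HEDGING_PHRASES = ["this could mean", "perhaps", "arguably", "it may be that"]

def ai_style_signals(text: str) -> dict:
    """Count a few rough stylistic signals in a piece of text."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    lowered = text.lower()

    return {
        # Frequent em dashes were historically a ChatGPT tell.
        "em_dashes": text.count("\u2014"),
        # "It isn't just X; it's Y" style constructions.
        "not_just_constructions": len(re.findall(r"isn'?t just|not just", lowered)),
        # Very uniform sentence lengths show up as a low standard deviation.
        "sentence_length_stdev": round(statistics.pstdev(lengths), 1) if len(lengths) > 1 else 0.0,
        # Short interjected questions such as "And honestly?".
        "short_questions": len(re.findall(r"\b\w+(?: \w+)?\?", text)),
        # Hedging qualifiers like "perhaps" or "this could mean".
        "hedges": sum(lowered.count(p) for p in HEDGING_PHRASES),
    }

if __name__ == "__main__":
    sample = ("Mars isn't just a planet; it's your next unforgettable adventure. "
              "And honestly? Perhaps this could mean more than you think.")
    print(ai_style_signals(sample))
```

Run on the Mars example from above, the sketch flags one 'isn't just' construction, one short interjected question, and two hedging phrases, which is roughly how a human reader applying these clues would score it as well.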
