In today’s column, I am going to mindfully tackle a controversial topic about generative AI and, somewhat extraordinarily, turn the heated matter on its head. The result is nearly comical in some ways, yet it vividly reveals sobering considerations of quite useful value. Hang in there, this will all be worth it. For my ongoing readers, this discussion is an extension of my coverage of the latest AI trends, especially with an undercurrent entailing AI Ethics and AI Law, see the link here.

The crux of this analysis has to do with the now classic question of distinguishing human-written text from text that was produced by generative AI.

It’s a big issue.

A really, really big issue.

Let’s unpack it.

The Trouble With Generative AI As Your Ghostwriter

We normally hear about the problem of using generative AI as your ghostwriter in the context of schools and coursework. A student opts to use generative AI to write their essay for them. The student sneakily turns in the paper, hoping the teacher will be none the wiser. Sometimes the student manages to get away with it, scot-free. Sometimes the student gets caught. It all depends.

This is not solely a problem within a school-related context. A person at work is assigned the task of writing an important report by their boss. Rather than writing the report by hand, the person instead has generative AI write the entire report for them. They give the report to their boss. If the person is lucky, they don’t get caught and avoid potentially being fired for their transgression.

The work setting does seem a bit different than the school setting. In school, the student is supposed to be learning and ergo the presumption is that the student is undercutting their own education by using generative AI to do their schoolwork for them. We don’t want students to essentially cheat on their education.

As an aside, there is rampant confusion on this matter. You can, in fact, have students suitably use generative AI in the right way at the right time. Exhortations about banning generative AI outright from schools are therefore short-sighted, untenable, and regrettably foolhardy, though the idea keeps coming up again and again. See my discussion on this whole conundrum and the sensible way to do things, at the link here and the link here, among others of my postings.

Back to the use case in the workplace. The work-related transgression is perhaps more that the worker failed to alert their boss that they intended to write the report via generative AI. The boss might have approved of such an approach. Furthermore, another issue could be that the worker is unaware of the need to double-check the output after the report has been generated. This is an essential act.

Here’s why double-checking is crucial. I’ve covered the now famous or infamous case of the two lawyers who used generative AI to write a legal brief, which got them into hot water with a judge when the brief turned out to contain falsehoods produced by the generative AI (the lawyers had not dutifully double-checked). See my analysis of the headline-grabbing incident at the link here and the link here.

Generative AI readily produces fictitious output that appears to be factual, a phenomenon commonly referred to as AI hallucinations, see the link here. Please note that I disfavor the catchphrase of AI hallucinations since it anthropomorphizes AI. In any case, the phrase has gained popularity, and the essential point stands: you must be wary of all essays and outputs arising from generative AI.

All in all, people of any age and in all walks of life might readily opt to use generative AI to do their writing for them. In the wrong circumstances, this is considered a no-no. In other circumstances, the act of using generative AI as your authoring tool might be acceptable, assuming that you are aboveboard about it. For example, many news outlets and journals require that works authored by generative AI are to be explicitly labeled as such.

The ethical and legal lines here are blurring, such that not everyone is upfront, and at times the authorship is hidden to prevent you from realizing what is afoot. A news outlet might, for example, assign a fake name that represents their use of generative AI. If you later ask whether the item was written by generative AI, they might say that yes, of course, everyone knows that the fake name is an obvious reference to the AI.

Wink-wink attempts at plausible deniability ensue.

Many legal and ethics twists and turns arise, which, in case you are interested, I cover in-depth at the link here.

Modes Of Writing And The Unreliable Eyeball Detection Method

Because of the concerns that people sometimes opt to have generative AI be their ghostwriter, there is now an entire cottage industry of automated detection tools that purportedly detect whether content was devised by the human hand or via generative AI. There are a lot of myths about the capabilities of these detection tools, which are oftentimes pushed as a silver bullet by vendors and others who want you to believe that the tools are the height of perfection.

I will be sharing with you the ugly underbelly of these at-times scams or charades.

Before we can get into that aspect, we need to explore what I refer to as the eyeball detection method. I’ll first unpack the role of human eyeballing of content to assess whether the content was human-written or devised by generative AI. No automated tools are invoked, and instead, a person examines an essay and decides who or what the author might be.

Let’s start at the beginning, namely that content can be written in these two ways:

  • (1) Human-written content (HWC).
  • (2) Generative AI generated content (AIGC).

Some proclaim they can easily determine which is which by purely manual inspection alone.

For example, sadly, there are teachers who brazenly believe they can discern whether an essay was written by a student or by generative AI. The teacher will look for clues such as vocabulary words that seem outside the range of the student. After reading the essay, such teachers will insist that they can identify who or what wrote the content.

Unfortunately, those emboldened teachers are often wrong in their guesses.

A research study entitled “Do Teachers Spot AI? Evaluating The Detectability Of AI-Generated Texts Among Student Essays” by Johanna Fleckenstein, Jennifer Meyer, Thorben Jansen, Stefan D. Keller, Olaf Köller, and Jens Möller, Computers And Education: Artificial Intelligence, June 2024, made these important points (excerpts):

  • “Generative AI can simulate student essay writing in a way that is undetectable for teachers.”
  • “Teachers are overconfident in their source identification.”
  • “AI-generated essays tend to be assessed more positively than student-written texts.”
  • “In summary, the finding that teachers cannot differentiate between student-written texts and AI-generated texts underscores the need for a thoughtful and ethical integration of AI in education. It calls for a reevaluation of assessment practices, increased awareness of AI’s capabilities and limitations, and a focus on student skills that AI cannot easily replace.”

Teachers can readily be fooled.

The crux of the teacher being fooled is usually that an astute student using generative AI can simply instruct the AI to write in the same manner as the student does. The resulting essay appears as if it is student-written and can trick unknowing teachers into falsely believing the student wrote the content. This would be considered a false negative in experimental lingo (the teacher failed to discern that an essay was written by generative AI and ergo gives credit undeservedly to a student).

I’d like to augment the two ways of writing content by noting that the AIGC has an additional two subcomponents (labeled here as 2a and 2b):

  • (1) Human-written content (HWC).
  • (2) Generative AI generated content (AIGC).
  • 2a. Default prompt. Tell generative AI to write as generative AI conventionally does (the default).
  • 2b. Directive prompt. Tell generative AI to write as a human would write (invoke a persona, see my coverage at the link here and the link here).

If the person using generative AI to do their ghostwriting uses a conventional prompt, the AI will use whatever default wording is customary in that AI app. On the other hand, if the person realizes they want the content to seem less likely AI-written, they can merely instruct the AI to mimic the style of, say, a high school student, a college student, a middle school student, or whichever form of a person or persona they wish to invoke.
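To make the 2b directive concrete, here is a minimal sketch of what such a persona prompt could look like in code, assuming the OpenAI Python SDK; the model name and persona wording are purely illustrative assumptions, and the same idea applies to any chat-style generative AI interface.

```python
# A minimal sketch of a "directive prompt" (case 2b), assuming the OpenAI Python SDK.
# The model name and persona wording are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; use whatever model you have access to
    messages=[
        # The persona directive: instruct the AI to write like a particular human.
        {
            "role": "system",
            "content": (
                "Write in the voice of a typical high school sophomore: plain "
                "vocabulary, a few short choppy sentences, no formal transitions."
            ),
        },
        {"role": "user", "content": "Write a one-page essay about the water cycle."},
    ],
)

print(response.choices[0].message.content)
```

The point is simply that a single system-level instruction shifts the default writing style, which is a big part of why eyeballing essays is so easily defeated.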

Now that I’ve got you in the mental sphere that it is readily possible and extraordinarily easy to instruct generative AI to write in various fashions or styles, let’s keep going on that tack.

Suppose that we opt to use generative AI as a rewriting tool. The idea is that we will take content and ask the AI to rewrite the material. This could be a conniving means of disguising the original writer or source of the content.

Consider these two possibilities:

  • (i) Tell generative AI to rewrite a generative AI essay to look like it was human-written.
  • (ii) Tell generative AI to rewrite a human-written essay to look like it was written by generative AI.

The first of those two would be the use case of someone who has produced a generative AI essay, realizes they will get nabbed if they turn it in, and decides to just go ahead and have the AI rewrite it to appear as if human-written. This is similar to the use case of having the AI write content from scratch and telling the AI to make it look human-written. This is a step afterward if you perchance forgot to tell the AI to write it that way at the get-go.

The second of those two use cases is a bit strange to conceive of. The mind-bending idea is that someone might tell generative AI to rewrite a human-written essay and make it look as though it was written by generative AI. I’ll get more into this momentarily, so go ahead and ponder this intriguing notion for a bit, thanks.

There are more twists and turns to be had in all of this.

One twist entails the act of making modifications to written content.

We ought to include these distinct possibilities:

  • (I) Human-written content with generative AI modifications made (have AI clean-up for you).
  • (II) Generative AI written content with manual human modifications made (make it seem more human-like).

In essence, the content becomes a blend of both human-written and generative AI material.

A lot of people don’t seem to realize that this is yet another easy variation within this whole concoction. You might be inspecting content that is partially written by human hand and partially written by generative AI.

The first instance entails the act of having generative AI polish or otherwise modify a human-written essay. To some degree, everyday word processing packages tend to do this already. A person can have the word processor provide advice on spelling, the replacement of words with other words, and so on.

A full-on use of generative AI is somewhat larger in scope and can substantially modify the initial content. A teacher can find themselves accusing a student of using generative AI to write an essay due to seemingly telltale clues that generative AI was involved in the process. Do you think it is fair that if the student merely had generative AI do some minor polishing, akin to what happens in common word processing packages, the student is to be dinged for doing so?

That’s why teachers and schools need to establish clear rules about what is permissible and what is considered out of bounds, see my discussion at the link here.

The second instance is the sneakier of the two, maybe. For example, a student produces an essay via generative AI. The student examines the essay and discerns that despite the AI mimicking a human writing style, there are portions that still might stick out like a sore thumb. The student proceeds to change those portions by hand, perhaps using simpler words or making the sentences less semantically complex. Voila, the otherwise generative AI essay now looks even more so as though it was student-written.

I want to make sure you can plainly see how confounding all of this is.

Let’s add two more subcomponents to the structure of things, doing so as 1a and 1b:

  • (1) Human-written content (HWC).
  • 1a. Rewrite AI. Human rewrites a generative AI essay to look like it was human-written.
  • 1b. Write like AI. Human writes or rewrites a human-written essay to look like it was written by generative AI.
  • (2) Generative AI generated content (AIGC).
  • 2a. Default prompt. Tell generative AI to write as generative AI conventionally does (the default).
  • 2b. Directive prompt. Tell generative AI to write as a human would write (invoke a persona, see my coverage at the link here and the link here).

I’ve given you above an example of the 1a instance, namely that a person might rewrite a generative AI essay so that the essay appears to have been human-written.

The second instance or 1b seems oddly curious. A person decides to write an essay that looks as though it is written by generative AI. Why would anyone do this? The overarching assumption is that no one wants their writing to look as though it was done by AI. You want credit for your own writing. If you write as though the writing smacks of AI, people are going to start pointing fingers at you and denounce you for cheating.

Buckle up for a surprise about the matter.

Are you ready?

I’d like to reword 1b as follows: “Human writes a human-written essay that looks like it was written by generative AI.” I removed the intent.

Here’s the rub.

Suppose that someone writes in their normal way of writing, and they aren’t intending to appear to write as generative AI does. So, I am saying they are just writing as they tend to write. Period. But someone else who reads the content falsely believes the essay was written by AI. They are triggered by what they believe to be AI-powered writing, even though this didn’t at all occur in this instance.

This is the dreaded “false positive” that students and others are often faced with. I get emails and responses from readers that they have been falsely accused of using generative AI. Their scholastic pursuits can be totally smeared by this. It is very hard to fight such an accusation, assuming that it is indeed a false case. How do you “prove” that you didn’t use generative AI?

The problem is muddled since there are real instances of people who cheat and use generative AI, alongside instances of people falsely accused of having used generative AI when they did not. Society right now assumes that a student or person couldn’t write as well as generative AI, thus, any good writing that seems out of place is instantly labeled as a generative AI essay.

Let’s not be so quick to judge.

Part of the reason why someone might write like generative AI is that more and more people are using generative AI on a regular basis. The thing is that your writing can start to mimic the writing of generative AI.

Say what?

Yes, after you see a slew of generative AI essays, it is possible to start picking up the patterns associated with generative AI writing.

I realize this seems upside down, or maybe it is the completion of a somewhat bendy cycle. We have generative AI that is based on patterns of how humans write. Next thing you know, we have humans patterning themselves after how generative AI writes. But that makes a certain sense: generative AI has, in a gestalt way, mathematically and computationally figured out the “best” way to write via a massive examination of how humans write, and some humans understandably admire and emulate the result.

Dizzying.

Detection Tools Tend To Be Misleadingly Portrayed

If eyeballing an essay isn’t a reliable way to distinguish human writing from generative AI, perhaps automation can do better. Headlines boldly urge you to use automated detection tools.

Turns out a lot of these detection tools are either hogwash or principally smoke-and-mirrors, but most people don’t realize they need to be careful and understand what the tools can actually attain. You should scrutinize the likelihood of false positives and false negatives. At times, the vendors downplay those factors and bury the details in their licensing agreements.

A common and upsetting approach is that a school or entity will decide to adopt a detection tool and act as though it is a perfect detection device. Little if any qualms are expressed. The users such as teachers are not given any heads-up about the downsides. Just go ahead and run essays through the tool and proceed to believe without hesitation whatever the tool spits out.

It is tempting to fall into that trap. The detection tools are presumed to be a labor-saving mechanism. No need to eyeball an essay. Run it through the tool. See what the tool says and proceed accordingly. No fuss, no muss.

Why aren’t the detection tools able to be fully accurate?

For the very same reasons that I cited above about eyeballing essays, which, as you saw, is not an ironclad means of making these assessments.

The tools tend to rely upon spotting words and sentence phrasings that are frequently used by generative AI apps. A computational and mathematical analysis of an essay is undertaken. The writing is compared to what generative AI tends to produce.
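To make that mechanism concrete, here is a toy sketch of such a frequency-based check; the word list and cutoff are hypothetical stand-ins, not any vendor's actual detection model.

```python
# A toy sketch of the word-frequency heuristic that many detection tools lean on.
# The word list and cutoff are hypothetical stand-ins for a vendor's real model.
AI_FAVORED_WORDS = {
    "commendable", "meticulous", "intricate", "pivotal", "noteworthy",
    "furthermore", "moreover", "meticulously", "undoubtedly", "notably",
}

def ai_word_rate(essay: str) -> float:
    """Fraction of words in the essay that sit on the AI-favored list."""
    words = [w.strip(".,;:!?\"'()").lower() for w in essay.split()]
    if not words:
        return 0.0
    return sum(w in AI_FAVORED_WORDS for w in words) / len(words)

essay = "Furthermore, this commendable and meticulous report is undoubtedly pivotal."
rate = ai_word_rate(essay)
# A real tool would feed this (and many other signals) into a statistical model;
# here we just apply a crude hypothetical cutoff.
print(f"AI-favored word rate: {rate:.0%} -> {'likely AI' if rate > 0.15 else 'likely human'}")
```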

In a recent research paper on trying to use AI to detect AI-based writing, entitled “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews” by Weixin Liang, Zachary Izzo, Yaohui Zhang, Haley Lepp, Hancheng Cao, Xuandong Zhao, Lingjiao Chen, Haotian Ye, Sheng Liu, Zhi Huang, Daniel A. McFarland, James Y. Zou, arXiv, March 11, 2024, the researchers made these points (excerpts):

  • “Human capability to discern AI-generated text from human-written content barely exceeds that of a random classifier, heightening the risk that unsubstantiated generated text can masquerade as authoritative, evidence-based writing.”
  • “We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM).”
  • “Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM-use at the corpus level.”
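For the technically inclined, the corpus-level maximum likelihood idea in that last excerpt can be sketched in a few lines. This is a toy rendition with made-up numbers; the actual paper estimates word-occurrence probabilities from large expert-written and AI-generated reference corpora.

```python
# A toy sketch of the corpus-level maximum likelihood idea: treat each document as
# AI-modified with unknown probability alpha, and pick the alpha that best explains
# how often certain marker words show up. All probabilities below are made up.
import numpy as np
from scipy.optimize import minimize_scalar

# P(document contains marker word | human-written) and (| AI-generated),
# which the real study estimates from reference corpora.
p_human = np.array([0.02, 0.05, 0.01])
p_ai    = np.array([0.30, 0.25, 0.15])

# Observed fraction of documents in the corpus under study containing each marker word.
observed = np.array([0.12, 0.12, 0.06])

def neg_log_likelihood(alpha):
    # Mixture model: a document is AI-modified with probability alpha.
    mix = alpha * p_ai + (1 - alpha) * p_human
    # Binomial-style log likelihood of the observed frequencies (constants dropped).
    return -np.sum(observed * np.log(mix) + (1 - observed) * np.log(1 - mix))

result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
print(f"Estimated fraction of AI-modified documents: {result.x:.2f}")  # ~0.35 here
```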

I bring up this study to highlight how there are various words more likely to be found in generative AI writing and therefore are presumed to be a means to detect that an essay might be AI-generated.

This study, akin to others of a similar ilk, identified for example various adjectives and adverbs that tend to be used with heightened frequency by generative AI apps. The paper lists Top 100 sets of such words, and I thought you might like to see a sample of them.

A sampling of the identified Top 100 adjectives disproportionately used by AI:

  • “Commendable, innovative, meticulous, intricate, notable, versatile, noteworthy, invaluable, pivotal, potent, fresh, ingenious, cogent, ongoing, tangible, profound, methodical, laudable, lucid, appreciable, fascinating, adaptable, admirable, refreshing, proficient, intriguing, thoughtful, credible, exceptional, etc.”

A sampling of the identified Top 100 adverbs disproportionately used by AI:

  • “Meticulously, reportedly, lucidly, innovatively, aptly, methodically, excellently, compellingly, impressively, undoubtedly, scholarly, strategically, intriguingly, competently, intelligently, hitherto, thoughtfully, profoundly, undeniably, admirably, creatively, logically, markedly, thereby, contextually, distinctly, judiciously, cleverly, invariably, etc.”

Ponder those words, just for a few seconds or so.

Remember that I mentioned that you can tell generative AI to write as though a human is writing an essay?

If you give that kind of a prompt, one thing the generative AI can potentially do is aim to avoid those Top 100 words or at least use them rather judiciously. By doing so, any detection tool that assumes those words are a likely basis for determining whether the content was written by generative AI could be fooled. The counts of words that are in the Top 100 will be low, thus the program will likely give a lowered estimate that the content was written by AI.

All of this is a continual cat-and-mouse gambit.

Generative AI gets better at writing as though something is human-written. Imagine that the detection tools are behind the times and the essays are rated as more likely to be human-written than AI-written. The vendors of the detection tools upgrade their wares. The odds of detecting AI writing go up. Meanwhile, the generative AI gets better at downplaying the AI writing style. The tools are again less able to make detections.

Round and round this goes.

A recent analysis of detection methods called out the difficulty associated with this ongoing cat-and-mouse gambit, as noted in a research paper entitled “Hiding the Ghostwriters: An Adversarial Evaluation of AI-Generated Student Essay Detection” by Xinlin Peng, Ying Zhou, Ben He, Le Sun, and Yingfei Sun, arXiv, February 1, 2024, per these salient points (excerpts):

  • “Large language models (LLMs) have exhibited remarkable capabilities in text generation tasks.”
  • “However, the utilization of these models carries inherent risks, including but not limited to plagiarism, the dissemination of fake news, and issues in educational exercises.”
  • “Efforts have been made to address these concerns by developing detectors for identifying AI generated content (AIGC).”
  • “AIGC detection attack refers to deceiving or evading AIGC detection through carefully crafted prompts or paraphrasing attacks, which aims to exploit vulnerabilities or limitations in the detection algorithms to evade detection or mislead the system into classifying AI-generated content as human-generated or vice versa.”
  • “The results highlight the inherent vulnerabilities in existing detection methods and emphasize the need for the development of more resilient and precise approaches.”

One final point before we move on.

You might be scratching your head and wondering if we could watermark essays that are produced by generative AI.

Those of you old enough to remember would recall that a classic technique for pictures, images, and the like would be to embed a watermark into the material. Then, if there was a dispute about the owner or author, it was possible to dig into the content to see if there was a watermark there.

The use of watermarks for digital content involving graphics or images is somewhat tenable because it is relatively easy to embed a watermark there. Such watermarks are generally hard for others to find and remove, or find and change. Once again, this is a continual cat-and-mouse gambit. New methods of digital watermarking are figured out, and those who wish to defeat them figure out ways to do so. On and on this loop occurs.

Watermarking text is a much tougher proposition.

Note that when you watermark a graphic item, this is usually done in a hidden manner, deep within the digital guts and bits and bytes of the item. To the normal eye, there is essentially no difference in what the image or picture seems to be. Text is a different beast.

To watermark text, you typically need to seed certain words or phrases into the essay. A person reading the essay will presumably read the text in an ordinary fashion and expect that the text is entirely about whatever topic is at hand. To keep them from figuring out that there is a watermark here or there, the seeded wording must be carefully calculated to fit within the context of the text. You can’t just drop a bizarre sentence into the middle of an essay or plop down a few oddball words as an attempt at watermarking the content. If you do so, the jig is up. I’ve covered these tactics at the link here.
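To give you the flavor, here is a toy sketch of the detection side of one such scheme, loosely in the spirit of the "green list" approaches in the research literature: the generator nudges its word choices toward a pseudorandom subset of the vocabulary keyed by the preceding word, and the detector checks whether that subset is statistically over-represented. Everything here is illustrative, not any vendor's actual method.

```python
# A toy sketch of detecting a "green list" style text watermark. The generator is
# assumed to have preferred words whose hash (keyed by the preceding word) lands in
# the "green" half of the vocabulary; unwatermarked text should hit ~50% green.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign each (prev_word, word) pair to the green half."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.5
    return sum(is_green(prev, cur) for prev, cur in pairs) / len(pairs)

def z_score(fraction: float, n: int) -> float:
    """Standard deviations above the 50% expected for unwatermarked text."""
    return (fraction - 0.5) * math.sqrt(n) / 0.5

text = "the quick brown fox jumps over the lazy dog"
n = len(text.split()) - 1
frac = green_fraction(text)
print(f"green fraction: {frac:.2f}, z-score: {z_score(frac, n):.2f}")
```

Notice that the statistic depends on specific word pairs, which foreshadows the weakness discussed next: rewording the essay scrambles the pairs and drags the green fraction back toward 50%.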

Suppose that a watermarking scheme for text-based content was nearly invisible in the sense that the essay was almost semantically the same as if it didn’t contain the watermark. Yay, watermark success.

Maybe not.

The biggest downfall is that a person can take an essay and opt to modify or rewrite the text, and doing so could readily mess up the watermarks. They might not have any idea which portion of the text carries a watermark. Nonetheless, by moving around words, substituting words, adding words, and so on, they are likely at some point to scramble the watermark.

Digital watermarking for generative AI is a trending and rapidly evolving topic.

There are pressures underway to legally require AI makers to include watermarking as part and parcel of their generative AI apps, see my coverage at the link here. The belief is that this will aid in distinguishing what is produced by generative AI versus by the human hand. Regrettably, oftentimes lawmakers and regulators aren’t aware of how readily these watermarks can be defeated, especially in the case of text-based outputs. The result can be a strident push for watermarks that is based on an outsized belief in what they will actually accomplish.

The viewpoint of some is that even if watermarks can be tricked, at least it is better to have them legally required versus not having them. For my in-depth analysis of the upsides and downsides, see the link here.

When You Want To Make Your Writing Look Like Generative AI

I have walked you through the whole kit and caboodle about generative AI content versus by-hand content. My hope is that the quick tour was instructive and engaging. As your reward for having gone on that arduous journey, let’s do something that very few talk about.

First, grab a glass of fine wine and ready yourself for some useful fun.

Suppose you wanted to compose an essay that seemed to be written by generative AI.

Could you do so?

Yes, I would say you most certainly can.

All you need to do is mimic generative AI. I’m not saying that you need to somehow dive into the mathematical and computational formulations of generative AI. Nope, no need to do so. What you need to do is study the essays generated by AI. Mimic those essays and the writing style being used.

We are turning the world on its head. The rest of the world is focused on not writing like generative AI. You are supposed to write as a human would write. If you write like generative AI, you will be accused of having used generative AI.

Furthermore, if you go onto social media and online forums, and if you write in the style of generative AI, the odds are that you will be accused of being generative AI. Tons of AI chatbots are spewing content onto online arenas. Nobody can be sure if posted content is by a human being versus AI.

Be cautious because once you are labeled by the wisdom of the crowd as being generative AI, you might have a devil of a time proving otherwise. Any pictures or videos you provide as proof of humanness will seemingly only further convince others that you are an entirely AI-contrived avatar. You will live your life cursed as being a generative AI.

Just wanted to give you a heads-up and sufficient warning.

You’ve been warned, proceed with caution.

Let’s put our heads together and figure out what to do if you want to write like generative AI. I will ask that you play fair. An unfair method would be to have generative AI merely produce an essay for you. Thus, you didn’t lift a finger to write the essay. Cheater. Either go big or go home.

We are going to write from scratch an essay that would seem to be written by generative AI. The goal is that a person reading the essay will get their Spidey-sense tingling that the essay is written by AI. On top of this, the aim is that any of the detection tools would be fooled into scoring the essay as likely written by generative AI.

Here’s how we will proceed in terms of steps to undertake:

  • (1) Use well-known lines about being generative AI.
  • (2) Use words that are commonly used in generative AI essays.
  • (3) Use sentences that are worded as per generative AI essays.
  • (4) Tend toward being politically correct in the writing.
  • (5) Be verbose when the circumstances might not warrant doing so.
  • (6) Be terse when the circumstances might not warrant doing so.
  • (7) Do not use vernacular or curse words.
  • (8) Include happy and over-the-top or giddy comments.
  • (9) Do not be downtrodden or gloomy.
  • (10) Additional approaches.

Let’s briefly try this.

First, I will share with you one of the most outrageous ways to accomplish this, one that in the right situation can be nearly surefire for fooling a human. It is done with immense ease.

Some generative AI apps, such as ChatGPT, tend to use this now-classic line: “Please note that as an AI language model, I am …”. For example, you might enter a prompt into generative AI telling the AI to start crying, and the response often is something like this: “Please note that as an AI language model, I am unable to cry.”

You can write an essay by hand and include that line.

The person reading the essay is essentially clobbered over the head with the notion that the essay is written by generative AI. They would have to be entirely out to lunch not to assume that the essay was written by AI. Boom, drop the mic.

If you want to keep suspicion low that they are maybe being tricked, make sure to include the line further into the essay rather than at the opening. A rookie mistake would be to make it the first line and tip your hand too soon. That seems like a dead giveaway. On the other hand, you might have someone reading the essay who thinks they are a genius for noticing that the opening sentence tells that it is written by AI. Make them happy at their cleverness and good fortune at being smarter than the average bear.

You must gauge your audience accordingly.

Believe it or not, there have been research papers that used generative AI to either write the paper or rewrite the paper, and the authors opted to leave in the line about being an AI language model when they submitted the paper to journals for review. They weren’t trying to be tricky. They were caught with their hands in the cookie jar, having been asleep at the wheel. Worse still, sometimes these papers managed to be published that way. A huge embarrassment to the journal, as you can imagine.

The key is that at this time the line about being an AI language model is assumed by the population at large as a sure sign that the essay was composed by AI. Until or if we ever see enough people using the same line as a joke or to fake out others, the line will remain a handy means of pulling the wool over the eyes of the unsuspecting.

There are numerous viable variations of the line in case you want to mix things up: “As an AI”, “As an AI assistant”, “As an AI developed by <AI maker>”, whereby the AI maker could be OpenAI, Anthropic, Google, Meta, and so on.

You cannot stop with that one-trick pony, though.

Few of the detection tools will fall hook, line, and sinker for one line. They are going to proceed to calculate across the rest of the essay. Even if the one line is there, the rest of the essay might not meet the expectations of an AI-written piece of content.

You should study the words that are commonly used by generative AI and include those throughout your essay. I gave you some of the words above. Use them to your heart’s content.

Try to write the essay in a bland manner. Be straight ahead. The usual default for generative AI is that the AI makers have data-trained and refined the models to write conventional sentences. Remember that in this tomfoolery we are mimicking the day-to-day use of generative AI.

Seek to be polite and formal. This ought to include phrases such as “In conclusion”, “One should consider”, “It is important to note”, etc. Toss in a lot of those.

Transition words are popular in generative AI. Use for example these transitions throughout the essay: “Furthermore,”, “Moreover,”, “However,”, “In addition,” and others that come to mind.

Beauties to sprinkle include: “I hope this helps!”, “If you have any more questions, feel free to ask.”, “You’re welcome”, “If you have any questions”, and similar phrases. It is okay that those might seem out of place in the essay. A human will assume that the generative AI was too dumb to realize that those wordings ought to not have been included in the middle of things.

An important consideration is that you are trying to put yourself into the mind of a human reader and aiming to fool them into thinking the essay is written by generative AI. Placing yourself into the shoes and the mind of another person is generally referred to as Theory of Mind (ToM), see my discussion at the link here.

So far, we are doing this solely by mimicking generative AI. The other angle is to bet on the basic assumptions people have about how generative AI essays are written. They might not be right in those assumptions, but we don’t care. Triggering their rash or unsupported assumptions is the key here. Adopt their way of thinking. Exploit ToM.

My favorites are these lines: “Based on my training data”, “From what I’ve been trained on”, “According to the data I have access to”, and “My training includes information up to <date>”. I like those lines because you aren’t being as obvious as if you said, “I am AI” and instead you are implying that you must be AI due to the notion of being data trained. Nice touch.

Whatever you do, avoid curse words like the plague. I am not saying that generative AI won’t ever use curse words. In fact, you can instruct generative AI to do so, and in some mild ways, the AI will make use of foul words, though the AI makers have done a lot of filtering to try and stop this from happening, see my coverage at the link here. The gist is that most people assume that generative AI would rather fall apart than employ curse words, so go ahead and play into that as a helpful ploy.
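If you want to gauge whether your hand-crafted essay would set off the intended alarms, here is a toy self-check script that pulls together the tactics above; the phrase lists are merely samples drawn from this discussion, not an exhaustive catalog.

```python
# A toy self-check: count the telltale AI-style signals discussed above in your
# hand-written essay. The phrase lists are illustrative samples, not a full catalog.
TELLTALE_LINES = [
    "as an ai language model", "based on my training data",
    "i hope this helps", "it is important to note", "in conclusion",
]
TRANSITIONS = ["furthermore", "moreover", "however", "in addition"]
AI_FAVORED_WORDS = ["commendable", "meticulous", "pivotal", "noteworthy", "undoubtedly"]

def ai_style_report(essay: str) -> dict:
    low = essay.lower()
    return {
        "telltale_lines": sum(low.count(p) for p in TELLTALE_LINES),
        "transitions": sum(low.count(t) for t in TRANSITIONS),
        "ai_favored_words": sum(low.count(w) for w in AI_FAVORED_WORDS),
        # Remember the advice above: this one should stay False.
        "curse_words_present": any(c in low for c in ("damn", "hell")),
    }

essay = (
    "In conclusion, this commendable effort is undoubtedly pivotal. "
    "Furthermore, as an AI language model, I hope this helps!"
)
print(ai_style_report(essay))
```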

I believe this will get you started on the path toward writing essays that seem to be written by generative AI.

Good luck, I’m sure you can pull this off.

Conclusion

Congratulations, overall, you now know a lot about generative AI and the ongoing controversy of using generative AI as a ghostwriter.

A final comment for the time being.

You might be familiar with the adage about being the real McCoy. There is debate over how the catchphrase came into existence. One version is that Kid McCoy, a boxing champion, was in a bar and got challenged to a fight. Supposedly, as the story goes, he knocked the other person down with one blow. To which the person on the floor remarked, that’s the real McCoy.

When you are assessing an essay, please be careful and mindful about declaring who or what is the real McCoy.

Accusing a human of using generative AI to write their content is currently a rather societally disagreeable offense and the person could end up being unduly tarnished. Consider the risks and costs of falsely accusing someone of such a transgression.

Keep in mind too that astute users of generative AI can easily get the AI to appear to be the real McCoy, in the sense of mimicking a human writing style. This means that there are bound to be essays that were written by generative AI but that can be extremely hard if not impossible to pin down as being written in that manner. The essay will seem to be human-written and there aren’t enough telltale clues to faithfully state otherwise.

I’m not saying that so-called cheaters should go scot-free. I am saying that you need to be careful to abide by one of the most important principles that we dearly hold sacred in our society and culture. Namely, innocent until proven guilty. Be cautious in making accusations, since an innocent person accused of something is still going to be tainted.

Try your darndest to first identify the real McCoy, before finger-pointing takes place.
