Artificial Intelligence | Marketing

Spotting AI junk words: Why AI still can’t write like humans

If you don’t pay attention to AI junk words, you’re liable to get stung. Photo by author David E. Sweenor

In an age where AI seems to be doing it all, from writing never-ending sales cadence emails and generating mediocre marketing blogs to drafting emoji-laden social posts, there’s a problem we often overlook: the language itself. AI-generated text, while polished, tends to fall back on the same overused words and phrases, often making content sound generic and uninspired. Have you ever noticed how tools like ChatGPT seem to love words like “delve”? I abhor it and can’t count the times I’ve seen “delve” pop up in AI-assisted phrasing; it’s supposed to add depth but usually just feels empty. Leaders in business and tech need to understand these junk word language patterns to communicate effectively and avoid AI monotony.

Repetitive language in AI: The hidden trap

What if the messaging in your business content started sounding flat, even hollow? Or, on a more personal level, what if a loved one’s college essay or term paper was flagged for being unoriginal simply because an AI reviewer found it too predictable? Large Language Models (LLMs), the driving force behind today’s AI text tools, are trained on enormous corpora—hundreds of thousands of documents that shape how they “talk.” And these datasets aren’t exactly models of creativity; they’re often packed with the same corporate jargon and buzzwords that we see every day.

Using AI-generated text can feel like eating from an endless buffet where the dishes look different, but the flavors are all the same. Words like “innovative,” “transformational,” and “pivotal” keep showing up, dressed in slightly different sentences, but offering little new taste or substance. These words don’t add real meaning; they fill space. This repetition may at first seem innocuous, but it risks making your messaging and content sound generic. Over time, these overused phrases weaken the impact of AI-generated text. In an executive report or client presentation, that kind of redundancy can subtly but surely erode trust. As soon as I see the word “delve” in an article, I’m done reading it.

From scientific publishing to business: AI’s impact on credibility

This problem isn’t just limited to business content. AI-generated text is increasingly common in scientific publishing, raising alarms among researchers and editors. Scientific American recently warned that chatbots are producing significant volumes of scientific text, which, while technically accurate, often feels hollow due to the same repetitive phrasing.[1] We’re living in an era where the fundamentals of scientific research are at stake. In published research, this kind of repetitive AI language undermines credibility. Studies start to blend together, using the same AI-generated vocabulary, which makes it difficult to distinguish original insights from the morass of overused terms.

In business, the stakes are just as high. People buy from people—and businesses buy from vendors they trust to understand their unique challenges. When client-facing reports start to sound templated, filled with generic descriptors and predictable phrasing, clients may question whether the expertise behind them is truly tailored to their needs. Leaders who rely on AI for reports, sales pitches, and communications must stay vigilant to maintain authenticity and credibility. If they don’t, what happens to the trust and connection that clients expect?

Avoiding AI junk words: Tips for keeping your communication crisp

For leaders, managing the perception of AI-generated content is as important as managing its quality. AI-generated text leans heavily on predictable sentence structures, including what researchers call “right-branching adverbial clauses.” This means adverbs are often pushed to the end of sentences, as in “The report was completed successfully.” Structurally, this sounds a bit awkward, especially in executive communications where clarity and directness matter. Compare that with a more straightforward, active sentence: “The team successfully completed the report.” The difference may seem minor, but these small adjustments go a long way toward making content feel intentional and clear. If you’d like to learn more, “Do LLMs write like humans? Variation in grammatical and rhetorical styles” is an excellent scientific paper on the subject.[2]

I first noticed these patterns last year when I began writing more blogs and started using Grammarly to refine my grammar. Every time I accepted Grammarly’s changes without question, my content would get flagged by AI content detectors on Medium. Curious, I ran a test: I submitted the same blog, one version with Grammarly’s rephrasing suggestions and one without. Sure enough, the version with Grammarly’s edits triggered the content detectors. This confirmed my suspicions, which Copyleaks also addresses in a blog post on how AI-driven grammar tools can set off these detectors.[3] Some of this was also discussed in my blog post, Escaping Generative AI Mediocrity.[4]


When using AI to help with content outlines, I often saw phrases like ‘it is important to note that…’ or ‘a significant aspect is…’ peppered throughout the text, adding bulk but little value. In business communication, filler phrases don’t just take up space—they risk pulling readers’ attention away from what really matters.

Practical tips for leaders

Now that you’re aware of the uninspiring content AI can churn out ad nauseam, what can you do about it?

  1. Treat AI text as a first draft: Always approach AI-generated content as a rough draft. Look out for overused words like “innovative” and remove AI junk words that don’t add value to the message.
  2. Train your team and your AI: Equip your team with a checklist to refine AI-generated text. Encourage them to replace weak phrases like “it can be argued that” with direct statements such as “we believe that.” These guidelines help keep messaging sharp and relevant. If you’re using AI regularly, train it to reflect your brand’s voice and unique organizational context.
  3. Edit, refine, and iterate: AI content isn’t “set it and forget it”—this isn’t Ronco! Regularly review for right-branching clauses and rearrange sentences to keep them active. For example, change “the analysis was conducted thoroughly” to “the team thoroughly conducted the analysis.” These adjustments add clarity and a confident tone.
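The checklist idea in tip 2 can even be automated. Below is a minimal Python sketch that scans a draft for junk words and filler phrases; the word and phrase lists are illustrative placeholders you would tune to your own brand voice, not a definitive list:

```python
import re

# Illustrative lists only -- adapt these to your brand's voice.
JUNK_WORDS = {"delve", "innovative", "transformational", "pivotal"}
FILLER_PHRASES = [
    "it is important to note that",
    "a significant aspect is",
    "it can be argued that",
]

def flag_junk(text: str) -> dict:
    """Return counts of junk words and filler phrases found in a draft."""
    lowered = text.lower()
    words = re.findall(r"[a-z']+", lowered)
    junk_counts = {w: words.count(w) for w in JUNK_WORDS if w in words}
    filler_counts = {p: lowered.count(p) for p in FILLER_PHRASES if p in lowered}
    return {"junk_words": junk_counts, "filler_phrases": filler_counts}

draft = ("It is important to note that our innovative platform "
         "lets teams delve into pivotal insights.")
report = flag_junk(draft)
```

A report like this won’t rewrite the text for you, but surfacing the counts makes the editing pass in tip 3 much faster.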

As your organization explores AI’s potential for communication, treat AI as a support tool, not a replacement. With human oversight, AI-generated text can meet your content needs without sacrificing quality.

Don’t forget the humans

In a world where AI can produce endless content but often misses the mark on originality and depth, human oversight is more essential than ever. While AI-generated text offers speed and consistency, it lacks the nuance, insight, and authenticity that only people can provide. As leaders, it’s up to us to ensure our communication stays grounded in a human voice—one that resonates with clients, colleagues, and audiences alike. AI may keep churning out ‘innovative’ phrases that we can ‘delve’ into, but let’s not forget the humans who bring true meaning and connection to our words.


If you enjoyed this article, please sign up for my newsletter. Consider following me on Medium and LinkedIn.


Please consider supporting TinyTechGuides by purchasing any of the following books.

● The Generative AI Practitioner’s Guide: LLM Patterns for Enterprise Applications

● Generative AI Business Applications: An Executive Guide with Real-Life Examples and Case Studies

● Artificial Intelligence: An Executive Guide to Make AI Work for Your Business

● The CIO’s Guide to Adopting Generative AI: Five Keys to Success

● Mastering the Modern Data Stack

● Modern B2B Marketing: A Practitioner’s Guide for Marketing Excellence


[1] Stokel-Walker, Chris. 2024. “AI Chatbots Have Thoroughly Infiltrated Scientific Publishing.” Scientific American. https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/.

[2] Reinhart, Alex, David W. Brown, Ben Markey, Michael Laudenbach, Kachatad Pantusen, Ronald Yurko, and Gordon Weinberg. 2024. “Do LLMs write like humans? Variation in grammatical and rhetorical styles.” arXiv. https://arxiv.org/html/2410.16107v1.

[3] “Do Writing Assistants Like Grammarly Get Flagged As AI?” 2024. Copyleaks. https://copyleaks.com/blog/do-writing-assistants-get-flagged-as-ai.

[4] Sweenor, David. 2024. “Escaping Generative AI Mediocrity | by David Sweenor.” TinyTechGuides. https://tinytechguides.com/blog/escaping-generative-ai-mediocrity/.

