Why I can’t wait for generative AI to replace people writing

Art Leyzerovich
5 min read · Mar 23, 2023


There is a wave of worry about ChatGPT et al and how it threatens writing, writers, education, essays, testing, information consumption, media networks, etc.

Currently, many people write many things, and most people’s writing is either average or below average (by definition). A lot of this average writing appears on LinkedIn for example, as comments, new job announcements, shared personal stories, recruiting outreaches, and so on. The same is true for most published news, articles, studies, briefs, reports, etc. 90%+ of the words I read on a daily basis are useless filler (and I’m pretty selective about what I read). The worst class of writing is probably corporate communication, internal and external (such as from CEOs or marketing teams).

To some degree, this is a personal/mental preference. I trained early in life in maths and computer science, and, in both disciplines, one of the key measures of quality is parsimony: doing a mathematical proof in as few steps as possible, or writing a piece of code that achieves a task with as few instructions as possible. So when I read something, I expect all of the words to have value — to convey new information, to frame the issue uniquely, to illustrate a specific scenario, to share a personal feeling. Most things I read (including on LinkedIn) completely fail to do that.

So you might ask me: if you hate most writing so much, how will generative AI, which will produce immeasurably more average (and poor) writing, help?

Well, follow me a few steps forward.

The more generative AI-written stuff is out there, the more obviously banal and boring the writing of people who don't write really well becomes. The contrast between their writing and the 10,000 other pieces that look exactly the same throws the weakness of their effort into sharper relief. At that point, if you can't write better than a machine can, if your writing is basically linguistic pattern matching plus some adjacently related facts from the internet, then stop writing!

If you’re about to make a LinkedIn post that says “I’m so excited to be joining Company X, where I’ll be working with brilliant people, on an amazing mission,” then don’t post it. Just don’t do anything, don’t bother. I’ve read that message a thousand times. And there are 10k CPUs out there right now churning out the same text. Just add your new job to the career section and save us all the time it takes to read something that could have been written by ChatGPT.

In fact, I’m going to start pointing out more and more when people write to me something that looks like it could have easily been written by AI. “Hey, did you use ChatGPT to write that?” will be my new trolling technique, by which I hope to either nudge them to stop writing altogether or write better than ChatGPT can.

If you think you’re a journalist who is going to use ChatGPT to do your job, I’m going to stop reading your junk. If you think you’re a recruiter who’s going to use ChatGPT to write me solicitations, I’m not going to be impressed. If you’re about to make a post on LinkedIn along the lines of “ChatGPT, write a social media post about a personal failure I had and how I overcame it in a vulnerable tone for a business audience”, then please write it all by yourself and make it really, really good.

If this results in more people writing nothing and fewer people writing really good stuff, then I think we all win. I urge you to adopt my nudge framework, too. Also, to stop writing poorly.

In the end, of course, the title of this article is a bit misleading. Taken to its logical conclusion, I posit that, after AI replaces poor and average writing, or dilutes it out by sheer volume, it will so oversaturate our eyeballs that we will simply stop reading any of it. At that point, writing it will be pointless, and poor and average writing will become obsolete… and disappear. Instead, parsimony will prevail, and more things that require communication will be written as a set of basic points, facts, or very short summaries that communicate very specifically what one person wants to say, rather than what someone thinks others need to hear. There will be less need for other kinds of expansive writing and, consequently, less need for ChatGPT.

And if you give me something wordy and longer than the key points, I’m probably just going to use ChatGPT to summarise it back down. Consider that we’ll just end up with a self-suppressing arms race — people who use ChatGPT to pad their core ideas/points/facts with filler will be countered by other ChatGPT users squeezing the message back down to its essence before consuming it.

Then we won’t have to read another CEO’s annual letter about what a challenging year it has been and how the company rose to the occasion despite the headwinds, and it was all thanks to the people and the company culture and how we continue to make progress towards our goals, even though it was one of the hardest things to do to let 10% of staff go last year, and how proud the CEO is of the team (and everyone that’s still at the company).

Happy days ahead! Bring on generative AI!

Some notes:

In the few days I was thinking about this essay, I came across a few related things. One was the South Park episode (“Deep Learning”) that amusingly poked fun at its own use of generative AI. Fortunately, the non-AI part of the show was still better than the AI part, but I’m sure the South Park boys know they need to keep their game up, or else. There was also a hilarious plot thread in which the kids were using ChatGPT to chat with each other. Of course, the natural extension of that is chats (or LinkedIn posts/comments) with ChatGPT running both sides of the conversation.

I also read an article in The Atlantic (high-quality writing) about how AI will change standardised testing, in which the author makes a parallel argument to mine: that “computers might reveal the conceit of valuing tests so much in the first place.” If it’s so easy for a computer to do a standardised test, why even bother doing them (or at least doing them in the way that we have until now)?

Finally, on the dialectical issue of whether fewer people writing (anything) or just writing less is actually a good thing for society: I’m going to leave that question on the table for the moment. (Perhaps I’d start by saying it could follow the same logical arguments as whether people still need to learn cursive writing or to do trigonometry vs. just letting calculators handle it — yeah sure, kids these days can’t solve for tangents but… who cares? I haven’t solved for a tangent since I was 20 and haven’t written a letter by hand for 20 years.) At least on the clearly positive side of the ledger, my argument does not result in less reading overall, but rather everyone reading things of higher quality.
