This, the latest installment of “Thoughts (and Thinking), The ELCS.ch blog (kind of)”, is the last in a series of three posts asking whether—as a proofreader and editor—it’s worth buying a new red corrector’s pencil, or whether it would be wiser to call it a day.
Writing: 4,600 years of technological progress
Writing skills are important, and not only for proofreaders like me. The last few decades have seen some very useful additions to the writer’s tool kit. But the latest of these additions claims to make all the others obsolete. And to make the writer obsolete too.
The technology of writing emerged over four and a half millennia ago, and time and again history reveals its value.
The Christian God’s Ten Commandments were not only spoken into Moses’s ear, they were also written in stone.
Martin Luther didn’t only formulate the 95 Theses that triggered the Protestant Reformation, he wrote them out and nailed them to the door of All Saints Church in Wittenberg. And then had hundreds of copies printed to ensure they were widely read.
The Founders of the United States of America took four months to write, amend, and agree the US Constitution—a document influenced by a careful reading of prior written charters such as the thirteenth-century Magna Carta and of the writings of groundbreaking thinkers including John Locke, Montesquieu, and David Hume.
Fast forward to today, and the global literacy rate stands at around 85 percent. And from 100,000-word novels via 20,000-word research papers to the single-word social media post, a huge number of us write and read daily. The book—author Stephen King argues—is a “uniquely portable magic”.
For my part, I’ve spent the last 18 years working intensively with the written language. For the last 13 years this has been my sole occupation. I read. I correct. I propose. I explain. I write. I’ve had a very varied career. In it, I’ve never done anything for longer.
And writing skills aren’t for proofreaders and editors alone.
Anyone who writes needs them. And although they’re reading and not writing, readers obviously need them too. After all, almost all writing is written to be read and understood.
In the last few decades some very smart people have given us a number of tools that make the task of writing—and that of reading—easier.
Machine translation has come of age
Genuinely helpful grammar aids have come into being
Specific style guidance has entered the mainstream
Information technologies, meanwhile, have been applied to make all three ubiquitous. And all available in the same place: whether that be on our desktops or on our smartphones.
And then came generative AI, an advance that appears capable of doing much of the writing for us.
Is it possible that writing skills are no longer needed? Should we just trust AI to write for us?
Let’s see.
The state of the art of writing, and the arrival of generative AI
We have never had a wider range of useful writing aids at our fingertips, but at the same time generative AI claims to make writing a writer-free zone. The technology has been adopted at breathtaking speed. So what does it give us and what are the trade-offs? And as writers and readers, does generative AI deserve our trust?
The last 30 years have been a blessing for authors, proofreaders, and readers alike, filled with smart people applying technological advances to bring us a whole range of useful aids.
Machine translation, once the preserve of sectors with deep pockets—including, notably, software development—has become ubiquitous, affordable, and reliable, Google Translate launching in April 2006, DeepL in August 2017.
Grammar aids have proliferated since Mignon Fogarty launched Grammar Girl in 2006, Grammarly—for example—appearing three years later.
Style guidance has also entered the mainstream, and next year the Chicago Manual of Style will celebrate 20 years online.
Alongside all of these advances, dictionaries have become more easily accessible, Merriam-Webster launching online in 1996 and the Oxford English Dictionary following suit four years later, in the process making my 9-kilogram “Shorter” edition obsolete (although now useful as a doorstop).
Then, in late 2022, some very smart people indeed brought us an altogether different proposition: generative AI. Different because—its proponents argue—it replaces machine translation, grammar aids, style guidance, and dictionaries. And can even replace the author.
The speed, breadth, and depth of take-up of this and other AI tools have been phenomenal, with barely a single field remaining untouched.
Generative AI in academia: Within months of the launch of ChatGPT, a renowned academic and research institution publishes its position: the chatbot creates “highly plausible fiction that oftentimes happens to be factually correct”. A year later and that position has shifted, expanding to include “use it for tasks such as proofreading”. By 2023, 1 percent of academic papers published—so, 60,000 papers—were showing signs of generative AI use. [1]
In the arts: In January 2024, author Rie Kudan’s The Tokyo Tower of Sympathy won a prestigious Japanese literary award only for the author to announce that around 5 percent of the text was taken verbatim from ChatGPT. [2]
In the newsroom: A May 2024 survey of global newsrooms published by the World Association of News Publishers (WAN-IFRA) found that almost half already use generative AI, with only 1 in 5 having any form of guidance in place. [3]
And everywhere else: By the autumn of 2024, the top 40 AI tools were collectively receiving approaching three billion visits per month, with ChatGPT—having broken records by securing 100 million users in its first 64 days—capturing over 80 percent of that traffic. [4]
In the words of ChatGPT itself, “AI-generated text is now commonly used for content creation, summarization, translation, and much more”.
So, what are all these people actually getting for their money, or—if they’re using some “free” incarnation of generative AI—for whatever it is they’re “paying”? Texts generated by AI are:
Typo-free
Grammatically correct
Delivered in seconds, in a wide range of languages
And the downside?
ChatGPT itself, when asked why one should not use it to author text, lists a stream of reasons, including lack of originality, risk of inaccuracy, lack of personal touch, potential for bias, and limited understanding of context.
Little wonder then that the aforementioned renowned academic and research institution tells its students to “thoroughly check AI generated contributions for plagiarism and correctness”.
Personally, I often find AI-generated texts turgid, bland, and opaque. Which—if we knew what data these models had been trained on—might well turn out to be unsurprising.
And there is—I’m convinced—one crucial thing that generative AI really cannot do. (Can you see it too, I wonder?)
So, while there is a growing consensus that generative AI can do the writing for us, consensus and truth are rarely one and the same thing. And the drawbacks and trade-offs involved in using AI to write for us are so great that it’s not a good idea to trust AI to do your writing for you. Or, at least, not yet.
“Things [including AI] can only get better”
But what about the future? Many futurists and proponents of AI tell us that today’s AI is the worst it will ever be and that it can only get better. But is this actually true? The answer may depend on other answers, in particular to three questions:
What “gets better”?
What does “get better” mean?
And for whom?
Or can they?
AI is a technology, and that technology is—given the eye-watering sums currently being invested in it—more likely than not to improve.
But we don’t interact with it as a technology. We interact with generative AI as a product or service.
And many products and services actually get worse over time.
Products: The concept of planned obsolescence has been with us for almost a century, having been coined by Bernard London in his pamphlet “Ending the Depression Through Planned Obsolescence”, in which he proposed legally establishing the programmed obsolescence of personal-use items to stimulate and perpetuate purchasing.
Thus, while my first electric toothbrush worked perfectly well for years, current models “break” (normally) just after their warranty runs out. And when, recently, my computer keyboard finally died after over a decade of service, its replacement physically fell apart in just a week. All of which makes it extremely unlikely that we’ll ever see another “centennial bulb”, the name given to a commercially manufactured light bulb that was first switched on in 1901 and is still burning today. [5]
Services: For services in general and online services in particular there is Cory Doctorow’s concept of “enshittification”. [6] Which seeks to explain how both deteriorate over time, “user loyalty and satisfaction” being sacrificed on the altar of “business customers’ interests”, only for “business customers’ interests” to be, in turn, sacrificed on the altar of “shareholder value”. At which point, Doctorow says, the service “dies”.
Death, I’m sure we can agree, qualifies as “getting worse over time”.
So the statement, “Today’s AI is the worst it will ever be” has no basis in fact. As a product or service AI may deteriorate even if as a technology it improves.
And even technological improvement of generative AI shouldn’t be taken for granted. Some thinkers in statistics and machine learning suspect that in ever-larger models a classical bias–variance tradeoff is at work: measured out-of-sample, the models don’t necessarily perform better. Rather, they develop a tendency to overfit the training data, which then creates poorer and poorer answers to unseen tasks. [7]
While this remains just one of many possible scenarios, it is certainly enough to summon the specter of technological deterioration to counterbalance the promise of always more and better to come.
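The suspicion behind this scenario (more parameters do not automatically mean better out-of-sample performance) is the classical bias–variance story. It can be made concrete with a deliberately simple curve-fitting sketch; this is polynomial regression, not a language model, and every number in it is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Twelve noisy samples of a smooth underlying function.
x_train = np.linspace(0.0, 1.0, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.3, x_train.size)

# Held-out points where we know the true, noise-free values.
x_test = np.linspace(0.03, 0.97, 50)
y_test = np.sin(2 * np.pi * x_test)

def rms_errors(degree):
    """Fit a polynomial of the given degree to the training data;
    return (training RMS error, out-of-sample RMS error)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train, test

low_train, low_test = rms_errors(3)     # modest capacity
high_train, high_test = rms_errors(11)  # enough parameters to hit all 12 points

# The high-capacity fit hugs the training noise (lower training error)
# yet generalizes worse (higher error on unseen points).
print(f"degree 3:  train={low_train:.3f}  test={low_test:.3f}")
print(f"degree 11: train={high_train:.3f}  test={high_test:.3f}")
```

An eleventh-degree polynomial has enough parameters to pass through all twelve noisy training points, so its training error collapses while its error on unseen points grows: overfitting in miniature.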
In all of the above our focus has been on improvements in AI as an abstract thing. What would happen if we moved our focus from generative AI improving to what, concretely, generative AI improves for us?
Because it is entirely possible for a technology to “improve” and be deployed in products or services that deliver value to small groups in society without delivering any wider societal value at all.
In other words, when we say, “Things can only get better”, do we have a shared understanding of whom they will “get better” for? The answer to that question is clearly “no”.
And if we gave generative AI the benefit of the doubt?
But, for the sake of argument, let’s assume that the futurists and proponents of AI are right: that AI will only get better and so will overcome its present limitations. When this happens, should we then trust AI to write for us?
This requires careful consideration. Because handing the writing task over to AI may risk more than just our ability to write.
It has been said that clarity of thought precedes clarity of expression. [8]
But our efforts to express ourselves clearly also inform and hone our thinking.
So if we stop exercising our ability to express ourselves, we may lose our capacity for clear thought.
ChatGPT itself is up front about this, listing “loss of critical thinking skills” as one of the dangers of having generative AI write for us. [9]
And without writing skills, and associated thinking skills, how will you be able to judge what AI “authors” for you?
If you don’t speak Finnish, you may go to, say, DeepL for an English to Finnish translation.
But as you don’t speak Finnish, you won’t be able to judge the quality of that translation.
And why should this be any different with an AI-to-English “translation”? (Spoiler alert: it isn’t.)
Why we shouldn’t trust generative AI to “write” for us
Is it possible that writing skills are no longer needed? Should we just trust AI to write for us?
If we don’t want to author texts that are turgid, bland, and opaque; if we don’t want to risk lack of originality, inaccuracy, an absent personal touch, potential bias, and a limited understanding of context; if we don’t want to have to “thoroughly check AI generated contributions for plagiarism and correctness”, then we can’t currently trust AI to write for us, and the answer must be “Not yet”.
And even if at some point we can trust AI to write for us, we should think hard before doing so. And perhaps the answer should always be, “Not yet”.
Because there may be more at stake than just the time it “saves” us.
The language we use is important. Each individual’s participation in clear communication is intrinsically a societal good, encouraging helpful, productive use of the language we employ to coexist and prosper, promoting clarity of thought, and driving our ability to perceive truth and to differentiate it from lies.
And I hope we can all agree that—even at the cost of a little time and effort—that is something worth aiming for.
Epilogue
My new, red corrector’s pencil arrives tomorrow. I chose a particularly long model.
And that one crucial thing that I’m convinced generative AI really can’t do? I’ll keep that to myself for now, if that’s OK with you.
I can only say that when I ordered the red pencil, I added a blue one to the basket. And two green ones.
Because good writing isn’t a mechanical process.
And because there’s much more to editing than just putting a red line through mistakes.
[1] Scientific American: https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/
[2] The Daily Beast: https://www.thedailybeast.com/novelist-rie-kudan-scoops-literary-prizethen-reveals-she-used-chatgpt/
[3] WAN-IFRA: https://wan-ifra.org/2023/05/new-genai-survey/
[4] The World Bank: https://blogs.worldbank.org/en/digital-development/who-on-earth-is-using-generative-ai-
[5] Wikipedia: https://en.wikipedia.org/wiki/Centennial_Light
[6] Wikipedia: https://en.wikipedia.org/wiki/Enshittification
[7] More on this here: Tarik Dzekman, “AI Math: The Bias–Variance Trade-off in Deep Learning”, TDS Archive, Medium.
[8] More on this here from elcs.ch: https://elcs.ch/blog/clarity-of-thought-precedes-clarity-of-expression-most-of-the-time
[9] Or it told me so on Monday, but had changed its answer by Wednesday. But don’t just take my word for it. For the more technically minded, there’s more on this here: https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/. Thanks go to Rafael Brown for bringing this paper to my attention.
[Illustration by Dietmar Rabich via Wikimedia Commons: colored pencils.]