I am a journalist, writer and editor, and I have been thinking about the implications of large language models (LLMs) for my work since ChatGPT launched.
I know many who think LLM bots are like a confused undergraduate who can find answers on the internet but is only sometimes reliable: they generate fictional information, produce nonsensical responses, and lack the creative spark that strikes at the human core.
I get it. It's not perfect. But dismissing it as a fad is a naive mistake. Any discussion on AI and writing should start at the opposite end: acknowledging this new technology's incredible capabilities.
Learning how to chat with bots
The idea of a machine churning out clean prose is daunting: how can it?
I remember hearing Noam Chomsky say language and thought are the core of our being — these two species properties set us apart from the animal world. "There's no thinking in the world or maybe in the universe in anything comparable to what we have," the American linguist told the New York Times. "We are somehow capable of constructing in our minds an unbounded array of meaningful expressions. Mostly, it happens beyond consciousness. Sometimes, it emerges to consciousness."
So how is a machine able to do this? I could not wrap my head around its mechanics when I first started playing with the bots, and even though I now know a little more about how the underlying models work, I still cannot fully grasp what's happening.
But I am clear about this: sceptics have not learnt how to chat with the bots and get them to do things. As economist Tyler Cowen has noted in a Bloomberg column: "To use it well, you need to let go of some of your intuitions about talking to humans. ChatGPT is a bot."
That aligns with my experience. After months of playing around, my intuitions for LLMs have significantly improved, and I know where and how to use them and where not to. (Don't ask one to write a poem for your dog and then complain that it sucks.)
I have also accepted one inevitable fact: the models keep improving at a staggering pace, and their responses will only get better.
You say they do not know about current events? Wait a while — they will get there.
It's short on credible facts? Train the model on a self-curated, source-limited data set — that is, let the model query and base its answers on information sources you trust.
It does not offer links? A search and retrieval system makes this trivial.
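For the technically curious, here is a minimal sketch of what that could look like, written in Python. Everything in it is hypothetical: `search_trusted_sources` and `ask_llm` are placeholder functions standing in for whatever curated index and model API you happen to use. It illustrates the retrieve-then-ask pattern, not any particular product.

```python
# A minimal sketch of "retrieve, then ask": ground the model's answer in
# sources you trust and return the links alongside the response.
# `search_trusted_sources` and `ask_llm` are hypothetical placeholders
# for your own curated index and whichever model API you use.

def search_trusted_sources(query: str) -> list[dict]:
    # A real version would query a curated index: your notes, archives,
    # publications you trust. This stub just returns a canned passage.
    return [{"title": "Example source", "url": "https://example.com/essay",
             "text": "A relevant passage pulled from a source you trust."}]

def ask_llm(prompt: str) -> str:
    # Placeholder for a call to a language model with the prompt below.
    return "A draft answer grounded in the passages above."

def answer_with_sources(question: str) -> str:
    passages = search_trusted_sources(question)
    context = "\n\n".join(f"{p['title']} ({p['url']})\n{p['text']}" for p in passages)
    prompt = (
        "Answer the question using only the sources below, and cite them.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    answer = ask_llm(prompt)
    links = "\n".join(p["url"] for p in passages)
    return f"{answer}\n\nLinks:\n{links}"

print(answer_with_sources("What have essayists said about finding a writing voice?"))
```

The point is that the links and the facts come from the retrieval step, not from the model's memory, which is why the "no links, shaky facts" complaints are about missing plumbing rather than a permanent limit.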
The point is this: poking holes in LLMs' existing capabilities won't help writers build a good mental model of what to do in a world where machines can write. They will do many things they can't do today — weeks or months from now.
So accept it. Assume LLMs can do more than you think they can. And then think hard about whether and how you want to use them in your writing.
Some ground rules
After months of experimenting, I landed on some rules for myself.
Let's start with what I won't use LLMs for — things that bots can do much faster and maybe even better, but that I still won't let them do for me.
1. There is no shortcut to learning to write. You have to do the grunt work.
I know so many people who want to be writers. They have ideas they want to share with the world. They want to tell a story. They want to share their experiences.
What sets apart those who fulfil their aspirations from those who don't is simple: the former sit down and start writing. That's it. Seriously.
And the more you write, the more you learn what you cannot do on the page. It's frustrating: you can imagine something in your head — an idea, an argument, an experience — but it's not translating into words.
If you have the mental fortitude to hang in there, first for months and then for years, magic happens: you look back at your old drafts and can't believe what you wrote five years ago. You feel embarrassed, but you also feel proud of your progress. You still don't feel good enough, but you can see how much you have grown.
Every single writer I know, without exception, goes through this phase.
But it doesn't happen just like that, and it does not happen for everyone. It happens for those who take the craft of writing seriously and work on it. They work on the structure of stories, the clarity of arguments, the construction of paragraphs, the sound of sentences, and the choice of words.
It's like building muscle. You have to hit the gym and lift weights. And as any good fitness coach will tell you: consistency matters more than perfection. You need to show up and do your thing. Stop for a while, and you lose your gains. (Ask me about it. Sigh!)
But hey, there are steroids to boost muscle growth and flex faster. You also know the gains don't last, and steroids are not good for your body.
Think of LLMs as just that: a steroid for writing. If you learn to use them well, you can churn out impressive sentences. But they don't help you learn to write.
If you are serious about writing, get yourself a copy of Strunk and White and work on your writing.
Resist the temptation to use a steroid. You can use it as a writing assistant or a coach—which I will talk about later—but don't ignore the painfully rewarding work of learning to write.
2. Writing is thinking
Most people imagine writing is merely the output of thinking: you decide what you want to say, then write it down. Wrong.
So much of my thinking happens when I sit down and write. I can't think if I don't write. It's a loop: good thinking makes you a better writer, and good writing makes you a better thinker.
Hear it from Paul Graham: "A good writer doesn't just think, and then write down what he thought, as a sort of transcript. A good writer will almost always discover new things in the process of writing. And there is, as far as I know, no substitute for this kind of discovery."
Hear it from Shane Parrish: "Writing requires the compression of an idea. When done poorly, compression removes insights. When done well, compression keeps the insights and removes the rest. Compression requires both thinking and understanding, which is one reason writing is so important."
Not learning how to write means not learning how to think. And if you are not thinking hard enough or clearly enough, well, you won't have much to offer in your writing. Loop.
Extracting meaning is hard. Thinking is hard. So again: do the hard things yourself.
This is why I'd say use summarisation — which bots do pretty well — cautiously: that ability makes sense for a resource-crunched content website that wants automated summaries for SEO or for serving different readers. But it is not ideal for those who want to share their own ideas with the world.
For example, if you read a 5,000-word article on some big polarising debate you are trying to understand, don't just feed it into ChatGPT for your notes. Think about what you read, and write down what you understood. Same for academic papers. Or a book. There are exceptions, and I will discuss them below, but as a rule, keep this critical part of learning and writing away from the bots. It won't serve you well in the long run.
3. Reading like a writer
Reading has various motivations, from accessing facts to enhancing understanding to pure entertainment.
I have another reason: improving my writing. When reading a non-fiction book to gather material, I look for anecdotes, stories and ideas. I note down things that strike me, stuff I can use in my writing.
But I also read for style: I like unpacking why a particular choice of words or narrative structure worked for me and why something didn't. I learn a lot from thinking about what the writer is trying to do on the page, breaking down their narrative structure, and extracting lessons.
Which is why I don't read book summaries. They don't serve me. I imagine a future where it would become easy to query books for knowledge — and that's great, I would use it as a research tool — but that, at least for me, is not a replacement for reading the book itself.
Because reading the book improves my writing, and I can't outsource this task to ChatGPT completely — only partially.
4. Find your voice
This is the most crucial bit. It takes years to build a writing voice. I have made some progress, but I am still discovering it. This is where authenticity comes in — your voice, your words, your artistic choices. In a world where so much content will be produced by machines, the writer's unique voice will matter even more. Work on it.
So, one last time: if the bots are doing the writing for you… how will you find your voice?
Everything I have stated above is rooted in my personal experience, which is why I appreciate its importance. And I started with "how I won't use LLMs" to set the context for my approach: don't let technology sit in the driver's seat and let its abilities dictate your goals. Do the opposite: set your goals, figure out what you want to do, and identify places where the tech can add value.
LLMs as language tutor and thoughtful friend
Let's talk about that now: how I am using LLMs to become a better writer.
1. Research assistant and thoughtful friend
When there was no internet, people went to bookstores and libraries to access new information. As someone who grew up with Google, I find it hard to imagine what that world looked like. The internet has exposed me to many writers and thinkers, and to ideas and perspectives, that I might never have encountered. And LLMs are the next big jump in information discovery.
This is really exciting. For example, I often have a vague idea about a concept, and Googling does not help because my query is not specific. I want to read about something, but I don't know how to find it.
Dumping raw thoughts into ChatGPT and asking it to tell me "what others have said about it" gives me the words to kickstart my thinking. It's so good. It gives me the direction I need. In the same way, I have used the bots to find books that can help me research a niche topic. (I wish I were friends with librarians!)
I was writing something a few weeks ago and had forgotten some basic high school physics. The bots do an excellent job of explaining these things, so I had a ten-minute learning session to refresh the basics, plus a little more, and with the fundamentals clear, I could write the piece more effectively. (Trick: use an "explain this so a teenager can understand" prompt when trying to wrap your head around something — try it!)
LLMs have also helped me when I am stuck on some heavy text. As I write this, I am struggling to understand a classic philosophical novel. Because I don't have a community interested in that specific book, I use a bot as a friend I can ask things like, "what do you think this guy meant when he said this" — and it gives me great ideas!
2. Brainstorming partner
I use a bot to challenge my arguments with prompts like "tell me how this could be critiqued". It quickly offers perspectives, sometimes exposing me to views I hadn't considered, which pushes me to examine my argument more rigorously and makes my thinking better.
A couple of months ago, I was toying with an idea for a new long-term writing project (which eventually didn't materialise) and turned to a bot for ideas, and again, excellent stuff.
The point of brainstorming is not to reach answers: it's an exploration. I work as an independent writer. I don't have a full-time job, and I don't have colleagues around me every day to discuss every idea of mine. In this scenario, just having a bot to bounce ideas off is a good push.
I don't think it can replace having the same chat with an editor or another writer — and fortunately, I have friends who talk to me about my writing projects — but I like the possibility of quickly getting access to some intelligent thoughts.
3. Language tutor
I also use LLMs to strengthen my language fundamentals. Give it two sentences and ask it to explain which one is correct usage and why. Or paste a paragraph, ask it to fix the grammar, observe what changes were made, and think about why.
I also use it to find more words: to describe, say, the atmosphere of a cafe, the material of a chair, the face of a person, or some specific behaviour. A thesaurus fails me at times, and I need a friend's help to break the block, but again, a bot can help. With the right prompt, it throws up suggestions I can choose from.
4. Beta reader
When I ask ChatGPT a simple question — "what do you think about this? Is the writing clear?" — as a first pass at feedback, the response is often insightful. As a next step, I ask "what suggestions do you have to make it better?" to push it beyond generic comments towards specifics, and I have found this quite useful too.
This is also the part where I wish I had more control: I do not want an LLM to rewrite my text or introduce new things into it without my asking. But sometimes it does: it interprets a request for feedback as a request to rewrite, which I am not looking for, and that hurts the experience.
And when it does rewrite or fix grammar, unless specifically and carefully instructed, it does not explain why it made those changes, which means I can't learn from the improvements. So as much as I would like a bot to serve as a thoughtful editor, it does not always do what a good editor actually does: help you grow as a writer.
I haven't explored the AI writing tools on the market yet, but I think many of us will benefit from products that are not aimed solely at improving productivity and efficiency, but at giving us smart assistants that help us do our jobs better. And grow.
That’s what I have learnt for now.
To sum it up: don't let a bot do the writing for you. Instead, break down the stages of writing, from research and ideation to language and narrative, and see where LLMs can add value without making you too lazy to do the hard work. In a world that seems to be changing so fast, the only way I have learnt to get through confidently is by sticking to the fundamentals — because that's the only thing that does not change.