Do your own writing (alexhwoods.com)

by karimf 239 comments 748 points

[−] roadside_picnic 46d ago
I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing and I realized there were major contradictions I needed to resolve. Likewise, I've also had numerous times where writing about something loosely and casually revealed something that fundamentally changed how I viewed a topic and really consolidated my thinking.

However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.

For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.

Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, but people groaned at the idea of having to read them. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.

I think it's going to be a while before the full impact of AI really works its way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).

[−] Aurornis 46d ago

> When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.

This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.

The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.

[−] nerevarthelame 46d ago
I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas."

It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.

[−] CharlesW 46d ago
The title of this article is Don't Let AI Write For You, when its point seems to be closer to Don't Let AI Think For You (see "Thinking").

This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, (b) in every situation.

Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4

This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.

From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.

[−] jasoneckert 46d ago
For me, drawing the line as to when you will leverage AI and when you won't comes down to a quote from Kurt Vonnegut: "Practicing an art, no matter how well or badly, is a way to make your soul grow, for heaven's sake. Sing in the shower. Dance to the radio. Tell stories. Write a poem to a friend, even a lousy poem. Do it as well as you possibly can. You will get an enormous reward. You will have created something."

Art is where I choose to draw the line, for both ideation and content generation. That work report I leveraged AI to help flush out isn't art, but my personal blog is, as is anything I must internalize (that is thoroughly understand and remember). This is why I have the following disclaimer on my blog (and yes, the typo on this page is purposeful!): https://jasoneckert.github.io/site/about-this-site/

[−] stephen_cagle 46d ago
Writing (unassisted) is probably the first step towards your own independent thoughts.

I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds with something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."

I think a diversity of opinion is important for society. I'm worried that LLMs are going to group-think us into thinking the same way, believing the same things, reacting the same way.

I wonder if future children will need to be taught how to purposely have their own opinions, being so used to always asking others before even considering things on their own. The LLM will likely reach a better conclusion than you would on your own, but there is value in diverging from the consensus and thinking your own thoughts.

https://stephencagle.dev/posts-output/2025-10-14-you-should-...

[−] enduku 46d ago
I feel like LLMs are just forcing me to realize what writing actually is. For me, writing is basically a mental cache clear. I write things down so I can process them fully and then safely forget them.

If I let an LLM generate the text, that cognitive resolution never happens. I can't offload a thought I haven't actually formed, hence I struggle to safely forget about it.

Using AI for that is like hiring someone to lift weights for you and expecting to get stronger (I remember Slavoj Žižek equating it to mechanical lovemaking in a recent talk of his somewhere).

The real trap isn't that we writers will be replaced; it's that we'll read the eloquent output of a model and quietly trick ourselves into believing we possess the deep comprehension it just spit out.

It reminds me of the shift from painting to photography. We thought the point of painting was to perfectly replicate reality, right up until the camera automated it. That stripped away the illusion and revealed what the art was actually for.

If the goal is just to pump out boilerplate, sure, let AI do it. But if the goal is to figure out what I actually think, I still have to do the tedious, frustrating work of writing it out myself.

[−] modriano 46d ago
This reminds me of a talk I attended many years ago given by the director of UChicago's writing program (I later found a recording of it [0]). His thesis was that writing IS the process of thinking. That talk changed the way I write and made writing a primary tool I reach for when I want to learn something new.

Words / language are the great technology we've made for representing ideas, and representing those ideas in the written word enables us to evaluate, edit, and compose those smaller ideas into bigger ideas. Kind of like how teachers would ask for an explanation in my own words, writing down my understanding of something I'd heard or read forced me to really evaluate the idea, focus on the parts I cared about, and record that understanding. Without the writing step, ideas would easily just float through my mind like a phantasm, shapeless and out of focus and useless when I had a tangible need for the idea.

I am glad I learned to write (both code and text) long before Claude came online. It would have been very hard to struggle through translating ideas from my head into words and words (back) into ideas in my head if I knew there was an "Easy button" I could hit to get something cogent-sounding. I hope a large enough proportion of kids today will still put in the work and won't just end up with a stunted ability to write/think.

[0] https://www.youtube.com/watch?v=vtIzMaLkCaM

[−] PaulRobinson 46d ago
Outsource things that aren't valuable to you and your core mission. Do the things that are valuable to you and your core mission.

This applies at a business level (most software shops shouldn't have full-time book keepers on staff, for example), but applies even more in the AI age.

I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.

Same with writing. There's an old joke in the writing business that most people want to be published authors more than they want to go through the process of writing. People who say they want to write don't actually want to do the work of writing; they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.

When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.

Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey? Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive; having an LLM involved breaks that.

So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.

[−] nbaksalyar 46d ago
I have a feeling that the same idea absolutely does apply to code. Writing code is much closer to writing prose than it may seem. And the act of writing code also makes you think as you write. Even if you're writing boilerplate. Because how else would you uncover subtle opportunities to reduce the boilerplate and introduce new, better abstractions?
[−] ozozozd 46d ago
I realized that running one's own writing through an LLM reduces the amount of information in it. Sort of like washing the nutrients out of a fruit.

When we write about something, inevitably, things about us leak into our writing. How we think about this thing, our value judgments about it, how much we thought about it, whether our perspective and thoughts on it are aged or fresh all come through, even if we don’t intend to. All of this information builds trust, helps the reader empathize and see our point of view.

When our writing passes through an LLM, most of these are simply lost. An average expression of those thoughts with all the sharp edges - its character, essence - removed comes out.

All writing is opinionated, and when it runs through an LLM, it comes out opinion-less. I noticed that I don’t care for opinion-less writing. Or people.

One exception is the official Python documentation. I recently read some of the new documentation and realized that it reads almost exactly as it did when I first read it in 2010. I couldn't believe it. Low opinion, high information density. I know for a fact that it has opinions in parts, but it's shockingly infrequent.

[−] locusofself 46d ago
I had an interesting experience the other day. I've been struggling with some lyrics to a song I am writing. I asked Claude to review them, and it did an amazing job of finding the weak lines and best lines, and nearly perfectly articulating to me why they were weak or strong. It was strange because the output of the analysis almost perfectly mirrored my own thoughts.

When I asked it for alternatives/edits, however, they were not good.

[−] jcalvinowens 46d ago
I feel very much the same way about debugging: it is through the process of repeatedly being slightly wrong that I come to actually understand what is happening.

Sometimes an LLM can shortcut me through a bunch of those misunderstandings. It feels like an easy win.

But ultimately, lacking context for how I got to that point in the debugging litany always slows me down more than the time it saved me. I frequently have to go backwards to uncover some earlier insight the LLM glossed over, in order to "unlock" a later problem in the litany.

If the problem is simple enough the LLM can actually directly tell you the answer, it's great. But using it as a debugging "coach" or "assistant" is actively worse than nothing for me.

[−] jilles 46d ago
The author articulates perfectly what I think too. I’d recommend for everyone to read Writing to Learn by William Zinsser. It’s an incredible book showing that you can learn anything by writing about it.

With an LLM doing all the writing for you, you learn close to nothing.

[−] _pdp_ 46d ago
LLMs are not very good at generating ideas - unless you ask them to go crazy, in which case they can generate interesting ideas that quickly become repetitive. But the initial set is actually good, in my own experience.

As for writing, we need to keep in mind that LLMs are tools that augment. So yes, if you completely abdicate all responsibility to the LLM, that is basically not constructive at all. But if you use it as a tool, what difference does it make? Spell and grammar checkers are also changing your text (and of course I am exaggerating a little).

And I do think LLMs can help you think better, but not in a default mode. It is not about prompting skills but about making it work the way you want. And that takes time, because it is not deterministic and it requires understanding how you generally think and write. For some, it might not be possible most of the time. For others it works really well, maybe because they write like an LLM?

Btw, we often forget that English is not native for the majority of people on this planet.

IMHO, using LLMs to express yourself clearly is many times better than remaining misunderstood.

[−] firefoxd 46d ago
I'm 100% an advocate for not using LLMs for writing... but I'll tell you where I use them for just that: for ceremonies.

A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate and need approval from different stakeholders. Everybody stamps their name on them without ever reading them. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize it, someone is happy, and they are filed for the records.

I call these "ceremonies" because they are a requirement we have: they help no one, we don't know why we have to do them, but no one wants to question it.

[−] bluepeter 46d ago
Nowadays my writing (and maybe all of ours) has totally devolved into "prompt-ese". Much like the days of yore when we all approached Google searches with acrobatic language, knowing exactly how to phrase things to get something done.

Now? I am pushing so much of my writing into prompts for AI, where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar are mostly irrelevant. But I am also able to insert nuances and sidetracks that ARE passing vital context to the AI but may be lost on people. Or at least pre-prompt-writing people.

[−] fleebee 46d ago
You quote this:

> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?

Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten.

I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.

[−] D13Fd 46d ago

> LLMs are useful for research and checking your work.

I have to disagree that it's good for LLMs to do the research, depending on the context.

If by "useful for research" you mean useful for tracking down sources that you, as the writer, digest and consider, then great.

If by "useful for research" you mean that it will fill in your citations for you, that's terrible. That sends a false signal to readers about the credibility of your work. It's critical that the author read and digest the things they are citing.

[−] godot 46d ago
The way I approach having LLM help with writing documents like this is to have it help me clean up my writing, not write the substance of it.

I tend to do extensive research (that process in itself would involve LLMs too, sure) in a tech plan, a product spec, etc. and usually end up with a really solid idea in my head and like say, five critical key points about this tech plan or product spec that I absolutely must convey in this document.

Then I basically "brain dump" my critical key points (including everything about it, background/reasoning, why this or that way, what's counterintuitive about it, why is this point important, etc.) in pretty messy writing (but hitting all the important talking points) to a LLM prompt, asking it to produce the document I need (be it tech plan, product spec, whatever) and ask it to write it based on my points.

The resulting document has all the important substance on it this way.

If you use an LLM to produce documents like this by way of a prompt like "Write a tech plan for the product feature XYZ I want to build", you're going to get a lot of fluff. No substance, plenty of mistakes, wrong assumptions, etc.

[−] bushido 46d ago
I wrote about something similar this week[0]. Beyond doing your own writing and understanding the outcomes that you want clearly, there is an increasing need for us to write our own docs/tickets as all of these are also the prompts.

Docs written by agents almost always produce mediocre results.

[0] https://news.ycombinator.com/item?id=47579977

[−] radicalriddler 46d ago
I wonder if anyone from HN would be willing to offer their opinion on some features I'm building, which really amount to outputting LLM-generated writing.

It's basically automating release notes and sprint summaries from existing systems like Jira and Linear. The target user is a product team; the target readers are business stakeholders who want to validate your existence. I've found this process to be stupidly time consuming for both our delivery manager and whichever dev they decide to tap on the shoulder to help contextualize tickets.

I feel like LLMs are really good _summarizers_, and they can easily highlight when your tickets don't have enough context for actual people: if even an LLM can't write a summary with good enough context, neither can a human.

Idk, maybe it's a sensible use case because you REALLY don't want novel ideas from the LLM in this case. You want it to tell you 1:1 what you did this sprint based on a list of issues.

[−] Jbird2k 46d ago
I use LLMs for compilation of information sometimes. I'm a teacher and I sometimes use them to hack together a quick worksheet for my students. I see they need some practice with a certain concept and I get the LLM to generate a LaTeX doc which I compile to PDF. I find it can be particularly useful for document creation, but it is horrible at writing anything in sentence form. It stinks at conveying my voice.

I will sometimes write a lesson and have an LLM generate a quiz and give me feedback on my content, searching for mistakes or unclear content.

I have also used it to help me structure a document. I give it requirements and it makes a general outline that I then just fill in with my own words.

I’m still not sure how to approach my students’ uses of an LLM. I am loath to make a hard and fast rule of no LLMs because that’s ridiculous. I want to encourage appropriate use. I don’t know what is appropriate use in the context of a student.

An LLM can be a great learning tool but it also can be a crutch.

[−] ghurtado 46d ago

> The goal of writing is not to have written.

A certain percentage of comments I write on social networks end up being deleted before even clicking post. Sometimes after spending 10 or 15 minutes writing it.

The reasons are many, and I've long suspected I shouldn't feel like I'm throwing my time away when this happens.

Now I have a way to remember why.

[−] Dev-Chris 45d ago
Using LLMs to write defeats the whole purpose. When you write you learn; you see where you had gaps in knowledge and understanding, and it prompts you to go back and fill them in. It helps bring out and sharpen your reasoning. Using an LLM to write is like using a forklift at the gym. You learn nothing.
[−] fraywing 46d ago

>Letting an LLM write for you is like paying somebody to work out for you.

It's worse than this. If someone is working out for you, they still own the outcome of that effort (their physique).

With an LLM people _act_ like the outcome is their own production. The thinking, reasoning, structural capability, modeling, and presentation can all just as easily be framed _as your creation_.

That's why I think we're seeing an inverse relationship between ideation output and coherence (and perhaps originality), and a decline in creative thinking and creativity [0].

[0] https://time.com/7295195/ai-chatgpt-google-learning-school/

[−] visarga 46d ago
The cognitive benefit of writing comes from externalizing and evaluating ideas under friction. LLM conversation provides more friction per unit time than solo drafting because you're constantly reacting to a semi-competent interlocutor who gets it almost-right in ways that force you to articulate exactly where it went wrong.

I checked my logs and I write 10 words in chat for 1 word in LLM output for final text. So it's clearly not making me type less. I used to type about 10K words per month now I type 50-100K words per month (LLM chat is the difference).

The surplus capacity provided by LLMs got reinvested immediately in scope and depth expansion. I did not get to spend 10x less time writing.

[−] gbro3n 46d ago
I fully agree with the sentiment of the article. I will say that I feel I've had some success having an LLM outline a document, provided that I then go through and read/edit thoroughly. I think there's even an argument that this (a) possibly catches areas I have forgotten to write about, and (b) hooks into my critique mode, which sometimes feels more motivated than author mode (I'm slightly ashamed to say). This does come at the cost, however, of not putting myself in 'researcher' mode, where I go back through the system I'm writing about and follow the threads, reacquainting myself and judging my previous decisions.
[−] 6thbit 46d ago
Writing down specs for technical projects is a transformational skill.

I've had projects that seemed tedious or obvious in my head, only to realize hidden complexity when trying to put their supposed trivialness into written words. It really is a sort of meditation on the problem.

In the most important AI-assisted project I've shipped so far, I wrote the spec entirely myself first. But feeding it through an LLM feedback loop felt just as transformational: it didn't only help me get an easier-to-parse document, but helped me understand both the problem and my own solution from multiple angles and allowed me to address gaps early on.

So I'll say: Do your own writing, first.

[−] fallinditch 46d ago
The rational response to document overload is to mostly ignore it.

Workers and managers in organizations are being overwhelmed by large numbers of documents because it's so easy to bang something off that's 'about right' and convincing enough.

But there's still some value in writing documents. I agree with the original article - it's all about thinking. My take on it is this: it's possible to use LLMs to write decent documents so long as you treat the process as a partnership (man and machine), and conduct the process iteratively. Work on it, and yes, think.

[−] instalabsai 46d ago
For the same reason I prefer writing with a keyboard instead of handwriting, I prefer writing with an LLM over manually typing these days. I end up spending the same amount of time rewriting and editing the text as I would have otherwise, but instead of worrying about grammatical mistakes or the flow of the text, I spend 100% of my time getting my idea across.

Of course you can be lazy with LLMs, and I can usually tell if it's one-shotted as well, but if you are a good writer, you'll get 10x out of using an LLM to write.

[−] 2020science 45d ago
I find it depends on context. I write a lot as an academic and author. When I need to generate functional content that has a specific purpose (knowledge base, transfer of information etc) I will use AI where it makes sense. Where I write to explore ideas, develop my own thinking, and connect with others in a very relational way, I intentionally do not use AI. Plus, when I do this, writing is an extension of my identity and I'd rather not give that away!
[−] keybored 46d ago

> The goal of writing is not to have written. It is to have increased your understanding, and then the understanding of those around you. When you are tasked to write something, your job is to go into the murkiness and come out of it with structure and understanding. To conquer the unknown.

Adults now have to have it explained to them, like children, that you can't just stream info through the eyes and ears and expect to learn anything.

That’s one explanation for this apparent need; there are also more sinister ones.

[−] globular-toast 46d ago
I still don't understand the concept of "using an LLM to write". You have to write your context in. That is your writing! Just send me that!

I wrote about this ages ago. Just send me the prompt! https://blog.gpkb.org/posts/just-send-me-the-prompt/

[−] bboynton97 46d ago
There's a lot of ways to use an LLM, the least effective is automating an entire process- yet it's the most compelling.

To your point, it's entirely a balance. I personally will record a 10-15 minute yap session on a concept I want to share and feed it to an agent to distill it into a series of observations and more compelling concepts. Then you can use this to write your piece.

[−] thisisrobv 46d ago
I think we're beyond this already: most Claude/ChatGPT users just assume that everything is written by AI, because that's what they'd do. Credibility has been lost, but there are certainly many cases where human thinking will improve the final artifact. I think it's enough to focus on improving quality vs. claiming some moral high ground.
[−] aussieguy1234 46d ago
Write down the core facts, questions and answers in a very rough around the edges draft. Don't bother spell or grammar checking. But make sure the facts are all there.

Then, ask an LLM to fix up the article, make it look professional and fill in the "fluff". Explicitly tell it to not include facts not already in the document.

Review the document and if it's all good, it's done.

[−] KolibriFly 45d ago
A lot of the value is in discovering the holes in your own reasoning while trying to make it legible to someone else
[−] TrianguloY 46d ago

> Letting an LLM write for you is like paying somebody to work out for you.

This. This is the big distinction. If you like something and/or want to improve it, you do it yourself. If not, you pay someone else to do it. And I think that's ok.

But I guess some people either choose a wrong job or had no other option. I'm happy to not be in that group.

[−] mrroryflint 46d ago
I recently added an AI Policy to my blog (https://rory.codes/ai-policy). The assumption now appears to be AI guilty unless proven innocent which I think is a travesty for readers and writers alike.
[−] AYBABTME 46d ago
This reduces writing to one concept: thinking, with the writing just a byproduct. But writing is also presentation and communication.

There is nothing wrong with speechwriters. Various authors spilled out their thoughts in rough form and had writers turn them into better-structured, better-phrased, and more understandable prose. Hand-writing each sentence that is presented as an end product to the reader doesn't solve that problem.

Forcibly coupling the two is an arbitrary choice that may be a valid tradeoff for some and not so for others, and not so for _all_ writing.

I'm not good at looping through a document with proper English prose. My writing is raw, particular, and I gloss over a lot of details. LLMs help me turn my shitty extensive notes, in bad grammar and syntax, into shareable and understandable artifacts. They help me turn more of my thoughts into communication ingestible by others. Without AI, I communicate less of my thoughts due to friction. My thoughts are formed and authored and written, but not in a format consumable by anyone else.

Ebikes help older riders keep riding.

[−] zhoujing204 46d ago
I am not a native English speaker. I can read fine, but writing is a big struggle for me, especially formal and academic writing. AI does help me write better; of course I'll review what the AI generated. (None of the above was written by AI, obviously.)
[−] codexb 46d ago
I find LLMs particularly good at filtering and distilling a large rambling idea of mine into a well-formatted and coherent paragraph, and also at removing any statements that would be perceived as overly argumentative or rude.

In essence, LLMs are a much better spell check.

[−] drnick1 46d ago

> They are particularly good at generating ideas.

I think it's the opposite. People have ideas and know what they want to do. If I need to write something, I provide some bullet points and instructions, and Claude does the rest. I then review, and iterate.

[−] theAurenVale 45d ago
this maps to what I'm seeing in visual AI too. rendering quality is insane now, nobody disputes that. but most AI-generated images look generic and flat because there's no direction behind them

same thing with writing imo. the output quality is technically fine, but if you didn't wrestle with the ideas yourself the result reads like no one actually thought about it

writing forces you to confront where your thinking is vague. directing a photo shoot does the same thing actually: the moment you have to commit to a specific angle or framing, you discover what you really want to say. skip that step and you get competent emptiness

[−] tombert 46d ago
My blog is 100% written by me. You can tell because of all the typos.

I don't really understand why people will create blogs that are generated by Claude or ChatGPT. You don't have to have a blog; isn't the point of a blog that it's your writing? If I wanted an opinion from ChatGPT I could just ask ChatGPT for an opinion. The whole point of a blog, in my mind, is that there's a human who has something they want to say. Even if you have original ideas, having ChatGPT write the core article makes it feel kind of inhuman.

I'm more forgiving of stuff like Grammarly, because typos are annoying, though I've stopped using it because I found I didn't agree with a lot of its edits.

I admit that I will use Claude to bullshit ideas back and forth, basically as a more intelligent "rubber duck", but the writing is always me.

[−] balderdash 45d ago
I do my own writing, but I can go so much faster by just writing all of my thoughts down, organizing them a bit, and using LLMs to clean up and better organize my thinking.
[−] vicchenai 46d ago
the gym analogy lands. you don't hire someone to do your reps, but it's fine to hire a trainer to critique your form. that distinction matters when thinking about how to actually use these tools.