I am definitely missing the pre-AI writing era (lesswrong.com)

by joozio 240 comments 322 points


[−] aledevv 47d ago
I want to emphasize a thought you expressed:

> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."

No, you're not the only one experiencing this. I had the same concerns as you: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge or decide without consulting the AI (...just to be safe, you never know...).

The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...

What AI can't do is convey emotions.

[−] krackers 46d ago

>as if I were no longer able to judge, to decide, without consulting the AI

"the Whispering Earring" – https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...

[−] Amekedl 47d ago
depending on how literally the "the brain is a muscle" saying applies, there is no way using LLMs/chatbot systems/AI is not going to deteriorate your brain immensely.
[−] john_strinlai 46d ago
when i was younger, we didn't have cellphones. i had ~20-30 phone numbers memorized, at least. i also used to remember my credit card number. my brain has not deteriorated now that i have offloaded that to my phone.

point being: it depends on how you use it. if you offload critical thinking to ai, you will probably (slowly) atrophy your critical thinking muscles. if you offload some bullshit boilerplate or repetitive tasks or whatever, giving you more time overall to do the critical thinking part, you will be fine.

[−] sumeno 46d ago

> my brain has not deteriorated now that i have offloaded that to my phone.

Is there empirical evidence that you haven't? You wouldn't necessarily be the best judge.

[−] ghywertelling 46d ago
In I, Robot, Will Smith's character prefers to drive himself because he doesn't trust AI. But we are moving towards self-driving because it would be safer. Would you trust a calculation more if it was done by hand using log tables? Having vehicles allowed us to create sports like dirt bike riding and monster truck racing. Yes, something is lost, but something is also gained. We move up a layer of abstraction.
[−] barbazoo 46d ago
If your body is in good shape, stopping exercise won't make you deteriorate that quickly. What I wonder is: will people get in good shape in the first place?

What I mean is, as someone with lots of experience, I don't worry as much about no longer learning the basics as someone in their 20s or 30s maybe should.

[−] justonceokay 46d ago
See the recent article suggesting that use of navigation apps may correlate with increased Alzheimer's at the population level. Will it happen to you? Maybe, maybe not. Life's a box of chocolates!
[−] stavros 47d ago
A friend described it as "there's no blank page any more".
[−] sporkland 43d ago
I remember living pre-internet and post-internet, especially post-Google, and feeling like my own memory was being replaced with an Ethernet cable. The current AI models are definitely carving even more of my brain off; the only thing I'm unsure of is whether I'm a better or worse cyborg at each stage. Like, even with facts and data at my fingertips, I still had to process decisions. I'm wondering what my bio brain's role will be as LLMs progress.
[−] dhosek 46d ago
I’ve completely avoided using AI for writing (although it looks like my coding avoidance is coming to an end). As someone who kind of views using a thesaurus as “cheating”¹, using AI to do the writing is way beyond the pale. A lot of what writing is about for me is discovering and distilling and figuring out what I think. Take that away and I might as well just spend the day watching television and playing video games and getting dumber by the minute.

I would go a step further, in fact, and when I’m writing something creative, I may choose to avoid whatever the autocomplete is suggesting as the next word (although I have it disabled in most contexts). People have a tendency to fall into grooves in their writing/speaking and this kind of acts as a reminder to not do that,³ although I’m far from immune myself (looking at my comment history, it’s upsetting to see the same verbal tics repeated when I have something to say).

1. If you don’t know a word well enough for it to come to mind when you’re looking for a word for something, you may not know it well enough to use it in your writing.²

2. Cue the people who will disagree. Suffice it to say that I occasionally will use a thesaurus to pull up a word that’s just out of reach, especially as my brain gets older and weaker, but even that I try to avoid.

3. When I got my MFA, there was a visiting writer who had published a creative writing book which was largely based on his former students’ transcriptions of his lectures. During the lecture he gave, even though he was speaking extemporaneously, he would speak word-for-word whole paragraphs from the book.

[−] viccis 46d ago

>Although 80 % of the content was my own writing, the fact that it was run in a LLM enginee for grammar and vocabulary cross-check, made it failed the "probable written by AI " metric; and it was rejected.

should be:

>Although 80% of the content was my own writing, the fact that it was run through an LLM engine for grammar and vocabulary cross-checking meant that it failed the "probably written by AI" metric, and it was rejected.

  1. 80 % -> 80%
  2. in -> through
  3. a LLM -> an LLM
  4. enginee -> engine
  5. cross-check -> cross-checking
  6. cross-checking, -> cross-checking (removed the comma)
  7. made it failed -> meant that it failed, (or "made it fail" depending on whether you want to preserve the past tense or preserve the word "made")
  8. probable -> probably
  9. by AI " -> by AI"
  10. ; and it was -> , and it was (no need for a semicolon when linking with a conjunction like "and", and I would consider another word or phrase such as ", and, as a result, it was rejected" to emphasize the causal relationship between the clauses)
That's ten corrections that are fixing straightforward typos and/or grammar and vocab mistakes in one sentence. Most are fairly objective, though I can understand different opinions on 2, 7, or maybe 10.
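Mechanical as they are, the fixes above can be expressed as literal substring replacements (a throwaway Python sketch to illustrate the corrections, not a recommendation to automate your editing; several of the numbered fixes collapse into one replacement pair):

```python
# The garbled sentence quoted above, with the ten fixes applied as
# plain substring replacements.
original = (
    'Although 80 % of the content was my own writing, the fact that it '
    'was run in a LLM enginee for grammar and vocabulary cross-check, '
    'made it failed the "probable written by AI " metric; and it was rejected.'
)

fixes = [
    ("80 %", "80%"),                                                     # 1
    ("run in a LLM", "run through an LLM"),                               # 2, 3
    ("enginee", "engine"),                                                # 4
    ("cross-check, made it failed", "cross-checking meant that it failed"),  # 5, 6, 7
    ("probable written", "probably written"),                             # 8
    ('by AI " metric; and', 'by AI" metric, and'),                        # 9, 10
]

corrected = original
for old, new in fixes:
    corrected = corrected.replace(old, new)

print(corrected)
```

Running it yields exactly the corrected sentence above.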

Relying on AI for editing seems to have atrophied the author's writing if that is what he or she thinks is worth publishing on a blog like this. I would suggest practicing editing your own work and not even thinking about passing it through AI (especially when you were told not to use any AI!) for a while. Given that English is not your first (or even second or third) language, I would also suggest having a native speaker with some demonstrable writing skill review your writing and give feedback on how to make it more idiomatic. For example, writing being "run through an LLM" rather than "run in an LLM" is a relatively subtle difference compared to the others, and it's very, very common for preposition mistakes like this to show up when you're writing in a language other than your first. I am still hopeless with French prepositions.

[−] everdrive 47d ago
Not joking: buy and read books. Old books were written only by people (with the help of an editor).
[−] piker 46d ago
AI for editing is garbage. Chat to it to get ideas maybe, but in its current incarnation it’s just going to degrade anything you filter through it.
[−] hectdev 46d ago
I work mostly on the tech side of things but my corporate limitation has always been writing up documentation, communicating/translating to stakeholders, and recalling everything relevant when writing PR descriptions. AI has been a breath of fresh air. I actually communicate more information efficiently than I would have ever put the effort into before. I still maintain my own writing for more casual things like social media (HN included) and low stakes Slack conversations but AI for getting across ideas and then proofreading it is great.
[−] Morromist 46d ago
"I actually communicate more information efficiently than I would have ever put the effort into before"

- this is subjective, and the evidence seems to point to the opposite, in my view. In reality, most people who think they communicate better with AI don't actually read what the AI has written for them and just puke it out on the world, expecting their readers to do the work.

The AI almost always writes boring, repetitive garbage and very, very often includes redundant information. But saying it creates more efficient communication is a great excuse for being sloppy and lazy.

[−] fluidcruft 46d ago
I have had the same experience, personally, i.e. asking Claude to simplify things for the C-suite has gotten (1) extremely positive feedback from the C-suite and (2) actually relevant conversation about decisions. It's certainly not a one-shot, but iteration with Claude is so fast that it takes just a few hours vs. plotting for weeks about how to clarify technical decisions. I tend to work in a "try it this way" sort of iteration where I need to rewrite things and see what they look like, and using Claude/ChatGPT for opinions about whether things make sense is very helpful (for me). The speed of iteration is great.
[−] Chinjut 45d ago
The C-suite are barely human, so robot-speak appeals to them.
[−] hectdev 45d ago
Which one is it, subjective or evidence-based? I'm sharing what I know is true from my own experience, as well as the fact that I proofread what I send with AI and am aware of how terse I usually am.
[−] wincy 46d ago
I was asked to write user stories about a complex topic where I’m the SME at work. I spent two hours info-dumping everything I knew about the project, everything the AI wouldn’t have any context for: using Cursor to add related projects to the workspace, tagging specific files where we’d implemented similar things with our styles, and noting all the quirks of the system, how it works, and where to find relevant information. I spent a lot of time on it, then asked it to reach out using the CLI to grab relevant information from our infra and write stories about how we’d accomplish everything I intend to get done. I then spent another few hours reviewing the 45 or so stories that conversation generated. It was similar to how I’d talk to a new contractor I was onboarding onto the project.

I have deep knowledge of the information and have done the process we’re doing on two previous projects, but organizing all the stories would have been an absolute nightmare. I still spent half a day on this; I’d guess the fatigue from the boring parts would have made it take a week or maybe two otherwise, because this way I was doing the parts I enjoy (knowing things and describing them) and was able to offload the parts I’m not great at (using a lot of boilerplate language to organize the info I knew into scrum stories). Then I had a meeting, reviewed the stories with my coworkers, we had a discussion, deleted two or three that we determined weren’t necessary, and fixed up one or two where I’d provided insufficient information about some context surrounding the coloring of a page.

It burned through a ton of Opus 4.6 tokens, looked through a ton of code (mostly that I’d written, pre-LLM), but has been amazing for helping me move into a lead position where grooming stories and being organized has always been my weakest point.

Also, when I wrote a postmortem for a deploy that had some issues, I wrote it all by hand. You have to know when the tools help and when they will hinder.

[−] ArcHound 46d ago
I thought it was quite good. Of course, I'm not taking 100% of the output, but it takes care of my grammar blindspots (damn you commas and a/an/the articles!).

Can you please share what and how gets degraded? Sometimes I don't like a phrase it selects, but it's not common

[−] georgemcbay 46d ago

> it takes care of my grammar blindspots (damn you commas and a/an/the articles!)

There are plenty of pre-LLM tools that can fix grammar issues.

> Can you please share what and how gets degraded?

I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:

https://www.artfido.com/this-is-what-the-average-person-look...

As definitionally average the results are not bad but they are also entirely unremarkable, bland, milquetoast. Whether or not this result is a degradation will vary, of course, as some people write a lot worse than bland.

[−] piker 46d ago
Well, for one example, it inhibits your desire to improve against those very blind spots. In exchange, your audience gets 3-4x-length normalized bullshit to read instead.
[−] unyttigfjelltol 46d ago
AI can take a rough draft, clean it up, and shorten it as much as you want. The suggestions very often expose ambiguities in the original text. If you think the LLM got it wrong, it’s nearly always the LLM overreading some feature of the original that you failed to catch, which is precisely what you’d want out of your proofreader.

Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning which went into highly identifiable personal styles, and which often go missing from LLM-edited work.

[−] shagie 46d ago

> Well, for one example, it inhibits your desire to improve against those very blind spots.

I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author of the text and knowing what it should be, it can be difficult to read what you wrote to find those mistakes.

> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.

This is not at all what is implied by having an AI act as an editor. An editor identifies misplaced commas, incorrect subject-verb agreement (e.g. counts), and incomplete ideas left in as sentence fragments.

You appear to be implying that the author is giving the AI agency to create the content, rather than using it as a tool: a super-charged Grammarly.

[−] piker 46d ago

> Even professional authors go to an editor who identifies things that need to be fixed.

Yes, and these people are good at it. What’s your point?

If you need grammar checking, there are thousands of apps including word processors, web browsers and even most mobile devices that will check your inputs for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.

[−] SpicyLemonZest 46d ago
In many kinds of writing, perhaps most, communicating your state of mind to the reader is a primary goal. Even a smart LLM fundamentally degrades this, because to whatever degree that it has a mind it isn't shaped like yours or mine. I've had a number of experiences this year where I get to the end of a grammatical, well-structured technical document, only to find that it was completely useless because it recited a bunch of facts and analyses but failed to convey what the author was thinking as they wrote it.

(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)

[−] viccis 46d ago

>damn you commas and a/an/the articles

This sounds like an ESL issue. LLMs are good at proofreading ESL-written English text. They are not as good at proofreading experienced English writers.

[−] pizzly 46d ago
AI for editing is good and has many useful cases. The part where it fails is that the tone/style of the writing gets overtaken and reads like all other AI-edited writing. The quality of the edit is good; it's just not in your style, and when everyone sounds the same there is no uniqueness. Using it to edit legal letters, software documentation, etc. is a very good use case; using it to explain your ideas in a blog post, not so much.
[−] xdennis 46d ago
Depends on how you use it. If you say "reword this to sound …", then it does suck. But if you say "This is what I wrote. My intention is so-and-so. The audience is ... Please mention and add suggestions for how to fix typos, poor wording, unclear expression, etc.", then you get back what it thinks is wrong and you're in charge of editing in its suggestions. If you let it edit for you, you're more likely to just create slop.

---

Here's an example. My actual text is:

> I want to make it clear that I'm not hunting for things to be angry at, these are issues I've encountered in actual codebases.

If I go the route of prompting it to rewrite, it changes it to:

> I’m not looking for things to be angry about—these are issues I’ve encountered in real codebases.

The em dash is a clear giveaway that it's AI, but it's also soulless.

If I ask it to tell me what's possibly wrong with it, I get that there's a comma splice (I never knew the term; I'm not a native speaker) and that "about" is better than "at". So I make a minor change:

> I want to make it clear that I'm not hunting for things to be angry about. These are issues I've encountered in actual codebases.
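If it helps, the suggestion-style prompt pattern described above can be sketched as a tiny helper. (The function name and exact wording are my own invention for illustration, not any particular tool's API; the point is only the shape of the prompt: context, intention, audience, and an explicit request for suggestions rather than a rewrite.)

```python
def make_review_prompt(text: str, intention: str, audience: str) -> str:
    """Build a critique-style prompt: ask the model to point out problems
    and suggest fixes, rather than rewriting the text itself."""
    return (
        "This is what I wrote:\n\n"
        f"{text}\n\n"
        f"My intention is: {intention}\n"
        f"The audience is: {audience}\n"
        "Please do NOT rewrite it. Instead, mention and add suggestions "
        "for how to fix typos, poor wording, unclear expression, etc."
    )

# Example with the sentence from the comment above:
prompt = make_review_prompt(
    "I want to make it clear that I'm not hunting for things to be "
    "angry at, these are issues I've encountered in actual codebases.",
    "show the complaints come from real code, not from nitpicking",
    "programmers reading a technical blog post",
)
print(prompt)
```

You then apply (or reject) each suggestion yourself, which is what keeps the final text in your own voice.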

[−] skywhopper 47d ago
It’s largely a problem of how these tools are packaged, but while it’s certainly nice to have an LLM check your spelling, or review your grammar or style or usage, you should never allow them to actually edit your document directly.

First of all, they will make substantive changes you didn’t intend. The meaning will get changed, errors will be introduced. Tone will be off, and as the author says, your voice will disappear. There is no single “correct” way to write something. And voice and tone are conveyed with grammatical and usage variation. Don’t give that up to a robotic average.

Secondly, you will never improve, or even maintain, your own writing skills if you don’t actively engage with the suggested changes. You also won’t fully realize half the purpose of writing, which is to understand the topic better yourself. Doing the work of editing your piece will help you understand the subject even better. If you just let the machine “fix” your errors, you’ll become a worse writer and less of an expert over time.

[−] solomonb 46d ago
I actually find Gmail a better editor/grammar checker than LLMs. It makes isolated simplifications/corrections that IMO have minimal style impact and just focus on clarifying phrasing.
[−] boca_honey 46d ago

>I just wrote what my brain is instructing to type (might not reread it before posting)

Why would I put effort into reading something that had no effort put in by the author?

This guy needs an editor, AI or otherwise.

[−] radimm 47d ago
This is exactly the same struggle for me. Writing technical content about PostgreSQL while balancing my own voice against sounding LLM-written is genuinely difficult.

As English is not my first language, I run into the problem that the line between "fix my clumsy sentence" and "rewrite my thought" is very thin. Same with writing "boring" technical explanations versus more approachable content. I'm getting pushback on both.

[−] aidenn0 46d ago
What does it say about me that when I run my writing through one of those "detect if AI" tools I seldom see a value of less than 70% confidence that the writing was AI generated?
[−] epolanski 47d ago
I think that AI will accelerate an already existing trend that predates it: the global regression to the mean we're seeing in every creative field, from design to video games, from cars to fashion.
[−] beej71 47d ago
I feel like asking it to polish or rewrite is going too far. Using it as a grammar/spell checker or thesaurus is fine, though. At least that preserves one's voice.

And I've definitely used it when I can't remember that one stinking word that I know exists and is perfect for this occasion.

[−] ForgotMyUUID 46d ago
After COVID six years ago, I kind of lost my ability to write concisely and clearly. I always loved to compose, to fantasize, but now I feel impaired. Writing any text is a painful process for me: I grab a sheet of paper and do freewriting (writing, without stopping, everything that comes to mind), then mark the bullet points and nice formulations (if present, of course). Sometimes, when I want to sharpen the text, I ask questions of it; I criticize it violently. Then I close the original and rewrite everything from scratch by hand. Handwriting forces humans, naturally lazy creatures, to be concise in their formulations. After 3-4 iterations, I get a text of satisfactory quality.

It is very unfortunate that we only start to value creativity and imagination when we lose them. Although a good pill for creativity in my case is ... boredom and routine. I can't stand doing the same thing again and again in the same fashion. Maybe you might give it a try :)

[−] malwrar 46d ago

> And for people who successfully taken back their creative writing skills, how did you do it?

“AI is one possible reference for my actual writing.” Generate info and perspectives, but only ever write stuff yourself. Something about this forces me to stay in my own “writing voice”, at least personally, in the various places I use AI tech. I think of the tech like a chess engine: engines are better than any human player, but I use them to help me gain perspective rather than to cheat. Otherwise, why bother playing chess?

[−] heavyset_go 47d ago
If you outsource your thinking and skills, your ability to do either atrophies. You'll become dependent on outsourcing for both.

You're trading ability and competence for convenience.

[−] thepasch 47d ago
I never use an LLM to paraphrase my own voice as a matter of principle, but I’ve still been repeatedly accused of doing so because I happen to always have written structured posts, used “smart quotes,” and done that negative comparison thing (it’s genuinely not just fluff, it’s a genuinely useful way to— ah god damn it). Sigh.
[−] keiferski 47d ago
I have been writing stuff for a long time; my first internet experience was posting on forums about a Game Boy Advance game. Then on other forums, for a philosophy degree, and professionally as a copywriter and technical writer. I’ve been meaning to write up a post of my thoughts on writing and AI, but the things I’ve been thinking recently are:

1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.

2. AI has prompted me to study more off-beat writers who followed the rules of language a little less strictly. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.

3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.

[−] stabbles 47d ago
Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".
[−] nikitadotla 46d ago
I am not a native speaker. For anything like HN comments, I don't use AI, but I see no harm in using AI to correct grammar and maybe some wording. The ultimate change shouldn't be a copy-paste replacement, though; it should be well thought through by the author.
[−] Ancalagon 46d ago
I am definitely missing the pre-AI w̶r̶i̶t̶i̶n̶g̶ era
[−] _the_inflator 46d ago
The funny thing I’ve noticed is that I don’t mind posting what AI created. It is something someone made, and if it meets certain criteria I post it, but extremely rarely and not on HN.

On the other hand I am way stricter and harder on myself when writing.

This is something I observed for example.

I don’t use AI for writing. Since I mainly read classics when it comes to belles-lettres, I don’t fear being served AI-generated content.

I still don’t see why, or even how, one would write with AI creating large bodies of text, like a book for example.

It is like ghostwriting. In the best case it is a good one, but style changes due to LLM model changes can kill a book, because the tone is suddenly a totally different one.

[−] vicchenai 46d ago
the typos-as-authenticity thing is kind of funny because AI can just be told to write with typos. the real signal was never the errors, it was always whether the ideas feel like someone actually thought them.