> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."
No, you're not the only one experiencing this. I had the same concerns as you: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge or decide without consulting the AI (...just to be safe, you never know...).
The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...
depending on how literally the "the brain is a muscle" saying applies, there is no way that using LLMs/chatbot systems/AI is not going to deteriorate your brain immensely.
when i was younger, we didn't have cellphones. i had ~20-30 phone numbers memorized, at least. i also used to remember my credit card number. my brain has not deteriorated now that i have offloaded that to my phone.
point being: it depends on how you use it. if you offload critical thinking to ai, you will probably (slowly) atrophy your critical thinking muscles. if you offload some bullshit boilerplate or repetitive tasks or whatever, giving you more time overall to do the critical thinking part, you will be fine.
In I, Robot, Will Smith's character prefers to drive himself because he doesn't trust AI. But we are moving towards self-driving because it would be safer. Would you trust a calculation more if it was done by hand using log tables? Having vehicles allowed us to create sports like dirt bike riding or monster truck racing. Yes, something is lost, but something is also gained. We move up the layer of abstraction.
If your body is in good shape, stopping exercise won't make you deteriorate that quickly. What I wonder is, will people get in good shape in the first place.
What I mean is, as someone with lots of experience, I don't care as much about not learning the basics anymore as someone in their 20s or 30s maybe should.
See the recent article suggesting that use of navigation apps may correlate, at the population level, with increased Alzheimer’s. Will it happen to you? Maybe, maybe not. Life’s a box of chocolates!
I remember living pre-internet and post-internet, especially post-google and feeling like my own memory was being replaced with an Ethernet cable. The current AI models are definitely carving even more of my brain off, the only thing I'm unsure of is if I'm a better or worse cyborg at each stage. Like even with facts and data at my finger tips I still had to process decisions. I'm wondering what my bio brain's role will be as LLM's progress.
I’ve completely avoided using AI for writing (although it looks like my coding avoidance is coming to an end). As someone who kind of views using a thesaurus as “cheating”¹, using AI to do the writing is way beyond the pale. A lot of what writing is about for me is discovering and distilling and figuring out what I think. Take that away and I might as well just spend the day watching television and playing video games and getting dumber by the minute.
I would go a step further, in fact, and when I’m writing something creative, I may choose to avoid whatever the autocomplete is suggesting as the next word (although I have it disabled in most contexts). People have a tendency to fall into grooves in their writing/speaking and this kind of acts as a reminder to not do that,³ although I’m far from immune myself (looking at my comment history, it’s upsetting to see the same verbal tics repeated when I have something to say).
⸻
1. If you don’t know a word well enough for it to come to mind when you’re looking for a word for something, you may not know it well enough to use it in your writing.²
2. Cue the people who will disagree. Suffice it to say that I occasionally will use a thesaurus to pull up a word that’s just out of reach, especially as my brain gets older and weaker, but even that I try to avoid.
3. When I got my MFA, there was a visiting writer who had published a creative writing book which was largely based on his former students’ transcriptions of his lectures. During the lecture he gave, even though he was speaking extemporaneously, he would speak word-for-word whole paragraphs from the book.
>Although 80 % of the content was my own writing, the fact that it was run in a LLM enginee for grammar and vocabulary cross-check, made it failed the "probable written by AI " metric; and it was rejected.
should be:
>Although 80% of the content was my own writing, the fact that it was run through an LLM engine for grammar and vocabulary cross-checking meant that it failed the "probably written by AI" metric, and it was rejected.
1. 80 % -> 80%
2. in -> through
3. a LLM -> an LLM
4. enginee -> engine
5. cross-check -> cross-checking
6. cross-checking, -> cross-checking (removed the comma)
7. made it failed -> meant that it failed, (or "made it fail" depending on whether you want to preserve the past tense or preserve the word "made")
8. probable -> probably
9. by AI " -> by AI"
10. ; and it was -> , and it was (no need for a semicolon when linking with a conjunction like "and", and I would consider another word or phrase such as ", and, as a result, it was rejected" to emphasize the causal relationship between the clauses)
That's ten corrections that are fixing straightforward typos and/or grammar and vocab mistakes in one sentence. Most are fairly objective, though I can understand different opinions on 2, 7, or maybe 10.
Relying on AI for editing seems to have atrophied the author's writing if that is what he or she thinks is worth publishing on a blog like this. I would suggest practicing editing your own work and not even thinking about passing it through AI (especially when you were told not to use any AI!) to edit for a while. Given that English is not your first (or even second or third) language, I would also suggest having a native speaker with some demonstrable writing skill review your writing and give feedback on how to make it more idiomatic. For example, writing being "run through an LLM" rather than "run in an LLM" is a relatively subtle difference compared to the others, and it's very very common for preposition mistakes like this to show up when writing in another language than your first. I am still hopeless with French prepositions.
It’s largely a problem of how these tools are packaged, but while it’s certainly nice to have an LLM check your spelling, or review your grammar or style or usage, you should never allow them to actually edit your document directly.
First of all, they will make substantive changes you didn’t intend. The meaning will get changed, errors will be introduced. Tone will be off, and as the author says, your voice will disappear. There is no single “correct” way to write something. And voice and tone are conveyed with grammatical and usage variation. Don’t give that up to a robotic average.
Secondly, you will never improve, or even maintain, your own writing skills if you don’t actively engage with the suggested changes. You also won’t fully realize half the purpose of writing, which is to understand the topic better yourself. Doing the work of editing your piece will help you understand the subject even better. If you just let the machine “fix” your errors, you’ll become a worse writer and less of an expert over time.
I actually find Gmail a better editor/grammar check than LLMs. It makes isolated simplifications/corrections that imo have minimal style impact and just focus on clarifying phrasing.
This is exactly the same struggle for me. Writing technical content about PostgreSQL while balancing my voice, without sounding LLM-written, is genuinely difficult.
As English is not my first language, I do run into the problem that the line between fixing my clumsy sentence and rewriting my thought is very thin. Same with writing a "boring" technical explanation versus more approachable content. I'm getting pushback on both.
What does it say about me that when I run my writing through one of those "detect if AI" tools I seldom see a value of less than 70% confidence that the writing was AI generated?
I think that AI will accelerate an already existing trend that predates it: the global regression to the mean we're seeing in every creative field, from design to videogames, from cars to fashion.
I feel like asking it to polish or rewrite is going too far. Using it as a grammar/spell checker or thesaurus is fine, though. At least that preserves one's voice.
And I've definitely used it when I can't remember that one stinking word that I know exists and is perfect for this occasion.
After COVID six years ago I kind of lost my ability to write concisely and clearly. I always loved to compose, to fantasize, but now I feel impaired. Writing any text is now a painful process for me: I grab a sheet of paper, do freewriting (writing everything that comes to mind, without stopping), then mark the bullet points and nice formulations (if present, of course). Sometimes, when I want to sharpen the text, I ask questions of it; I criticize it violently. Then I close the original and rewrite everything from scratch by hand. Handwriting forces humans, naturally lazy creatures, to be concise in their formulations. After 3-4 iterations, I get a text of satisfactory quality.
It is very unfortunate that we start to value creativity and imagination only when we lose them. Although a good pill for creativity in my case is ... boredom and routine. I can't stand doing the same thing again and again in the same fashion. Maybe you might give it a try :)
> And for people who successfully taken back their creative writing skills, how did you do it?
“AI is one possible reference for my actual writing.” Generate info and perspectives, but only ever write stuff yourself. Something about this forces me to stay in my own “writing voice”, at least personally, in the various places I use AI tech. I think of the tech as a chess engine: engines are better than any human player, but I use them to help me gain perspective rather than to cheat. Otherwise, why bother playing chess?
I never use an LLM to paraphrase my own voice as a matter of principle, but I’ve still been repeatedly accused of doing so because I happen to always have written structured posts, used “smart quotes,” and done that negative comparison thing (it’s genuinely not just fluff, it’s a genuinely useful way to— ah god damn it). Sigh.
I have been writing stuff for a long time; my first internet experience was posting on forums about a Game Boy Advance game. Then on other forums, for a philosophy degree, and professionally as a copywriter and technical writer. I’ve been meaning to write up a post of my thoughts on writing and AI, but the things I’ve been thinking about recently are:
1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.
2. AI has prompted me to study more off-beat writers that followed the rules of language a little less frequently. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.
3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.
Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".
I am not a native speaker. For anything like HN comments I don't use AI, but I see no harm in using AI to correct grammar and maybe some wording; the ultimate change shouldn't be a copy-paste replacement, though, it should be well thought through by the author.
The funny thing I recognized is that I don’t mind posting what AI created. It is something someone did, and if it meets certain criteria I post it, but extremely rarely and not on HN.
On the other hand I am way stricter and harder on myself when writing.
This is something I observed for example.
I don’t use AI for writing. Since I mainly read classics when it comes to belles-lettres, I don’t fear being served AI-generated content.
I still don’t see why, or even how, one would write with AI producing large bodies of text, like a book for example.
It is like ghostwriting. In the best case it is a good one, but style changes due to LLM model changes can kill a book, because the tone suddenly becomes a totally different one.
the typos-as-authenticity thing is kind of funny because AI can just be told to write with typos. the real signal was never the errors, it was always whether the ideas feel like someone actually thought them.
I miss the text-only reading era. This is a blog and should not need JavaScript enabled to render text to a page. I would rather not be annoyed by flavor-of-the-month duplicate scroll bars, cookie banners, newsletter pop-ups 5 seconds in, scroll-to-the-top pop-ups, idle overlays, highlight helper bars that break copy-paste, etc. This blog didn't have all of those, but it had some. I'm sure the metrics look great because I had to load this page four times: once initially; then after disabling JavaScript and realizing it doesn't render anything at all; a third time after re-enabling JavaScript and deleting all the annoying elements; and a fourth time to make sure my cosmetic filter was applied correctly. 4x the interactions! Must be doing something right.
So much content is just straight copy/pasted from the LLM now. Articles, blog posts, LinkedIn posts, reddit comments, etc. Even just using the LLM for 'editing' tends to shift the voice to an obvious LLM voice when used naively. It is getting worse too. Last week a co-worker sent me a screenshot of Claude for me to review their "work", which was just whatever Claude made up.
Usually, if something is very obviously unfiltered LLM output, I just stop reading.
I do use LLMs for writing myself. They are useful, but are poor authors.
Every now and then when I'm reading something, the writer will use a turn of phrase, a specific word, a metaphor, etc, that is unusually clever, or allows me to see the concept in some obtuse light. Or even, they are just able to choose the right words to make something sound musical or rhythmic in some pleasant way. It's intellectually delightful to come across these in writing.
I've never been surprised at AI writing. Emotion is the biggest part of communication, and these grey boxes have none.
I didn't expect AI to write 95%+ of my code, but here we are.
I can't say whether I feel worried or not. I am trying now to gamify manual coding by reviewing and editing a random file from my work codebases each day, and I still do occasional leetcodes and katas.
But overall I enjoy coding less for sure. I can't think of the last time I spent heavy time focusing on a refactor or lower level design abstractions.
I don't think I will be still coding 5 years from now. The joy is just not there.
Once I think something is AI I just can’t read it anymore. It isn’t out of principle or anything, I just become so distracted by the idea that I can’t focus or derive any benefit or pleasure from continuing.
I've been a Grammarly customer for quite some time, and I have tried the AI suggestions, but it always loses something and ends up with a whiny, apologetic tone.
I'd push back on the author and ask whether his writing is really getting worse, or whether his standards have increased, leading to undue stress that might throw off the flow state.
Relevant to the headline, though less relevant to the actual article, I miss the pre-AI youtube era. I search for educational videos for my kids and videos that are from the last couple years are likely AI slop. Some are pure slop, just terrible. Others are also slop (no visible narrator, pronunciation mistakes an AI would make, repeating stock footage) but are actually well done enough to be educational. However, I don't want my kids watching these even if they are educational because it could lead them to think that the style of video is not bad. I want them to have an attuned radar to that kind of junk.
For now, I just keep scrolling until I find something from before 2020, which is much more likely to be purely human-made and edited.
I am sorry, but perhaps some use of AI or a grammar check would help? A lawn that's not overly manicured has its charm, but if it has one too many barren patches or clumps of overgrown grass, it doesn't appeal as much. This essay feels a bit like that.
This writing is terrible. I can't read it. But are people really unable to write without wanting to put it into an LLM? I haven't done a single piece of natural language writing with an LLM. The thought has never even crossed my mind. Why would I? Surely to give the LLM context of whatever I want to write about would amount to, you know, writing it down? Just write that "prompt" in your blog and send it. No need for LLMs.
What AI can't do is convey emotions.
>as if I were no longer able to judge, to decide, without consulting the AI
"the Whispering Earring" – https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
> my brain has not deteriorated now that i have offloaded that to my phone.
Is there empirical evidence that you haven't? You wouldn't necessarily be the best judge.
>I just wrote what my brain is instructing to type (might not reread it before posting)
Why would I put effort into reading something that had no effort put in by the author?
This guy needs an editor, AI or otherwise.
You're trading ability and competence for convenience.
AI always seems so verbose and wordy.
I get that the mainstream ones have been RLHF'd to death, but surely there must be others that are capable?
I never passed any AI writing as my own. I would feel utterly awful. Also, I love tweaking words until they sound perfect.
The number of people who just nonchalantly admit that AI writes their messages is honestly scaring me.
you are missing the writing era, which is gone. whatever we have now will slowly congeal into cold grue that will get a name or names
the madness of being chastised for speakerphoning and disturbing people gulping the slop
what do we call that?