Allow me to get to know you, mistakes and all (sebi.io)

by sebi_io 163 comments 318 points

[−] borski 63d ago
I find that AI is very useful for getting me past the 'blank page' writing block, but inevitably it writes in ways I would never, and so I end up editing it heavily. But, for me, a boy with ADHD, editing something is infinitely easier than writing it from scratch.

I think this is the opposite of how most people tend to use LLMs, and I actually think my way is the "better" way. My issue has never been the act of writing well, or clearly expressing what I mean... it has been the inertia of putting words on a page at all.

(and an LLM had nothing to do with this comment :P)

[−] moondance 62d ago
I can relate to the inclination, but so many new insights and moments of inspiration are necessarily confined to that painstaking iterative line-by-line process of real writing. When you are simply prompting and editing, you will fill the page (and it might even sound like “you”), but you will not have that delightful experience of encountering something unexpected along the way to filling it.
[−] gtowey 62d ago

> I find that AI is very useful for getting me past the 'blank page' writing block, but inevitably it writes in ways I would never, and so I end up editing it heavily. But, for me, a boy with ADHD, editing something is infinitely easier than writing it from scratch.

As someone who also has ADHD, I would beg you to reconsider this strategy.

Getting the first thoughts down on paper is the hardest part, especially for those who may have trouble with focus, but that's exactly why you should practice it!

It's 90% of the task; it's where you have to practice executive function to plan what you're going to write in the overall broad sense. Please don't give up on it and hand that task over to the LLMs. There are a lot of strategies you can use to break through that barrier, and you'll be better off strengthening that muscle instead of leaving it to wither.

[−] HPsquared 62d ago
Similar for me, I find it's an absolutely amazing "creative unblocker".

It generally has enough "activation energy" to get me over the hump of wherever I've been mentally stuck.

[−] glitchcrab 62d ago
Yes this is my use-case for it too - it's great to generate a structure which I will keep but I always end up reworking all the actual content so it sounds like me. It is a great way to get past the 'getting started' hurdle though.
[−] Nashooo 62d ago
You're the first to articulate my exact use case with AI as well! It really helps get me in 'the zone'. I actually now dictate as well, then have the AI rewrite it, and then I start editing, to lower the barrier even more.
[−] nlawalker 62d ago
Same, it's the push that gets the ball rolling down the hill.

>clearly expressing what I mean

I have a use for it here too - I use it like a "power thesaurus" when I've got the feeling that the word I have doesn't have quite the right connotation, or to test out different versions of rephrasing something when I feel it could flow better or be clearer but I can't quite put my finger on it. But I don't just take the output and paste it; I use it like a pair programmer for writing, where I'm the driver and the AI is the observer.

[−] Terretta 61d ago
If our blank page is first filled by the mediocritizer machine, are the ideas even ours?

To be successful in a workplace, team members fall into roles. It's interesting to consider which roles would have an LLM write its median ideas first: https://internalchange.com/order-profiles-training-materials...

[−] birdsongs 62d ago
Have you tried free/automatic writing? I don't know what the term is actually, but just stream of consciousness, putting words to paper, zero filter or pause, straight from the brain.

I usually start with "I don't know what to write but" and then just don't let myself stop. I have to keep putting words down, only rule.

It sometimes starts or turns gibberish, but eventually I hit a flow and real stuff starts to come out, and then I'm just writing.

I've seen the concept applied to art/drawing as well. I highly recommend trying!

Quick edit while I can: after googling this there's a lot of woo/spiritual stuff about it. I don't really subscribe to that, I just think it's a great tool to get out of your head and enter the flow state of writing, when it feels inaccessible.

[−] ugtr3 62d ago
I was also like this, but I managed to wire my brain to get over the anxiety/fear, or whatever it was, about getting started, and it's worked magically.

And I’m thankful - I’d really hate to rely on something else to get me going…

[−] treenode 62d ago
AI writing sucks. The punchy words, the hyperbole, the monotony and pervasiveness are all exhausting. But I can't deny there's one upside. People who grew up speaking and living in other languages, people whose English is poor, finally have a level playing field. It's a great equaliser of our English writing privilege.

The thing that worries me most is that it's going to redefine the way we write. We absorb language. To compensate for all this AiSpeak I consume, I need to read more literature.

What’s human writing going to look like in a few years if this trend doesn’t stop? I believe that the LLMs will catch up soon and introduce more variance and fewer words designed for impact in their language, delivering us from this AiVerse into one where AI writing is almost indistinguishable from human writing. But until then, we must read more.

[−] arjie 63d ago
I really don't mind text filtered through an LLM per se. But I prefer high signal-to-token so to speak. The way humans talk and write means that the seemingly extraneous text they add often provides an interesting insight into the thought patterns of the person, and therefore mistakes or even pointless monologues can be interesting.

This is not always true. Once there was an online reaction to short content that made people treat "long-form" content as desirable entirely due to its length. I rather like reading books and the New Yorker's fiction section when I still subscribed, but much of this "long-form" content was token-expansion of a formulaic nature which I did not enjoy. LLMs have mastered this kind of long-form token-expansion.

This is assuming people are using an LLM in good faith, obviously. One day, perhaps LLMs will learn to express what someone is saying in an elegant way that is enjoyable for people like me to read. But even then, I will have the difficulty of distinguishing whether this is a human speaking through an LLM in good faith or a human who has set up a machine to mimic a human.

The latter is undesirable to me because I have access to the best such machines at a remarkably low cost. Were I to desire a conversation with an LLM, it is trivial for me to find one. I'm not coming here for that[0].

A sufficiently insightful LLM which prompts my thinking in certain ways wouldn't be unwelcome to me, I suppose. I have a couple of my friends for whom I still go on Twitter to read what they say even after I have stopped using the site routinely. If I found out the posts were entirely an LLM I think I would still read them simply because I find the posts useful and with sufficiently high signal-to-token.

0: Certainly, if every place only spoke about things I was interested in and never in things I was not interested in, I wouldn't need separation of interest spaces at all. But the variation of interest vectors for different humans has made this impossible.

[−] pmoati 63d ago
I totally agree with you. I'm French (nobody is perfect ^^), I'm not so fluent in English, and I'm dyslexic; that's why I often write my message and then ask Claude to translate it into English, because I feel I will lose the credibility of my message if there are too many mistakes... But you're right, so this message is not translated by an LLM :D
[−] stingraycharles 63d ago
Yeah, some colleagues started using ChatGPT for internal communication as well. While we don’t like to mandate or prohibit anyone from using any tools, we did need to make it really clear to everyone that this is not productive. Grammarly to make small corrections to external recipients is fine. Using ChatGPT to “polish” your message is not. If you’re not sure about your English abilities, we offer you free English lessons and encourage giving each other feedback during chats.

LLMs shouldn’t be used for communication at all if you want any form of authenticity.

[−] charlie0 63d ago
This is starting to become my latest pet peeve, people using Claude to write their messages in Slack. I'm going to just stop communicating via text with these people.

It's one thing to have Claude polish a message and another thing for it to write out an entire message.

[−] DrammBA 63d ago
It feels so disrespectful sometimes too, having to read a long paragraph that conveys so little meaning knowing full well the original prompt was probably very short and I'm now wasting extra time parsing the hollow LLM text expansion.
[−] Leomuck 62d ago
I very much agree with this. I had several experiences where I wanted to express something in a very particular way, told an LLM about it, and what came out was just so generic that it really wasn't authentic. It didn't represent me at all: not the morals I have, not the way I talk, not the way I want to express things. I think more and more that authenticity and character are what we need to preserve with all the power we have, if we don't want the internet to become just a gateway for generic back and forth. After all, the internet was introduced so humans could connect and share.
[−] eterevsky 62d ago
I don't often use AI to cleanup my texts, but when I do, I fully own the output. I make a conscious decision whether to leave in every AI suggestion or not. The final text _is_ what I want to say.
[−] solatic 62d ago
The way the post is written, I wonder if the author is working for a company going through a growth spurt and where, through sheer size, everything is becoming more "corporate".

There's a huge difference between having AI clean up a text you send privately to someone you have worked closely with for years, versus a broad spectrum text sent by a VP to hundreds of people or more. The first case is reprehensible, for the reasons the author lays out. But as for the second case, corporate doublespeak has been a meme since long before the advent of AI and it would remain even in some AI-pocalypse. Just because your boss puts out sanitized language in a mass communication, doesn't inherently mean your boss won't still be present and real with you in a more private setting.

[−] ahf8Aithaex7Nai 63d ago
That’s exactly why I’ve refused to use autocomplete on smartphone keyboards from the very beginning. I want to express myself in my own words.

In a work context, of course, things are a bit different: I want to move the project forward and not jeopardize my future paychecks. Authenticity tends to take a back seat there. However, I’d be more concerned about inefficiency. Is it really necessary to run every piece of communication through ChatGPT to refine the wording? Are you sure nothing gets lost in the process? Doesn’t that end up wasting a lot of work time without adding any real value?

And on top of that, it leads to alienation and frustration. If you talk to me as if you were an LLM, don’t be surprised if I talk to you as if you were an LLM.

[−] am17an 62d ago
Working in open source, I've now heard of a wide variety of disabilities that people have which mean they have to be aided by an LLM for writing even the descriptions of their PRs.
[−] bushido 62d ago
I've seen this come up in a few comments, so I'm just adding it to a separate one in case it helps folks.

Something I have seen a lot of people talk about in the comments here, as well as do in practice within my company and among friends, family, etc., is that they say something and then let Claude or GPT rephrase it into a prompt that they'll then use.

In my experience, this will almost always bring about worse results than if you communicated directly with the LLM. I believe this happens because of a few reasons.

1. LLMs tend toward word inflation: they'll create plausible-sounding prompts, but the words they introduce have a higher propensity to produce worse, cookie-cutter results from other agents, coding assistants, writing assistants, or any other downstream tool.

2. By putting a layer between what we're saying and what the LLMs interpret, we're not honing our ability to articulate and prompt better; we come to depend wholly on the intermediary getting better or interpreting better, which does not translate well in practice.

3. Anecdotal, but in my case, when I was doing this myself, it was because I assumed I was hard to understand and not articulate enough to get good results. So I tried to speed up results by using an intermediary. What I learned, though, was that training myself to be articulate and not to doubt myself was easier than getting good results from the LLM interpreters.

Of course, as with anything, ymmv.

[−] santamex 62d ago
I heavily use LLMs for internal communication. I receive a dozen requests per day from colleagues asking me very specific stuff by mail or Teams about processes, setups, master data, my particular experiences with approaches, for contacts within our big corp, or just general knowledge questions and how I would recommend tackling certain problems: setting up conditions in SAP, where to find certain info, or just sending them current setups. They also ask me for strategic advice. I use my personal knowledge base to automatically prepare drafts of the answers based on previous answers to other colleagues. Before the LLM era I could barely help all of them; I got more productive by x-times. I then digest the emails back into my knowledge system. People have no problem receiving obviously LLM-written answers, but because of the particular domain knowledge, they know it can only come from me. Excuse my writing; this did not go through the same system :)

Edit: And now I forgot the most important part. When the knowledge the LLM retrieved is insufficient to answer a colleague's question, or the agent cannot execute the requested task, it just asks me for the missing info or skill, and with me (the human) in the loop, work gets done x-times faster. Eventually it will replace me and all my colleagues one day. Looking forward to doing other stuff then.
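The loop described above can be sketched roughly as follows. This is only an illustration under assumptions: the commenter doesn't share their actual tooling, so every name here (`KnowledgeBase`, `draft_reply`, the sample topics) is hypothetical, and a real setup would involve an actual LLM and retrieval step rather than a plain dict lookup.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """Hypothetical stand-in for the commenter's personal knowledge base."""
    entries: dict = field(default_factory=dict)  # topic -> previously given answer

    def retrieve(self, topic):
        return self.entries.get(topic)

    def ingest(self, topic, answer):
        # "Digest the emails back into my knowledge system."
        self.entries[topic] = answer

def draft_reply(kb, topic, ask_human):
    """Draft an answer from prior knowledge; escalate to the human only
    when the retrieved knowledge is insufficient (human in the loop)."""
    prior = kb.retrieve(topic)
    if prior is None:
        prior = ask_human(topic)   # ask only for the missing info
    kb.ingest(topic, prior)        # the KB remembers the answer for next time
    return f"Draft re {topic}: {prior}"

kb = KnowledgeBase({"pricing-setup": "see the pricing setup doc, section 3"})

# Known topic: drafted straight from the knowledge base, no human needed.
print(draft_reply(kb, "pricing-setup", ask_human=lambda t: "n/a"))

# Unknown topic: the human is asked once; afterwards the KB can answer alone.
print(draft_reply(kb, "master-data-contact", ask_human=lambda t: "ask the MDM team"))
```

The productivity claim in the comment comes from the first branch dominating over time: the more answers are digested back in, the less often the human is interrupted.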

[−] wafflebot 62d ago
"Semantic ablation" is the term I learned for this right here on HN

https://news.ycombinator.com/item?id=47049088

[−] devsda 63d ago
Imagine going to work or a social meeting where everyone looks and sounds the same (or drawn from just a limited set), all with the same perfect tone, body language, and communication style. It sounds like a nightmare, and I would find it hard to relate and get that "perspective" when there is nothing to differentiate a person.

I guess everyone using LLMs for text is similar to that. If everyone uses the same LLM style, it's hard to understand where the other person is coming from. This is not a problem for technical and precise communication, though (the choice of LLMs in that context has other risks).

It is also not strictly an LLM capability problem, because they can mimic or retain the original style and just "polish" with enough hints, but that takes time and investment, and people take the path of least resistance. So we all end up with similar text with typical AI-isms.

There are other reasons to dislike LLM text like padding and effort asymmetry that have been discussed here enough.

[−] stephen_cagle 63d ago
I largely reached the same conclusion recently => https://stephencagle.dev/posts-output/2025-10-14-you-should-...
[−] jay_kyburz 63d ago
There are two ways to write an email. One is to keep it so short and to the point that there are obviously no errors; the other is to waffle on and obfuscate the message with an LLM so that the reader's eyes glaze over... or something like that.
[−] weinzierl 62d ago
I feel the same, and I experience less pressure when writing because, for the first time, it seems being a bit sloppy can be advantageous.

The only thing is that my anecdata contradicts it. My AI-cleaned-up writing seems to fare much better, and this seems to be true across all channels. To be clear, I do not mean AI-generated, just AI-cleaned: mainly spelling, punctuation, and grammar, plus the occasional word-order change.

In the end it's about getting the message across first and "get to know me" second, and proper, clear expression helps a lot with the first.

[−] dexterlagan 62d ago
I used to use LLMs to 'clean up' my own writing, and in the end I agree with the author here: it doesn't really help. The reader will have this impression of 'too perfect' and will have a diminished feeling of value, of honesty. I think we would benefit from a standardized way of signaling text and content that is exclusively human. Say, some sort of logo that says 'genuine', 'untouched by the hand of AI'. I'll be thinking about a way to do this.
[−] proof_by_vibes 61d ago
I really think this gets at the heart of the distinction between language as it pertains to how we connect with others and language that records our observations of the world and its history.

I think the ideal mode for engaging with LLMs should be as interactive encyclopedias. They are excellent at rewarding curiosity when someone passionate about learning sits in the driver seat of this kind of tool. There is something to be said about the benefits of _active_ learning over _passive_ instruction.

Nonetheless, it's impossible to consistently discern fact from fiction when it comes in the form of narrative. As such, hallucinations are the key counterexample for why we can't have a world where LLMs are effective instructors. It's one thing to be exposed to the structure of a narrative, and to read it as grammar, but you don't ever know how to _feel_ about these structures until you become attuned to another human being.

[−] II2II 62d ago

> When you run your message through an LLM, it will inevitably obscure what you actually wanted to say; we choose words for a reason after all - even if they’re sometimes not the right words.

We may choose words for a reason, but sometimes we choose the wrong ones. Sometimes it may be a pair of closely spelled words, and you choose the incorrect version. Sometimes it may be because our understanding of a word's definition is wrong. Either way, it can be problematic when you say one thing while meaning another.

Now, I grew up in the olden days; I reach for a dictionary in such cases. On the other hand, I can certainly understand why people would reach for an LLM. An LLM can examine an entire document at once, it will catch errors that you are not familiar with, and it will catch a much larger range of errors. Is it perfect in doing so? Of course not, but it is better than nothing.

[−] quectophoton 63d ago
I think there was an SMBC comic about this topic, but I don't think I can find it, and the site doesn't exactly make it easy. I don't even remember if it was pre-2020 or not.

It was about how people would get a thing (a robot?) that would repeat whatever they said but in a more fancy way (or something along those lines), to make them sound smarter. Then the people would start depending on these robots to communicate at all, to the point their speech degrades and they start making unintelligible noises that the robots still translate into actual speech.

EDIT: Found it, from 2014: https://smbc-comics.com/index.php?id=3576