That was an informative post but Jesus Christ on a bicycle, rein in the LLM a bit. The whole thing was borderline painful to read, with so many "GPTisms" I almost bailed out a couple of times. If you're gonna use this stuff to write for you, at least *try* to make it match a style of your own.
To add a tip on how to make it match your own style: you can get decently far by pointing it to a page or so of your own writing and telling it to review the post section by section, editing to match the tone and style of the example. It's not perfect by any means, but it tends to edit out the kind of language you're unlikely to use. So really, to make it sound less LLM-like, almost any writing sample from a human author works.
I’d much rather read someone’s imperfect writing than the soulless regression-to-the-mean that LLMs produce. If you’re not a native speaker or don’t have confidence in your writing, I’d urge you to first ask for an edit by another human, but if that’s not an option, to be extremely firm in your LLM prompting to just have it fix issues of grammar, spelling, etc.
Almost nobody recognises well-written AI text. I've seen plenty of AI-written text pass right by people who are sure they can always tell. It takes very little, because the vast majority of AI writing you spot comes from people doing nothing to clean up the style.
I find it quite funny how this got downvoted. My statement is based on concrete knowledge of a project that tested this, and demonstrated quite conclusively that most people consistently fail to detect AI written text that's gone through even very basic measures to seem more human.
I did bail out because of this, despite being pretty interested in the content. I love reading, but I cannot stand LLM “writing” output, and few things are important enough for me to force myself through the misery of ingesting ChatGPT “prose.” I only made it to the second section of this one.
100% agreed. Maybe this inner reaction will fade over years of exposure to the GPT writing style, or maybe LLMs will get "smarter" in this regard and be able to use different styles even by default. But I had the exact same feelings as you reading this piece.
It's really simple to fix by asking an LLM to apply a style from a sample, so my guess is a lot of products will build in style selection, and some providers will add more aggressive rules to their system prompts over time.
I would recommend using guard rails to guide tone, phrasing, etc. This helps prevent whole categories of bad phrasing. It also helps if you provide good inputs for what you actually want to write about and don't rely too much on it just filling empty space with word soup. And iterate on both the guard rails and the text.
It’s not even just about the style. It’s a matter of respect for your readers. If you can’t be bothered to take the time to write it, why on earth should I care enough to take the time to read it?
Yes, but you need a style first :) In the TFA author's case, though, he actually had a few other blog posts that don't feel LLM-generated to use as examples, I agree.
But for plenty of applications it doesn't need to be your personal style. It only needs to be your personal style if you want to present it as your own writing. Otherwise it just matters that it's well written. A catalogue of styles would work well for lots of uses.
I mean, there are lots of people here who write well enough that giving it some style samples and telling it to adapt the text to "this style: [insert post]" wouldn't be the worst idea.
If I recall correctly, the Fossil SCM uses SQLite under the covers for a lot of its stuff.
Obviously that's not surprising considering its creator, but hearing that was kind of the first time I had ever considered that you could translate something like Git semantics to a relational database.
I haven't played with Pgit...though I kind of think that I should now.
The SQLite project actually benefited from this dogfooding. Interestingly, recursive CTEs [0] were added to SQLite because of a desire to trace commit history [1].
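To make the connection concrete, here is a minimal sketch of how a recursive CTE can walk a commit graph back to the root. The `commits` table and its data are invented for illustration; Fossil's real schema is different.

```python
import sqlite3

# Toy commit graph: each row links a commit to its parent.
# This schema is hypothetical, purely to demonstrate the CTE.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE commits (id TEXT PRIMARY KEY, parent TEXT);
INSERT INTO commits VALUES
  ('a1', NULL),   -- root commit
  ('b2', 'a1'),
  ('c3', 'b2'),
  ('d4', 'c3');   -- tip
""")

# Recursive CTE: start at the tip, then repeatedly join each
# commit to its parent until the root (parent NULL) is reached.
rows = conn.execute("""
WITH RECURSIVE ancestry(id, parent) AS (
  SELECT id, parent FROM commits WHERE id = 'd4'
  UNION ALL
  SELECT c.id, c.parent
  FROM commits c JOIN ancestry a ON c.id = a.parent
)
SELECT id FROM ancestry;
""").fetchall()

ancestry = [r[0] for r in rows]
print(ancestry)  # ['d4', 'c3', 'b2', 'a1']
```

The recursion terminates naturally: the root's `parent` is NULL, so the final join step produces no rows. Without `WITH RECURSIVE`, a query like this would need one join per generation, which is why tracing arbitrary-depth history motivated the feature.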
When you import a repository into Phabricator, it parses everything into a MySQL database. That's how it manages to support multiple version control systems seamlessly as well as providing a more straightforward path to implementing all of the web-based user interface around repo history.
SVN in SVN for sure; it's a well-made product. The market just didn't like its architecture/UX, which dictates what features are available.
CVS is not much different from copying files around, so I would not be surprised if they copied files around to mimic what CVS does. CVS revolutionized how we think of code versioning, so its main contribution is to the processes, not the architecture/features.
> only a handful of VCS besides git have ever managed a full import of the kernel's history. Fossil (SQLite-based, by the SQLite team) never did.
I find this hard to believe. I searched the Fossil forums and found no mention of such an attempt (and failure). Unfortunately, I don't have a computer handy to verify or disprove. Is there any evidence for this claim?
I hate to blow our own horn, but I'm gonna...if you are interested in seeing this kind of kernel-development data mining, fully human-written, LWN posts it every development cycle. The 6.17 version (https://lwn.net/Articles/1038358/) included the buggiest commit and much surrounding material. See our kernel index (https://lwn.net/Kernel/Index/#Releases) for information on every kernel release since 2.6.20.
[0] https://sqlite.org/lang_with.html#recursive_query_examples
[1] https://fossil-scm.org/forum/forumpost/5631123d66d96486 - My memory was roughly correct; the title of the discussion is 'Is it possible to see the entire history of a renamed file?'
A Fossil repository file is an SQLite file, yes.
Or see LWN on Monday for the 7.0 version :)