Quillx is an open standard for disclosing AI involvement in software projects (github.com)

by qainsights 36 comments 23 points

[−] Kiro 61d ago

> Crafted like poetry

I find this idea that humans all of a sudden write beautiful code very funny. Most code produced by hand is dirty and filled with ugly hacks. The argument might work against AI art but falls flat for programming.

[−] crimsonnoodle58 62d ago
I would think the terms 'vibe coded', 'vibed', '100% vibes', etc. would be far more appropriate and well known than 'lorem ipsum' when it comes to generating code without reviewing the output.

If I saw that badge on someone's GitHub, I would think it had something to do with lorem ipsum text generation rather than anything to do with AI.

[−] jannniii 61d ago
Nice idea, but the labels are a bit too opinionated for me.

Literally all my code has been "ghostwritten" for the past 18 months. That does not sound like something enterprise customers would want to hear, or spend time trying to understand.

[−] wewewedxfgdf 62d ago
We should assume projects have AI/LLM development assistance unless stated otherwise.

You may have noticed the absolutely vast array of AI development tools, assistants, IDEs, and integrations - that alone is a reasonable indicator that developers are actually doing AI/LLM development.

[−] rzmmm 61d ago
In academia it has become widespread practice to simply include a sentence in articles describing how AI was used. It's simple and it works well.
[−] varun_ch 62d ago
A little ironic that the README, SPEC.md and the poster's comment here all smell of LLM writing!
[−] pointlessone 61d ago
How can you expect adoption of this scale if the AI side is so obviously negative? You might as well label the full AI option “I drown kittens” and go write a long post about how AI users won’t engage with your AI usage disclosure initiative in good faith.

To have any chance of adoption you have to be at least a little strategic. You may think AI is pure evil, but you have to make some concessions to AI users to incentivise participation. Try making it sound neutral throughout the spectrum, and use a neutral colour scheme. Yes, you're no longer telegraphing your position on AI so obviously, but you might get some useful information out of others.

[−] big-chungus4 61d ago
The labels are not transparent: if you see this badge on a GitHub README, you won't be able to tell that it's about AI usage. I also don't find the labels particularly useful. When you propose an actual standard, you have to sit down and design it carefully and thoroughly, which I don't believe happened here. So it looks cool, but I don't think it's super useful.
[−] hedora 62d ago
(1) Why?

(2) The code I write with AI doesn’t fit on the scale.

[−] peteforde 62d ago
Given the reality that there are a lot of people who [fairly or unfairly] judge anything that uses "AI" in a decisively negative way, what possible advantage is there in giving people a reason to dismiss your project without evaluating it on its own merits?
[−] qainsights 62d ago
AIx is an open standard for disclosing AI involvement in software projects - expressed through the language of authorship. Not a judgment. Just transparency.
[−] nunobrito 61d ago
The color code is the other way around.

Red should mean manual human review without automated tools or AI.

Green should mean proper AI review, with tests verifying the expected inputs/outputs.

[−] yjftsjthsd-h 61d ago

> Every line deliberate. Crafted like poetry. Human-authored entirely.

This is perhaps overly generous to pure-human authorship. These days, when I write code I like to think I know what it does. I still wouldn't call most of it "crafted like poetry". When I was just learning though, I wrote plenty of code 100% without AI (in fairness, it didn't exist) that I had little understanding of, and it was only "deliberate" in that I deliberately cajoled it into passing the tests.

Or put differently: don't conflate human authorship with quality; people can write garbage without needing AI help.

[−] easygenes 62d ago
This is very similar to a project I created https://github.com/Entrpi/autonomy-golf and have been using as a gamified development process on active projects.

The key insight was to not just handwave or guess at how much is automated, but to make evaluation and review part of the continuous development loop. I first implemented it in https://github.com/Entrpi/autoresearch-everywhere, where I used it to deliberately automate more, in the spirit of Karpathy's upstream work (and to very good effect: I have some of the best autoresearch results anywhere, and the platform is far more robust than when it started).

[−] pbronez 62d ago
Neat idea. I like the five-point scale.