Ngl, I’m reading this article after having used AI to build a beautiful front end that is pixel perfect.
Yes, AI can’t see; it only understands numbers. So tell it to use ImageMagick to compare the screenshot to the actual mockup, tell it to get less than 5% difference, and don’t use more than 20% blur. Thank me later.
I built a whole website in like 2 days with this technique.
Everyone seems to have trouble telling ai how to check its work and that’s the real problem imho.
Truly if you took the best dev in the world and had them write 1000 lines of code without stopping to check the result they would also get it wrong. And the machine is only made in a likeness of our image.
PS. You think Christian god was also pissed at how much we lie? :)
It's hard to interpret comments like this because we all have different standards and use cases. So it would really help if you could link to it. Even in a roundabout way if you want to avoid the impression of self-promotion.
I built a few websites; most of them it wouldn’t be wise to place on here. But someone emailed me about this, so I’ll do my best to help. I did build https://hartwork.life for a friend, with a design from OpenAI (pre Google Stitch, which is my current preferred tool).
Here is the line from my Claude Code config to get something like this. Keep in mind I didn’t use the Playwright MCP for this particular implementation, but it is my preferred method currently.
CRITICAL - When implementing a feature based off of an image mockup, use Google Chrome from the Applications folder, set the browser dimensions to the width and height of the mockup, capture a screenshot, and compare that screenshot directly to the mockup with ImageMagick. If the images are less than 90% similar, go back and modify the code so that the website matches the mockup more closely. If a change you make makes the similarity go down, undo it and try something else. Be mindful that the fonts will never be laid out exactly like the mockup; use blur at a max of 10% to see if the images match more closely. If you spend more than 10 cycles screenshotting and comparing, stop and show the user how similar they are, mentioning any problems.
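The compare step in that prompt can be scripted directly. Here is a minimal sketch of what the loop's measurement might look like (assuming ImageMagick 7's `magick` CLI is installed; the file names and the 5% threshold are placeholders, not part of the original setup):

```python
import subprocess

def parse_compare_metric(stderr: str) -> float:
    """`magick compare -metric RMSE` writes e.g. "1234.5 (0.0188)" to stderr;
    the parenthesized value is the RMSE normalized to [0, 1]."""
    return float(stderr.split("(")[1].split(")")[0])

def rmse_diff(img_a: str, img_b: str, blur_sigma: float = 0.0) -> float:
    """Diff two same-sized images, optionally blurring both first so that
    font-rendering differences don't dominate the score."""
    def prep(path: str) -> str:
        if blur_sigma <= 0:
            return path
        out = path + ".blur.png"
        subprocess.run(["magick", path, "-blur", f"0x{blur_sigma}", out], check=True)
        return out

    # `compare` exits non-zero when the images differ, so no check=True here.
    result = subprocess.run(
        ["magick", "compare", "-metric", "RMSE", prep(img_a), prep(img_b), "null:"],
        capture_output=True, text=True,
    )
    return parse_compare_metric(result.stderr)

# e.g. accept the implementation once the blurred diff is under 5%:
# if rmse_diff("screenshot.png", "mockup.png", blur_sigma=3) < 0.05: ...
```

The blur is doing real work here: it smears sub-pixel font rendering differences so the metric reflects layout rather than rasterization.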
The more text there is, the harder it becomes, and it’s why we really need the blur, because fonts are almost always rendered differently.
Thanks. I would say yeah, it's not too bad, but it is also a pretty simple site.
There are some interesting issues that probably relate to your workflow, like the nav links are different sizes, the icons too. And the resolution of some of the images/icons on a MacBook is poor. But I suspect that's because a simple ImageMagick raster diff will fuzz over those kinds of differences.
I wonder if you can make some tweaks or find a better representation than pure raster screenshots to fix this. Can't really deal in vector images because AI sucks at outputting those, and you can't print a web page to SVG.
There was a super niche website framework a while ago that only used SVG. Would be funny if that kind of thing takes off just so AI can do better.
I feel like 2 days to build this is a bit much given the simplicity. I think the point stands.
I will grant you that this is more tasteful than most of the AI sites I see. It’s a good looking little site but nothing here screams, “AI really accelerated this.”
Thank you. Yes took a bit but still way faster than by hand. There are other store pages that are also implemented. This 1 page took me like an hour lol.
1. The main page asks for an email to be notified when the hoodie is available to buy, but I can add the goodie to my shopping cart and proceed to check out
2. The product page mentions a 6’ model but there is no model in the images
3. The check out page says “there are no payment options, please contact us”
Please share what you created! I think people have very different views for what is a good interface, or a tolerable one. I think as a front-end developer and designer I notice a lot of problems most people don't care about.
Sorry to be the one to break it to you, but no, you did not design a website just fine with AI. It’s not even just “good”. It’s average. Painfully average, to the point of it being easily mistaken for a scam.
I completely disagree. Making an average website is the goal of most businesses that are selling an actual product. His website looks modern and welcoming and does not distract or take away from the actual content. This is exactly what most people should aim for. Some actual constructive criticism: some of the icons in the "log mood" example look weird on my phone, with really small emojis overlapping the face emoji.
No one should aim for average, that’s an incredibly defeatist way of looking at it. Besides, design matters. I know HN is frequented mostly by people with very little interest in such topics, but design absolutely matters.
And yes, while the author’s website is perfectly passable, it is by no means “good”. People pick up on that, they might not know they do, but they do. Design wouldn’t be an industry and a school by itself if it didn’t matter and just the average were good enough.
A lot of people don't make websites for a living. If they are a small business and have other things to worry about in terms of actual work, being able to prompt for a clean, professional website frees up their time and means they don't have to use additional funds to hire a developer.
When I shared this I wasn't thinking about the marketing site -- I meant to show the product itself. Given the feedback here I no longer think it's a good representative as-is, especially with the generic SVGs / rounded cards
I can't help but think you and the other commenters reducing this to slop didn't even try the product. I thought it went without saying that I wasn't posting to show a marketing landing page.
Well, it works, but it also looks like every other generic Bootstrap-based website, without even an original palette choice.
Great for a project like this, unusable for any client work
Is this a critique of the marketing page or of the product itself?
I didn't intend for this to be about the marketing page -- what you say is true of just about every marketing page. They're prevalent because they are good at distilling information without overwhelming the user. But I agree I can do better and will work on this more, I really appreciate getting this feedback
Most people look at pool chemistry/maintenance as painfully overwhelming, so for everyone to say this looks boring or mundane is a bit validating. No one has (yet!) said they don't understand the product, its purpose, or its value :)
The palette looking a lot like the basic colors from Bootstrap is more the thing. Which is what models tend to do a lot of the time, because, you know, that's what's been learned.
Also:
- why shadows on some cards and not others?
- why so many different font sizes with no hierarchy?
- the paddings and margins are inconsistent and don't convey visual rhythm. Sometimes there's too much space and sometimes things are too cramped.
- etc.
Is this an OK amateur website that you couldn't have made this quickly otherwise? Yes.
Is that a sufficient value proposition to say that AI has solved frontend? No.
On a side note: would you pay the actual real price of these models to achieve these same results, if they weren't subsidized by delusional billionaires? Up to you to respond.
Though it's somewhat clear from the use of tiles with the icon colours and the choice of border colours and all, I quite like it. I would have expected the colour theme from the navbar to be repeated because that's a more non standard palette. I would do that, maybe use a different tile layout (use a tile shape resembling a pool tile? Or even a rectangle signifying a typical pool shape) and create some vector icons for them using the navbar colour scheme.
Some more serious critique of things I noticed within 30 seconds:
- Text isn't selectable on the page.
- The tooltip in the "day 1" to "day 14" cards gets cut off by the border (I see this mistake ALL the time with AI-generated frontends btw)
- It's sparse and very long. I think the information could be condensed in half the size, and it would improve the presentation. This is personal preference though.
- The playbooks' "mark complete" are not persisted on reload or navigation.
All in all, it's functional and quite decent. I agree with the other people saying it looks generic, but I disagree on it being necessarily a bad thing for this kind of product.
I know nothing about pools so I can't comment on the accuracy of the playbooks. It's nice that there's so many of them, but given the LLM vibe of the text I'm slightly suspicious.
I see that you haven't finished the Automatic Sensor Automation. If you need help with that, contact me, I have experience with embedded product development and I like working on interesting projects :)
Why don’t these LLMs just allow you to pick from a set of standardised templates and then let you customise from there, in terms of both functionality and design?
What you got as output is what I also get as output from LLMs: they suck the soul out of everything. Which is fine in the right context, but it isn’t what we as a species should strive for in design, imo.
Sorry but this website screams AI slop to me. Very sparse, lots of cards and random icons and rounded corners, looks like a few messages in to a Claude code session
I intended to share the product rather than the marketing page. I mean, I didn't intend to share this at all yet because it's not done, but when I saw people asking for examples..
But yeah, marketing site looks like a marketing site. I'm realizing now that a lot of my app's internal design/flair is missing from the marketing page -- so I appreciate your looking/commenting
Hey, one thing I made with this technique is hartwork.life, a simple WordPress store for a friend. I used OpenAI to design it for me, and then used the techniques above to get Claude Code to implement the proper designs.
I am still trying to learn how to wrangle Claude properly, but I have this Claude.md[1] that I used to make the website. See in particular one of the last rules, about using ImageMagick for comparison.
I haven’t touched this website in a bit (waiting on the client), so now I use the Playwright MCP for the screenshots and the browser interactions.
[1] https://github.com/panda01/hartwork-woocommerce-wp-theme/blo...
I started with a boilerplate but AI has been huge at letting me get what I want in terms of frontend building when I was never talented at design or css.
I built https://bridge.ritza.co (demo@example.com username and password if you don't want to sign up) as a trello/linear replacement without looking at a single line of code and it's both good enough for me and doesn't have the obvious AI frontend 'look' as it was copying from the starter.
Highlighted text "no per-seat pricing" is unreadable in dark mode on the home page (dark blue on black). It's surprising for me to see someone use this as an example of decent design because I'm somewhat sure this front page text coloring was never seen/reviewed by a human.
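Misses like "dark blue on black" are mechanically checkable, which makes them a good fit for an agent loop. A sketch of the WCAG 2.x contrast-ratio formula (the example colors below are illustrative, not the actual values from that page) that could be run over rendered foreground/background pairs, flagging body text below the 4.5:1 AA threshold:

```python
def _linear(channel: int) -> float:
    """sRGB 0-255 channel value -> linear-light value, per WCAG 2.x."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, from 1.0 (identical) to 21.0 (black on white)."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Dark blue on near-black fails badly; AA body text needs >= 4.5:
# contrast_ratio((20, 40, 120), (10, 10, 12))  # well under 4.5
```

Pairing this with a computed-style probe of each text node would catch exactly the kind of "never seen by a human" regression described above.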
It's kind of wild in terms of how it will use different random designs, even given a specific style guideline. Even if you tell it to use a given framework like MUI or Mantine, it will stray largely from format.
I don't mind working through a lot of the UI myself, but it's definitely a shortcoming IMO... that said, being able to scaffold boilerplate or testing harnesses for complex UI has been really nice overall. I came up with the following as an image zoom component, where I can separately control the zoom in/out, in under a couple hours... it took longer to set up the CI/CD stuff than the primary component logic: https://tracker1.github.io/image-zoomer-too/
Eh, for many reasons I am not posting it here. It is a passion project for something and would lead to problems if I post it here. That being said I was trying to share the technique.
The reason for the post is that even without the actual website, one should be able to envision the technique and how it may or may not work. Also, if you look above, I recently added links to the Claude.md for another thing I was working on for a friend that also had to solve this problem.
Just want to give people the tools to use AI well, from my own findings.
Software developers have been calling their stuff "beautiful" for years now. It's bullshit. Almost none of it is beautiful. They just mean it looks like whatever is trendy at the time.
I am a backend guy, so forgive my ignorance, but for web based apps I am confused what "pixel perfect" even means. I can build a site to look one way on my computer, it will most likely not look the same way on whatever device you use to access the site.
Feeding the model images from my local computer sounds, given my experience with the tools, like a recipe for having it over-optimize for the wrong end device.
Pixel perfect means it looks EXACTLY like the design comp.
It goes completely out of the window if the browser window isn't the exact size of the mockup.
You might charitably say that pixel perfect means that the implementation intersects with the design comp at some specific dimensions but where are the extra rules coming from, then?
It's an archaic term that conflates the artifact produced by an incomplete design process (an artist's rendering of what the web page might look like) with the actual inputs of the development process (values and constraints).
"Pixel perfect" is about attention to detail and consistency. Margins, padding, or the combination of these inside other containers will stick out when they're not consistent.
Here's an example that I personally encountered: say you have a heading "Text" with a certain left margin. Then there's another heading, except it has a nested button component (which internally comes with some padding). Now the "Text" in the two isn't aligned from section to section, and it is jarring.
You could argue that what you built isn't novel or complex in any way -- (politely) it's basically a clone of hundreds of other SAAS homepages. i.e. it's a perfect use-case for AI.
Perhaps the results would be different if you had a specific novel design or interaction in mind, and you wanted the AI to implement that exactly as you wanted.
It’s funny how there are a bunch of responses to this post all showing off their great AI designs that are literally the same thing with different (each horrible) color palettes.
It's really weird. I don't even care if they used AI as part of their development process. But most AI™-developed stuff is just such insanely soulless crap that I can instantly tell, and I instinctively close the tab.
If your AI-developed software were so great, I couldn't tell it's AI.
Like I cannot wrap my head around how anyone vibe slops and think "No, this is good. I will now proudly show this off."
I'm guessing the third word is "directly“? The D is cut off. And the grammar is wrong, should be "in your spreadsheets" - maybe that is another letter cut off?
Ask it to take control of a browser using something like Playwright and use the UI itself like an end user would and evaluate whether it is a good experience.
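As a sketch of that idea (assuming `pip install playwright` and `playwright install chromium`; the URL and mockup dimensions are placeholders), the agent can drive a headless browser at the mockup's exact dimensions and capture what a user would actually see:

```python
def viewport_for(mockup_w: int, mockup_h: int, dpr: int = 1) -> dict:
    """Viewport in CSS pixels matching a mockup exported at `dpr`x scale."""
    return {"width": mockup_w // dpr, "height": mockup_h // dpr}

def screenshot(url: str, out_path: str, mockup_w: int, mockup_h: int, dpr: int = 1) -> None:
    # Deferred import so this file loads even without Playwright installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()  # headless by default
        page = browser.new_page(viewport=viewport_for(mockup_w, mockup_h, dpr))
        page.goto(url, wait_until="networkidle")
        page.screenshot(path=out_path)
        browser.close()

# e.g. a mockup exported at 2x becomes a 1440x900 viewport:
# screenshot("http://localhost:3000", "shot.png", 2880, 1800, dpr=2)
```

The `dpr` handling matters: comparing a 2x-exported mockup against a 1x screenshot is one of the easy ways these compare loops silently go wrong.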
> Yes ai can’t see, it only understands numbers.
I've also used AI to build frontends that I'm more than satisfied with, and I think it can "see" perfectly fine. The frontier models are multi-modal and pretty good at vision. You can hook up your coding harness to your browser, which will take screenshots of your rendered frontend, and modify the code accordingly.
> If you are really good at something, you'll find AI sucks at everything.
I think it's the other way around. AI amplifies your software development skills. If you suck at software development, AI will follow your prompts and feedback and of course it will output an unmaintainable mess that barely works.
Here we are, listening to people who can barely put together a working website complaining that AI can barely put together a working website.
Before I switched over to a career in tech, I made my living from music - playing live, session work, etc.
Honestly, I'm probably one of the biggest skeptics when it comes to GenAI - but at least for music, the recent models (as in the past year) do not suck. They are actually really, really good for what it is.
I have yet to hear anything truly original produced by those models. They seem to converge to the mean and end up sounding very commercial, very average - but in the sense of average "professional music". Suno can generate music that would have taken real people years of learning and thousands of dollars of equipment to make and produce, and that is pretty much ready for airplay - most listeners will not bat an eye.
Hell, these "AI artists" have been booked to festivals, since people can't hear the difference, and are enjoying the music.
I figure it will go the same way in other fields. The average consumer loses track of what's human made and what's AI made, and frankly won't care. The people "left behind" are the artists, craftspeople, etc. that are frustrated it came to this point.
After years of writing native code for mobile apps I'm using Flutter, and finding that, if you do things step-wise, and check in intermediate results so you can easily roll back failed experiments, agent-assisted coding can accelerate your front end coding substantially, and you can deliver more polished results instead of obviously demo grade visual results that need refinement. And that makes it easier to communicate with your non-coder colleagues.
My first instinct reading an article (especially one about LLMs) is to scroll down to see the structure..
Anyway.
Do people get the impression that LLMs are worse at frontend than not? I'd think it's same with other LLM uses: you benefit from having a good understanding of what you're trying to do; and it's probably decent for making a prototype quickly.
Dunno. It’s really good with Preact + Tailwind. And I have to say that I think most problems can be solved this way and don’t require a special one-of-a-kind UI. In fact, the fewer special UIs I see, the better. I prefer standardized patterns unless they truly don’t fit a domain.
I don’t 100% agree with the “AI can’t see” claim, because in a Ralph-loop against screenshots it basically can (inefficiently). But more importantly, I find it generally curious how bad even frontier models are at spatial thinking. Say “Align these right to left unless it crosses the center” or “Keep this box always visible and collapse X to make space” and all hell breaks loose - maybe it eventually works, but through an extremely slow, costly and tedious process.
Good design is not always logical. Color theory, if followed, results in pretty bad experiences. And interestingly, good design can't always be explained in a natural language.
Main thing is, it's very hard to get AI to have taste, because taste is not always statistically explainable.
The best I've gotten to is to have it use something like ShadCN (or another well-documented package that's part of its training) and make sure that it does two things: only runs the commands to create components, and does not change any stock components or introduce any Tailwind classes for colors and such. Also make sure that it maintains the global CSS.
This doesn't make the design look much better than what it is out of the box, but it doesn't turn it into something terrible. If left unprompted on these things, it ends up mixing fonts it has absolutely no idea look good or not, bringing serif fonts into body text, and mixing and matching colors that would have looked really, really good in 2005. But they just don't work any more.
Everything is nuanced and generalizations help no one. There are absolutely frontend apps where AI straight up crushes it. Sure, these may be less novel apps, but most of what people work on is a CRUD-esque interface.
Creating a CODING.md or FRONTEND.md with rules and expectations for your LLM helps tremendously. You're right that AI is not great at frontend (yet), but it does lift a lot of the load. Like the top commenter says, there's not a great harness for iterative frontend building. But it can get you 80% of the way there if you give it some rules, and it can do the annoying bits so you can concentrate on the 20% that is about design, effective communication and pixel-perfection.
I have found that working with llms for frontend to be better than most of the developers I have worked with. A majority of devs I came across only had enough frontend knowledge to be dangerous and to consistently introduce frontend entropy.
I'm a backend dev and I'm always hearing about how LLMs are dramatically better at frontend because of much more available training data etc. Maybe my perspective isn't as skewed as I've been led to believe and LLMs need close supervision and rework of their output there too.
AI is great at front end. Scroll based animations are the devil and these "boring" designs it defaults to are (more often than not) super intuitive. Sure, some design quirks it'll guess are annoying, but have you seen the web?
Agreed on AI limitations in originality, but the industry sucked at UIs for so long, my expectations are low. I’m just hoping for widespread use of models that take the viewpoints of newbs for UI testing.
One thing that helps with #2 ('It cannot see') -- Try playwright-cli. Your agent can use it to inspect the DOM, see what styles are applied to elements, simulate clicks, etc.
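One way to do that from Python rather than the CLI (a hypothetical sketch; it assumes Playwright is installed, and the selector and property names are placeholders) is to let the agent read computed styles off the live page instead of inferring them from pixels:

```python
def style_probe_js(selector: str, props: list[str]) -> str:
    """Build a JS expression returning the computed values of `props`
    for the first element matching `selector` (or null if absent)."""
    prop_list = ",".join(f'"{p}"' for p in props)
    return (
        f'(() => {{ const el = document.querySelector("{selector}");'
        " if (!el) return null;"
        " const cs = getComputedStyle(el);"
        f" return Object.fromEntries([{prop_list}].map(p => [p, cs.getPropertyValue(p)])); }})()"
    )

def inspect_styles(url: str, selector: str, props: list[str]):
    # Deferred import so this file loads without Playwright installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        result = page.evaluate(style_probe_js(selector, props))
        browser.close()
        return result

# e.g. check whether all nav links really share one size:
# inspect_styles("http://localhost:3000", "nav a", ["font-size", "margin-left"])
```

Exact computed values are much easier for a model to reason about than a raster diff, especially for the "nav links are different sizes" class of bug mentioned upthread.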
This is something I talk about with some friends: how AI does frontend is completely different from how humans do it. Humans can select colors and themes based on their own criteria, while AI only generates what it has learned, as the machine that it is. And that's not bad, but the thing is that people who use AI to develop frontends end up adapting to what the AI generates, whereas a developer adapts to the client. Those are different approaches.
Who says it sucks at front end? Unlike Stackoverflow, AI does a great job of "center a div." I tend to like working from reference documentation which is great for Python and Java but challenging for CSS where you have to navigate roughly 50 documents that relate to each other in complex ways to find answers.
Like, I don't give it 100% responsibility for front end tasks, but working together with AI I feel like I am really in control of CSS in a way I haven't been before. If I am using something like MUI, it also tends to do really well at answering questions and making layouts.
Thing is, I don't treat AI as an army of 20 slaves that will get "shit" done while I sleep, but rather as a coding buddy. I very much anthropomorphize it with lots of "thank you" and "that's great!" and "does this make sense?", "do you have any questions for me?" and "how would you go about that?", and if it makes me a prototype of something, I will ask pointed questions about how it works, ask it to change things, change the code manually a bit to make it my own, and frequently open up a library like MUI in another IDE window and ask Junie "how do I?" and "how does it work when I set prop B?"
It doesn't 10x my speed and I think the main dividend from using it for me is quality, not compressed schedule, because I will use the speed to do more experiments and get to the bottom of things. Another benefit is that it helps me manage my emotional energy, like in the morning it might be hard for me to get started and a few low-effort spikes are great to warm me up.
If you really want to see what I was messing with, email me. I’ll share on non-public forums.
I built this frontend with Sonnet 4.5 last Fall and I’m about to “launch” it
I used only prompts, but those prompts included ChatGPT’s research on Memphis design ;)
Using codex for front end design is like asking the valedictorian mega nerd to paint your portrait. Gemini and Claude are both artists.
Very bad results—as expected from an AI.
Nothing to brag about here.
> doesn't have the obvious AI frontend 'look' as it was copying from the starter.
Check out the other reply and scroll down a bit…
Share it. I used Claude earlier to test out its design capabilities and what I got as output was flat and tasteless.
edit: My point proven by the other examples from this thread. Same format, same "feature cards" etc. https://bridge.ritza.co/ https://poolometer.com/
The landing page looks like every other AI slopped product page out there.
If your AI-developed software were so great, I couldn't tell it's AI.
Like, I cannot wrap my head around how anyone vibe slops and thinks "No, this is good. I will now proudly show this off."
Go back to human devs.
> Yes ai can’t see, it only understands numbers.
I've also used AI to build frontends that I'm more than satisfied with, and I think it can "see" perfectly fine. The frontier models are multi-modal and pretty good at vision. You can hook up your coding harness to your browser which will take screenshots of your rendered frontend and modify the code accordingly.
> Ngl I’m reading this article after having used ai to build a beautiful front end that is pixel perfect.
Was about to say the same thing
> If you are really good at something, you'll find AI sucks at everything.
I think it's the other way around. AI amplifies your software development skills. If you suck at software development, AI will follow your prompts and feedback and of course it will output an unmaintainable mess that barely works.
Here we are, listening to people who can barely put together a working website complaining that AI can barely put together a working website.
Honestly, I'm probably one of the biggest skeptics when it comes to GenAI - but at least for music, the recent models (as in the past year) do not suck. They are actually really, really good for what it is.
I have yet to hear anything truly original produced by those models. They seem to converge to the mean and end up sounding very commercial, very average, but in the sense of average "professional music". Suno can generate music that would have taken real people years of learning and thousands of dollars of equipment to make and produce, and that is pretty much ready for airplay; most listeners will not bat an eye.
Hell, these "AI artists" have been booked at festivals, since people can't hear the difference and are enjoying the music.
I figure it will go the same way in other fields. The average consumer loses track of what's human made and what's AI made, and frankly won't care. The people "left behind" are the artists, craftspeople, etc. that are frustrated it came to this point.
> If you are really good at something, you'll find AI sucks at everything.
Nah, just at that something :-)
Anyway.
Do people get the impression that LLMs are worse at frontend than at other things? I'd think it's the same as with other LLM uses: you benefit from having a good understanding of what you're trying to do, and it's probably decent for making a prototype quickly.
To quote the article:
1. "It trained on ancient garbage", which is the byproduct of massive churn, and this attitude leads to even more churn
2. "It doesn't know WHY we do things", because we don't either... even the paradigms used in frontend dev have needlessly churned
My fix? I switched from React/Next to Vue/Nuxt. The React ecosystem is by far the worst offender.
Good design is not always logical. Color theory, if followed, results in pretty bad experiences. And interestingly, good design can't always be explained in a natural language.
Main thing is, it's very hard to get AI to have taste, because taste is not always statistically explainable.
The best I've gotten to is to have it use something like ShadCN (or another well-documented package that's part of its training data) and make sure that it does two things: only runs the commands to create components, and does not change any stock components or introduce any Tailwind classes for colors and such. Also make sure it maintains the global CSS.
This doesn't make the design look much better than what it is out of the box, but it doesn't turn it into something terrible. If left unprompted on these things, it ends up mixing fonts that it has absolutely no idea look good or not, bringing serif fonts into body text, and mixing and matching colors which would have looked really, really good in 2005 but just don't work any more.
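As a sketch, those constraints might look something like this in a project instructions file (the wording is mine, not the parent commenter's actual config; the `add` command shown is shadcn's real CLI command):

```
# CLAUDE.md (illustrative sketch)
- Use ShadCN for all UI components.
- Only add components via the CLI, e.g. `npx shadcn@latest add button`.
- Never edit the generated stock components under components/ui/.
- Never introduce ad-hoc Tailwind color classes; use the theme tokens
  defined in the global CSS, and keep that global CSS file intact.
```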
> It's notoriously bad at math,
If you are going to criticize LLMs for being out of date, at least make sure your understanding isn't out of date.
I don't give it 100% responsibility for front-end tasks, but working together with AI I feel like I am really in control of CSS in a way I haven't been before. If I am using something like MUI, it also tends to do a really good job of answering questions and making layouts.
Thing is, I don't treat AI as an army of 20 slaves that will get "shit" done while I sleep, but rather as a coding buddy. I very much anthropomorphize it with lots of "thank you" and "that's great!" and "does this make sense?", "do you have any questions for me?" and "how would you go about that?", and if it makes me a prototype of something I will ask pointed questions about how it works, ask it to change things, change the code manually a bit to make it my own, and frequently open up a library like MUI in another IDE window and ask Junie "how do I?" and "how does it work when I set prop B?"
It doesn't 10x my speed and I think the main dividend from using it for me is quality, not compressed schedule, because I will use the speed to do more experiments and get to the bottom of things. Another benefit is that it helps me manage my emotional energy, like in the morning it might be hard for me to get started and a few low-effort spikes are great to warm me up.
but people writing shitty node.js code might beg to differ.
> Try asking it for some scroll-driven animations or custom micro-interactions

Unrelated, but as a long-time front-end dev, FUCK THOSE.