I am genuinely in the "target market" for a tool such as this, but having evaluated one previously I found the quality and self-hosting experience to be pretty bad, and that a proprietary freemium product was still a better experience.
I'm hesitant to even take a look at this project due to the whole "vibe coded in 3 weeks" thing, though. Hearing that says to me that this is not serious or battle-tested and might go unmaintained or such. Do you think these are valid concerns to have?
We're entering an era where delivering software is cheap. Basically any idea can have an MVP implemented by one or two people in just a month or two now. The industry is quickly learning what the next set of bottlenecks is, now that the bottleneck is no longer writing code.
Planning, design, management alignment, finding customers, integrating with other products, waiting for review, etc. Basically all the human stuff that can't be automated away.
Your comment reminds me to add building a support team to the list.
I agree, software (software startups) has always been the golden child of investors because of how cheap it is compared to hardware or any other physical good.
Good software is expensive regardless of the involvement of LLMs because you need someone to take responsibility. Large companies will save a buck because there may be fewer people needed to take said responsibility, but it's probably a marginal saving compared to the overall scheme of things.
It was "easy" in the sense that you could deploy the 7B-model equivalent of a developer, who could eventually get something sort of working, or you could spend a lot of money to actually get results from talented developers: the equivalent of maxing out Opus 4.6 daily.
It's not. Maintenance is easy now. We have at least 200 legacy products in various old languages. Since Claude came into the picture, it has never been easier to work on them. The argument I hear about support being expensive is not true, I think.
The argument is the agents can maintain what the agents build. But someone has to manage the explosion in system complexity.
I just quit my job because there was top down explosion of shipping agentic code. I don't think it's going to work and I don't want my job to be maintaining someone else's 50x code output.
I agree. It's not like this project is disrupting an overpriced product/SaaS.
E.g. Buffer charges around $50 per year per social media account, which gives you an unlimited number of collaborating user accounts. And their single user plans are even cheaper.
I don't see how self-hosting would be a worthy investment of your time/effort in this case, unless you are in some grossly mismanaged organization where you have several devops engineers paid for doing literally nothing.
You are right. My memory failed me there. I should have done a quick lookup for the pricing.
It's $120/year/account for multi-user setup, and $60/year/account for single-user.
Which is still dirt cheap if you use social media professionally. E.g. what would $360 buy you if you try to do self-hosting? Maybe a day of work from a devops engineer to get this deployed for you?
i think we need to encode (or refine) what we mean by "vibe code." my original impression was that it described the process whereby someone with an idea but lacking development/engineering skills leveraged an LLM via an agent to create the mechanics to bring their idea to fruition. anymore it seems like anything with a hint of AI is "vibe coded."
ironically, i didn't read the article because i come to the comments now to see if it's been identified as AI slop, so i don't know which area this falls into
Some people (me included) are trying to separate Vibe Coding (no idea about code, just give me the result) from Vibe Engineering (I know how to do this, but can't be arsed to write. I also know what the result should look like)
Cool.
Thus I am an amateur AI/LLM-assisted coder. I don't code for a living; I know the principles (or I think so) but don't remember syntax (too old to learn new tricks :)
The era of sharing some small programs that you made with others to benefit from is over imo.
You can just vibe code it yourself. If your requirements are narrower (eg. you only need support for 3 networks and not 12), you will end up with something that takes less time to develop (possibly less than a day), it will have a smaller surface for problems, and it will be much better tailored to your specific needs. If you pay attention to what the LLM is doing it will also be easier to maintain or extend further.
The surface for security vulnerabilities also gets narrower, since you "only" have to trust the LLM (which is still a huge ask, but still better than LLM + 1 random person).
> The era of sharing some small programs that you made with others to benefit from is over imo.
> You can just vibe code it yourself.
+1.
The password manager I use full time now is “Kenpass”, which has exactly one user: me. I have it on iOS/macOS/Linux, browser extensions, CLI and (native, no electron) UI for each, syncs with my homelab server over my wireguard tunnel, and it covers all my use cases. Took me maybe a week (a few hours total, spread around.) I feel no reason to share it with anyone, it does exactly what I need and I only need to fix the bugs I find, for myself.
We’re really living in crazy times.
> If you pay attention to what the LLM is doing it will also be easier to maintain or extend further
That's another nice part: I actually really enjoy feng-shui refactoring code to fit my tastes, and I've given the LLM's code a bunch of refactoring passes essentially "just for fun". I understand the codebase enough that sometimes I implement features myself instead of having the LLM do it, if I'm in the mood to. But I'd probably never have the time or energy to start such a project from scratch... having the initial MVP done in essentially one shot was a huge boost.
> I have it on iOS/macOS/Linux, browser extensions, CLI and (native, no electron) UI for each, syncs with my homelab server over my wireguard tunnel, and it covers all my use cases. Took me maybe a week (a few hours total, spread around.)
A few hours including iOS? Don't you need to go through lots of hoops even if it's only for personal use? Isn't it time-limited or something?
It does require a developer account at $100 a year, yes. But that's it: Once you have that, the app is not time-limited or anything. I can just install it to my device (wirelessly) via Xcode and I'm done.
It's probably only worth the $100/year if there are other apps you want to make for yourself though. I have other ideas for apps for my family, so I figured it's worth the cost. It sucks that I have to pay at all though, I admit.
That's interesting - I thought apps that didn't go through the App Store but were in "developer mode" were time-limited, but maybe I confused it with "share with family" rather than pure personal solo use.
Sounds like you wrote it in Swift(?), did it also do a good job at the autofill code? That's what I'd expect it to potentially struggle at, as it could be a rare and particularly native feature. But then the APIs should be well standardized for iOS.
The autofill code is actually Safari's, not the password manager's. The password manager basically just needs to implement ASCredentialProviderViewController (https://developer.apple.com/documentation/authenticationserv...) and include the "AutoFill Credential Provider" entitlement, and Safari will ask if you want to sign in with Kenpass for websites and apps.
For firefox (desktop) it's a bit harder, you have to actually implement login box detection. It works like half the time, but I don't really mind much, because it's not a huge deal to just use the clipboard. Even when using 1Password I would so often find that the password autofill was so broken (or would constantly offer me to save passwords I don't want to, or for things that aren't even login boxes) that I actually actively hated it. When creating logins, I much prefer taking an explicit step to activate my password manager to do so, and when prompting for logging in, it's not a huge deal to simply copy/paste when the simple detection doesn't work.
Nah, my whole Github is open. I've always "shared" what I do.
I just don't go run around different social media sites advertising my stuff. I built it for myself, if someone else likes it, they can use it.
If they want a feature they are free to suggest it, but I'm also free to ignore their wishes completely if I don't need the feature. It's an open license, they can fork it.
> The surface for security vulnerabilities also gets narrower, since you "only" have to trust the LLM (which is still a huge ask, but still better than LLM + 1 random person).
On top of which, any such vulnerabilities will be mostly low value: n different implementations, each with their own idiosyncrasies, 90% of them serving one person.
My gut is -- of course these concerns are valid, but, especially with "newfangled" software projects like this, I'd genuinely be surprised to see major quality differences between "human" and "vibe-coded."
I think what I mean by "newfangled" is; this isn't a low-level C memory managed bit flipping thing; this is the sort of thing that's already built on top of layer of layer of the cruft that the web already is, for better or worse.
I think the same way, I'd love a social media management tool, all of the ones I found were insanely expensive or not usable / had horrible usability issues. Pitching a product by telling me it's built quickly with AI does the opposite of convincing me to try it, even though I'm in the market for the solution offered.
I think our industry really needs to get on top of terms / names, and fast. To me, "vibe coded" means "100% (or close to it) of this was written by an LLM and I do not have the slightest clue about technology or hacking or anything related to this domain." if this is the case here (or anywhere else) I am not touching it with a 10-foot pole (even with the author's honesty). something like "LLM assisted" would be a whole other thing.
To me "vibe coding" is like Karpathy wrote in that tweet: You completely surrender to the model, accept everything, never look at the changes in code. Only feed prompts and check the resulting product.
But, of course, that's not how most programmers use it - at least I'd like to think. It's one thing when people with zero programming knowledge vibe code something in that fashion, but for serious (read: big) products I'd like to think you're still forced to do some bookkeeping.
I've done some personal CRUD stuff like that. It worked well - I fixed bugs as I found them and only hit one thing it couldn't do (I find Codex weak at front-end stuff generally).
Would never publish it though, or approach paid work like that.
You can vibe code minor fixes to some annoyances, including having the clanker manage the whole fork/pull-request flow if you want to contribute back, for $20/mo on Codex or Claude (though $20 is basically the entry tier there, and Codex is nearly so since last week, but it should be good enough... for now).
I was hoping this was the opposite of a creators platform - a social media users platform. Download all social media to one place (stories/posts) where you can view on your own schedule.
Same. I was hoping for this. As much as social media frustrates me, the content can be great at times. I want an aggregation tool where I have strict control on the output. Give me the content minus the addictive never ending scroll with inflammatory posts. Basically, I want a benevolent curation of media on my terms.
Big upvote for this. I want an "agent" (overused term) to scroll through all the user-hostile feeds and present what I actually want to see in a calm, ad-free, manner.
Tools like instaloader are a start, but screen scraping might be the best bet to avoid detection/banning.
Platforms really don't want you to build that. But it depends on what platforms you're talking about; open ones like bsky and Mastodon could allow for something like that.
Nice! A bot-built tool for posting content mostly generated by other bots and engaged with by bots.
I don't mean to belittle the cool tool you made, I'm just grumpy about the loss of what the social network could have been and what we got when it morphed into social media.
I wanted to test how far AI coding tools could take a production project. Not a prototype. A social media management platform with 12 first-party API integrations, multi-tenant auth, encrypted credential storage, background job processing, approval workflows, and a unified inbox. The scope would normally keep a solo developer busy for the better part of a year. I shipped it in 3 weeks.
I broke the specs into tasks that could run in parallel across multiple agents versus tasks with dependencies that had to merge first. This planning step was the whole game. Without it, the agents produce a mess.
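To make that planning step concrete, here's a sketch (not my actual tooling, and the task names are purely illustrative) of how you can compute which tasks are safe to hand to agents in parallel, using Python's stdlib `graphlib`:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to the tasks it depends on.
deps = {
    "models": set(),
    "auth": {"models"},
    "provider_facebook": {"auth"},
    "provider_linkedin": {"auth"},
    "inbox_ui": {"provider_facebook", "provider_linkedin"},
}

def parallel_batches(deps):
    """Yield groups of tasks whose dependencies are all done,
    so each group can be dispatched to agents concurrently."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    while ts.is_active():
        ready = list(ts.get_ready())  # no unmet dependencies
        yield ready
        ts.done(*ready)

for batch in parallel_batches(deps):
    print(sorted(batch))
# ['models']
# ['auth']
# ['provider_facebook', 'provider_linkedin']
# ['inbox_ui']
```

Tasks in the same batch can run on separate agents; batches with dependencies merge in order.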
I used Opus 4.6 (Claude Code) for planning and building the first pass of backend and UI. Opus holds large context better and makes architectural decisions across files more reliably. Then I used Codex 5.3 to challenge every implementation, surface security issues, and catch bugs. Token spend was roughly even between the two.
Where AI coding worked well: Django models, views, serializers, standard CRUD. Provider modules for well-documented APIs like Facebook and LinkedIn. Tailwind layouts and HTMX interactions. Test generation. Cross-file refactoring, where Opus was particularly good at cascading changes across models, views, and templates when I restructured the permission system.
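As a rough illustration of why provider modules generated so well (hypothetical interface, not the project's actual code): each platform is one small class behind a shared contract, so the model can fill in the body from the platform's docs.

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Hypothetical base class: one subclass per social platform."""
    name: str

    @abstractmethod
    def publish(self, text: str) -> str:
        """Post `text` via the platform's first-party API; return a post id."""

class FakeProvider(Provider):
    """Stand-in for tests; a real module would call the platform API."""
    name = "fake"

    def publish(self, text: str) -> str:
        return f"{self.name}:{len(text)}"
```

With well-documented APIs like Facebook's and LinkedIn's, the generated subclasses needed little correction.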
Where it fell apart: TikTok's Content Posting API has poor docs and an unusual two-step upload flow. Both tools generated wrong code confidently, over and over. Multi-tenant permission logic produced code that worked for a single workspace but leaked data across tenants in multi-workspace setups. These bugs passed tests, which is what made them dangerous. OAuth edge cases like token refresh, revoked permissions, and platform-specific error codes all needed manual work. Happy path was fine, defensive code was not. Background task orchestration (retry logic, rate-limit backoff, error handling) also required writing by hand.
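The kind of backoff logic that had to be written by hand looks roughly like this (a hand-rolled sketch, not the project's actual code; it assumes an HTTP-style response object with `status_code` and `headers`, and real platforms each have their own error codes and Retry-After semantics):

```python
import random
import time

RETRYABLE = {429, 500, 502, 503, 504}

def post_with_backoff(send, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `send()` until it succeeds, backing off on rate limits and 5xx."""
    for attempt in range(1, max_attempts + 1):
        resp = send()
        if resp.status_code < 400:
            return resp
        if resp.status_code not in RETRYABLE or attempt == max_attempts:
            raise RuntimeError(f"gave up after {attempt} attempts: {resp.status_code}")
        # Prefer the platform's Retry-After hint; otherwise exponential
        # backoff with jitter so parallel workers don't stampede the API.
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * 2 ** (attempt - 1)
        sleep(delay + random.uniform(0, 0.5))
```

The happy path of this is exactly what the models wrote fine; it was the retryable-vs-fatal classification per platform that needed manual work.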
One thing I underestimated: Without dedicated UI designs, getting a consistent UX was brutal. All the functionality was there, but screens were unintuitive and some flows weren't reachable through the UI at all. 80% of features worked in 20% of the time. The remaining 80% went to polish and making the experience actually usable.
The project is open source under AGPL-3.0. 12 platform integrations, all first-party APIs. Django 5.x + HTMX + Alpine.js + Tailwind CSS 4 + PostgreSQL. No Redis. Docker Compose deploy, 4 containers.
Ask me anything about the spec-driven approach, platform API quirks, or how I split work between the two models.
Really impressive to ship something this complete in 3 weeks. The multi-platform scheduling problem is genuinely hard, especially around OAuth token refresh and rate-limit handling across networks like LinkedIn and Instagram that have particularly strict policies.
Curious how you handled the queue and retry logic when a scheduled post fails partway through on one platform but not the others. Did you end up building a per-platform state machine or something simpler?
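(For concreteness, by "per-platform state machine" I mean something like tracking each platform's delivery independently on one logical post; hypothetical sketch, not from the project:)

```python
from dataclasses import dataclass, field
from enum import Enum

class DeliveryState(Enum):
    PENDING = "pending"
    SENT = "sent"
    FAILED = "failed"

@dataclass
class ScheduledPost:
    """One logical post fanned out to several platforms, each tracked separately."""
    body: str
    states: dict = field(default_factory=dict)  # platform -> DeliveryState

    def mark(self, platform, state):
        self.states[platform] = state

    def needs_retry(self):
        """Platforms that failed while others succeeded: retry only these."""
        return [p for p, s in self.states.items() if s is DeliveryState.FAILED]
```

That way a LinkedIn failure doesn't cause a duplicate post on the platforms that already went through.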
Spec-first was the unlock for me too. The agent drifts badly without it. Your point about defensive code is spot on: the happy path works fine, but retry logic, edge cases, and error handling all needed manual work in my experience too.
The 80/20 observation is real. Functionality is fast, making it actually trustworthy takes most of the time.
I also added tons of unit tests; they helped the AI agent detect regressions while producing code.
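Even trivial assertions pay off here; a sketch with a hypothetical helper (illustrative names, not anyone's actual code):

```python
def normalize_handle(handle: str) -> str:
    """Hypothetical helper: strip a leading '@' and lowercase a handle."""
    return handle.lstrip("@").lower()

# Regression tests like these give the agent a fast signal when a
# "refactor" quietly changes behavior.
def test_normalize_handle():
    assert normalize_handle("@Alice") == "alice"
    assert normalize_handle("bob") == "bob"
```

The agent can run the suite after every change instead of waiting for a human to notice the breakage.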
Lots of questions already about the project, so I'll ask about your vibe coding experience. As the project has grown, have you found it harder to maintain with AI? Also, did you use any structure while working with Claude/Codex, like first planning the tasks in separate documents, or did you just work with prompts and the context generated in the sessions?
AI generated README, AI generated code, and the creator can't even be bothered to write comments without generating them with AI (which btw goes against the rules of this site). How is this being upvoted - did the creator use his own tool to bot this submission?
Consider having an account for each common social media platform, then multiply that by every project; the cost grows quickly.
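To put rough numbers on that (the platform and project counts are assumptions; the $120/year multi-user price is from elsewhere in the thread):

```python
platforms = 5            # assumption: accounts on 5 common platforms
projects = 4             # assumption: 4 projects, each with its own accounts
price_per_account = 120  # $/year/account, multi-user plan

annual_cost = platforms * projects * price_per_account
print(annual_cost)  # -> 2400
```

At that point self-hosting starts to look less absurd than it does for a single account.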
The last time I "vibe coded" something (internal), I liked it because I couldn't find an external solution.
I admire coders who can finish their code into a deliverable, usable piece.
The issue here is software abundance: people will start to hesitate because of the absurd pile of options they have to evaluate.
It reminds me of the statistics on global ice cream sales. People want certainty, so they choose chocolate or vanilla :)
Therefore many good software projects will have a problem finding users.
Now, however, many people just use it to mean any AI-assisted coding.
One person says "vibe coding" to mean a throwaway script to scrape some page. Others use it to mean a code-reviewed app built by a team using Claude.
It is so broad as to be meaningless at this point.
Very cool and inspiring stuff!
> kenpass
> username "ninkendo"
Absolutely checks out.
Please share more ken-related software names you use lol
It can be done, though. And I say that as a developer of 20 years.
Is there anything like that out there?
made me think of Pidgin, the chat client that could talk to any chat server
I have a few ideas of my own; perhaps yours is something that could be created.
Before writing any code, I spent time on detailed specs, an architecture doc, and a style guide. All public: https://github.com/brightbeanxyz/brightbean-studio/tree/main...
Technically that forces anybody who would modify it/use it as a baseline for their own product to share the source code.
Is this a purposeful decision?