Instant 1.0, a backend for AI-coded apps (instantdb.com)

by stopachka 124 comments 215 points
[−] storus 36d ago
An honest question: why would we need any frameworks at all for vibe-coded apps? I can just tell the coding agent to use pure HTML5/vanilla JS/CSS on the frontend and pure whatever on the backend, and it would do it. No need for hundreds or thousands of dependencies. For deployment I can ask the coding agent to do the same.
[−] debazel 35d ago
My experience with actually trying this is that current LLMs benefit greatly from having a framework to build on.

More code in the context window doesn't just increase the cost, it also degrades the overall performance of the LLM. It will start making more mistakes, cause more bugs, add more unnecessary abstractions, and write less efficient code overall.

You'll end up having to spend a significant amount of time guiding the AI to write a good framework to build on top of, and at that point you would have been better off picking an existing framework that was included in the training set.

Maybe future LLMs will do better here, but I wouldn't recommend doing this for anything larger than a landing page with current models.

[−] jauntywundrkind 34d ago
Yes. Intent and patterns will be much clearer for future sessions. If you have a WORN situation (write once, read never; modify never, including by the AI), perhaps you can skip layering and just big-ball-of-mud your system. I doubt many people want that.
[−] fny 35d ago
Why not code in assembly?

I kid but any reason you can think of applies to app development too.

1. Good abstractions decrease verbosity and improve comprehension

2. Raw HTML/CSS/JS are out of distribution just like assembly (no one builds apps like this)

3. Humans need to understand and audit it

4. You'll waste time and tokens reinventing wheels

This intuitively makes sense. LLMs mimic human behavior and thought, so for all the reasons you'd get lost in a pile of web spaghetti or x86, so would an LLM.

[−] monooso 35d ago

> Raw HTML/CSS/JS are out of distribution just like assembly (no one builds apps like this)

Plenty of people build apps with vanilla CSS and JS (and HTML is just HTML). It's a really nice way to work.

Here are a few links to get you started.

https://dev.37signals.com/modern-css-patterns-and-techniques...

https://simonwillison.net/2025/May/31/no-build/

https://bradfrost.com/blog/post/raw-dogging-websites/

[−] jauntywundrkind 34d ago
It's also a good opportunity to use the platform, to get layering naturally.

> I keep seeing Lit gain adoption in gen-UI.

> Lit's plain JS/TS, no-build-required approach is easy for LLMs to generate and for harnesses to integrate.

> If a gen-UI pre-viewer can load standard JS modules, it can load Lit components. No custom Angular, Vue, or Svelte toolchain integration needed.

- Lit author, Justin Fagnani, https://bsky.app/profile/justinfagnani.com/post/3mj376ogels2...

[−] solumos 35d ago
Plenty of people, but how many companies?

Plenty of people bake bread from scratch without a mixer, but few (if any) bakeries do.

[−] monooso 34d ago
Companies are made up of people.
[−] stopachka 36d ago
A few reasons:

1. Unlimited projects: when you spin up traditional backends, you usually use VMs. It's expensive to start many of them. With Instant, you create unlimited projects

2. User experience: traditional CRUD apps work, but they don't feel delightful. If you want to support features like multiplayer, offline mode, or optimistic updates, you'll have to write a lot more custom infra. Instant gives you these out of the box, and agents find it easier to write than CRUD code

3. Richer features: sometimes you'll want to add more than just a backend. For example, maybe you want to store files, or share cursors, or stream tokens across machines. These often need more bespoke systems (S3, Redis, etc). Instant already comes with these out of the box, and the agents know how to use them.

There are a few demo sections in the post that show this off. For example, you can click a button and you'll get a backend, without needing to sign up. And in about 25 lines of code, you'll make a real-time todo app.
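To make "optimistic updates" concrete, here is a toy sketch of the pattern a sync engine handles for you. This is purely illustrative, not Instant's actual API; `saveToServer` is a hypothetical stand-in for any persistence call.

```javascript
// Toy sketch: optimistic updates by hand (what a sync engine does for you).
// Apply the change locally first, persist in the background, roll back on failure.
// `saveToServer` is a hypothetical stand-in, not Instant's API.
function makeStore(saveToServer) {
  const todos = [];
  return {
    todos,
    async addTodo(todo) {
      todos.push(todo); // optimistic: the UI updates immediately
      try {
        await saveToServer(todo); // then persist in the background
      } catch (err) {
        todos.splice(todos.indexOf(todo), 1); // roll back on failure
        throw err;
      }
    },
  };
}
```

Writing this once is easy; writing it for every mutation in an app, plus offline queueing and retries, is the part that gets tedious.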

[−] boxedemp 36d ago

>multiplayer

How does it compare to photon networking? I've been using photon and webrtc mostly. I haven't had any issues, but I'm always interested in finding better solutions!

[−] stopachka 36d ago
Photon looks interesting! I am not too familiar with it, but from what I understand Photon and WebRTC are for communicating messages between clients. Those messages can be very fast, because they aren't blocked by writes to disk. Instant has two similar services, Presence & Streams. The primary sync engine is more for storing relational data.
[−] boxedemp 28d ago
So not for real time applications?

Still very interesting, thank you for the information!

[−] ehnto 35d ago
Echoing other thoughts here, but also: it's like getting your first 10,000+ lines of output code for zero token cost, with no prompting effort, no back and forth, no testing, etc.

Just jump straight to business logic, scaffolding is done for you already.

I think in your question as well is an idea that apps from now on will be bespoke, small and unique entities but the truth is we are still going to be mostly solving already solved problems, and enterprise software will still require the same massive codebases as before.

The real win of frameworks is that they keep your workers, AI or human, constrained to an existing, known set of tools and patterns. That still matters in long-term AI-powered projects too. They also provide a battle-hardened collection of solutions that cover lots of edge cases you would never think to put in your prompts.

[−] calvinmorrison 36d ago
same reasons human do. context and abstraction.
[−] storus 36d ago
I can get rid of irrelevant abstraction bloat that way and make code perfectly tailored to what is needed. This was traditionally expensive, which led to abstractions being packaged into frameworks.
[−] bdcravens 35d ago
It probably has more to do with the body of knowledge it draws on than with suitability. I assume it burns more tokens to stay vanilla, as more effort is involved in implementing patterns that are more readily available "off the shelf".
[−] Scholmo 35d ago
It's built-in guard rails.

It also potentially reduces the context the model has to hold.

It's also a very good way to scale. Let's build a very small and well-tested library for x; the LLM uses x for case y. It doesn't have to worry about x, its contents, or its security.

[−] storus 35d ago
Security is questionable - if there is a framework hack available or supply-chain attack, you are in danger. However, if you have everything coded from scratch, then good luck to any attacker wasting time on your custom code instead of just exploiting something at scale running on computers everywhere.
[−] gervwyk 35d ago
simple. you reduce the surface that you need to manage and shift that responsibility to the framework.

choose a good one and you can save yourself and your llms thousands of decisions and future maintenance overhead.

frameworks exist because they scale.

[−] IncreasePosts 36d ago
You don't necessarily, but each token costs money for the AI to spit out. And probably more money when that output is used as input later. Delegating to a library makes sense financially.
[−] storus 36d ago
With local inference on pretty decent local models we have nowadays (Qwen-3.5 and better) it's not much of a concern anymore.
[−] IncreasePosts 35d ago
Sure it is - there's still an opportunity cost of spending tokens(time/energy) creating a library from scratch vs using a preexisting well understood API.
[−] walthamstow 35d ago
Sure, if you've got a £5k laptop
[−] Bishonen88 35d ago
what percentage of people are using local models for anything serious? I reckon single digits, if even that. And in a corporate work environment, probably close to 0.
[−] insane_dreamer 35d ago
validation -- you're using already-validated components instead of relying on the LLM to create them and ensure they're validated
[−] asdev 36d ago
I wonder if people really need this. How many people are really building multiplayer apps like Figma, Linear etc? I'm guessing 99% are CRUD and I doubt that will change. Even if so, would you want to vendor lock into some proprietary technology rather than build with tried and tested open source components?
[−] stopachka 36d ago

> really building multiplayer apps like Figma, Linear

I think there are two surprises here:

1. If it were easier to make apps multiplayer, I bet more apps would be. For example, I don't see why Linear gets to be multiplayer but other CRUD apps don't.

2. When the abstraction is right, building apps with sync engines is easier than building traditional CRUD apps. The Linear team mentioned this themselves here: https://x.com/artman/status/1558081796914483201
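A toy contrast of what that abstraction buys you (hypothetical shapes, not Instant's actual API): with a sync engine, clients declare a query once and the store pushes every change to them, instead of fetching, polling, and merging by hand.

```javascript
// Toy sketch of the sync-engine shape (hypothetical, not Instant's API):
// clients subscribe to a query once; every write notifies all subscribers.
function makeSyncStore() {
  const rows = [];
  const subscribers = [];
  return {
    // Declarative subscription: the callback re-runs on every change.
    subscribe(cb) {
      subscribers.push(cb);
      cb(rows);
    },
    // One write path: every subscriber sees the new state automatically.
    insert(row) {
      rows.push(row);
      subscribers.forEach((cb) => cb(rows));
    },
  };
}
```

The CRUD equivalent spreads this logic across endpoints, fetch calls, and cache invalidation, which is where the accidental complexity tends to live.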

[−] nezaj 36d ago
For what it's worth, Instant is 100% open source!

https://github.com/instantdb/instant

[−] risyachka 36d ago
Yeah, I kinda agree. Considering LLMs write most of the code today, the need for fancy tech is lower than ever. A good old CRUD app looks like a perfect fit for AI: it's simple and repetitive, and AI is great at SQL. A Go binary for the backend and React for the frontend covers 99.9% of use cases with basically zero resource usage. A $5 node will handle 100k MAU without breaking a sweat.
[−] nharada 36d ago
This is super cool and exactly what I've been looking for for personal projects I think. I wanna try it out, but the "agent" part could be more seamless. How does my coding agent know how to work this thing?

I'd suggest including a skill for this, or if there's already one, linking to it in the blog!

[−] dalmo3 36d ago
Congrats on the launch!

InstantDB is a joy to work with. Granted, I've only ever built small toy projects with it, but it's my go-to. Just so much simpler than anything else I've tried in this space.

The core product is so good that the AI emphasis feels weird. Hopefully that's just marketing and not a pivot. Unfortunate if that's what it takes to get funding these days.

[−] jamest 36d ago
They actually deliver on the promise of "relational queries && real-time," which is no small feat.

Though, their console feels like it didn't get the love that the rest of the infra / website did.

Congrats on the 1.0 launch! I'm excited to keep building with Instant.

[−] rock_artist 35d ago
Apologies if this is due to a lack of understanding on my end, but why is it 'for AI-coded' apps?

Don't get me wrong: as I look for a simple backend for an app I'm planning, it does look like another awesome alternative.

But what I'm not understanding is: what makes it 'AI-coded' focused?

And lastly, vs. other backends, it seems to be TS-focused. Do you have plans for drop-in bindings for (native) mobile platforms?

[−] solarkraft 35d ago
What a delightful demonstration. The AI hook is brilliant, but under-explained. I expected a paragraph on how to quickly get running with my assistant (edit: https://www.instantdb.com/tutorial seems to be that, though it focuses on using the SaaS and creating an account).

It's fun to see highly reactive apps converging on triples, Datalog, and Clojure. I never got very warm with Clojure and find Datalog a little weird, so Instant's abstractions are highly welcome! Surely the InstantQL-Datalog translator will be usable as a separate component; that would be super useful for me.

In general, this sounds like a lot of things done right. It sounds VERY close to just what I wish existed.

I guess I understand the choice of Postgres since this seems to be primarily focused on being SaaS. It might not matter much when the backend is Clojure anyway, but an embedded database like SQLite might make it a little simpler to deploy locally.
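For readers new to the triples model mentioned above, here is a toy sketch of entity-attribute-value matching. It is purely illustrative: not Instant's internals and not a real Datalog engine, just the core idea of querying triples with wildcards.

```javascript
// Toy triple store: each fact is an [entity, attribute, value] triple,
// the shape Datalog-style engines query. Purely illustrative.
const triples = [
  ['todo1', 'text', 'buy milk'],
  ['todo1', 'done', false],
  ['todo2', 'text', 'ship 1.0'],
  ['todo2', 'done', true],
];

// Match a pattern where `null` acts as a wildcard variable.
function match(pattern) {
  return triples.filter((t) =>
    t.every((part, i) => pattern[i] === null || pattern[i] === part)
  );
}

// "Which entities are done?" -> bind entity ids from matching triples.
const doneIds = match([null, 'done', true]).map(([e]) => e);
```

A real engine adds joins across patterns and indexes over the triples, but the query shape is the same.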

[−] truetraveller 36d ago
Moose here, congrats! This is the real "firebase alternative", not Supabase. Supabase is good, but it's just hosted Postgres + user login. People who are asking "why would I need this" don't get how difficult scalable data storage is with user permissions. Would absolutely recommend this if it's good. Been in this space for ~7 years, and made a high-perf realtime DB as well. Will contact you guys directly. Some concerns for everyone's benefit:

1) Transparency on pricing: This builds confidence. Need to know exactly what I pay for additional egress/ops (read/write). "unlimited" is not sustainable for the provider (you). For example, Firestore has detailed pricing that makes scaling sustainable for them. see https://cloud.google.com/firestore/pricing.

2) Transparency on limits: req/s, max attributes, max value length, etc. What about querying non-local, non-indexed data (e.g. via a server-side call)? That's costly for you guys. So, what's the limit?

3) Simpler code in the docs/examples overall. Currently, they're not bad, but not great. For example, change the "i" used everywhere to "inst" or "idb". Assume dev is a noob!

4) Simplify the terminology used. This is probably the most important but hardest thing. Internally, keep the same triple structure, but the dev just cares about tables/key/val, or tables + rows. Namespaces/entities are confusing. Also, be consistent and clear. For example: "Namespaces are equivalent to 'tables' in relational databases". Perhaps you meant "namespaces are just a list of tables/entities"? Slightly different, but far clearer, I think. "Attributes are properties associated with namespaces"... I thought attributes were associated with entities? Please keep in mind, I am completely new to InstantDB, so I need to study the architecture more.

5) Simplify the docs BIG TIME, and add an API REFERENCE (super important). Right now you have Tutorials, Examples, Recipes, Docs, and Essays. These are all essentially "docs".

6) Simplify the "About" section. It should be 1/10th the size. Right now it's like a fruit salad of docs, reiterating the features/benefits. Instead, put up pics of both founders. Maybe an investor list. Pics of your office?

[−] ghm2199 36d ago
One thing I have always wanted to do is cancel a remotely executing AI agent that I kicked off, while it streams its part-by-part response (a part could be words, a list of URLs, or whatever you want the FE to display). A good example is a web-researcher agent that searches and fetches web pages remotely and sends them back to the local sub-agent to summarize the results. This is something claude-code in the terminal does not quite provide. Would this be trivial to build in Instant?

Here is how I built it in a web UI: I sent SSE events from server -> client streaming web-search progress, but the client could update an 'x' (cancel) box on the "parent" widget, using the id from an SSE event, via a simple REST call. The id could belong to the parent web search or to certain URLs being fetched. Then whatever was yielding the SSE lines would check the db and cancel the send (assuming it had not already sent all the words).
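A minimal sketch of that cancellation pattern (all names hypothetical; an in-memory Map stands in for the db that the commenter's stream polls):

```javascript
// Hypothetical sketch of stream cancellation via a shared flag.
// An in-memory Map stands in for the database the client writes to via REST.
const cancelled = new Map();

// Client side: the REST call that flips the flag for a given stream id.
function requestCancel(streamId) {
  cancelled.set(streamId, true);
}

// Server side: whatever yields SSE chunks checks the flag between chunks.
function* streamChunks(streamId, chunks) {
  for (const chunk of chunks) {
    if (cancelled.get(streamId)) return; // stop sending mid-stream
    yield chunk;
  }
}
```

The moving part a sync engine could absorb is the shared flag: if both sides subscribe to the same record, the producer sees the cancel write without a separate REST endpoint or polling loop.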

[−] ghm2199 36d ago
For people like me, who are kind of familiar with how react/jetpack compose/flutter-like frameworks work: I recall using react widgets/composables that seamlessly update when they register to receive updates to the underlying data model. The persistence boundary in these apps was the app/device where it was running. The data model was local. You still had to worry about pushing data updates to servers and back to reach other devices/apps.

Instant crosses that persistence boundary: your app can propagate updates to anyone who has subscribed to the abstract datastore, which is on a server somewhere, so you, the engineer, don't have to write that code. Right?

But how is this different/better than things like, i wanna say, vercel/nextjs or the like that host similar infra?

[−] chabad360 35d ago
This seems really cool, I've used Pocketbase before for similar purposes, the only thing it doesn't offer is local-first (which is a bit of a bummer). But one very valuable feature it has is server extensibility. You can write server hooks in JS and Go (it's embeddable as a Go module) to add features you need. For example, in a previous project, I added functionality to send push notifications based on certain user actions. Is this kind of thing possible in InstantDB? Or, would I need to build a worker that listens for those events and fires off the notification on its own?

Also, are there plans to release SDKs in other languages (namely, Dart)?

[−] satvikpendem 35d ago
Conflict resolution for real-time simultaneous updates: how do you resolve them? I use a CRDT to solve precisely this problem, but it seems most multiplayer database services don't actually handle this correctly, using last-write-wins instead.
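For context, a toy sketch of the distinction (not any vendor's implementation): last-write-wins keeps only one of two concurrent updates, while a CRDT such as a grow-only counter merges both.

```javascript
// Last-write-wins: the later timestamp silently discards the other update.
function lwwMerge(a, b) {
  return a.ts >= b.ts ? a : b;
}

// G-counter CRDT: each replica increments its own slot; merge takes the
// per-replica max, so concurrent increments are never lost.
function gcounterMerge(a, b) {
  const merged = { ...a };
  for (const [replica, count] of Object.entries(b)) {
    merged[replica] = Math.max(merged[replica] ?? 0, count);
  }
  return merged;
}

function gcounterValue(counter) {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}
```

With LWW, two clients who concurrently increment a shared count from 5 both write 6 and one increment is lost; with the G-counter, merging the replicas yields 7.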
[−] chrysoprace 36d ago
Is InstantDB no longer about local-first or is the AI angle just a marketing thing?
[−] mohsen1 35d ago
Congratulations on this launch. Stepan and his cofounder have been working on this for years and great to see it being launched finally. I should be building something fun with this.

Sending you guys lots of love and best of luck!

[−] sbochins 35d ago
This, like many other attempts at this type of thing, doesn't account for the in-distribution vs. out-of-distribution aspect of ML models. Using something the model has the most training data on will have the best outcomes. A new handcrafted programming language will perform significantly worse than a poorly-thought-out programming language that is widely used. Everything is going to revert to the mean as LLMs continue to progress.
[−] owenthejumper 36d ago
This is basically a fancy Pocketbase / Supabase?
[−] kenrick95 36d ago
Congrats on the 1.0 milestone!

I had a Show HN that was built with Instant: https://news.ycombinator.com/item?id=44247029 The common request from that thread was to add guest auth, and a few months later Instant had it baked in, so it was really easy to add that feature. Great dev experience :)

[−] saberience 35d ago
Wait, why is this needed at all? Why is this backend AI specific?

Your Claude Code or Codex is already an expert in all existing backends and databases, we don't need a new backend for AI.

You can literally ask Claude to pick whatever backend it thinks is best, and it will build it, deploy it, and it will work fine and be significantly cheaper than Instant 1.0.

[−] patwolf 35d ago
I'm currently working on an app that needs offline support, and I wish I had something like this when I started.

One of our requirements though is to be able to completely host in our own infrastructure. I know this is open source, but it would be nice if there was a simple path to self hosting.

[−] dewey 35d ago
Maybe I'm missing something, but wouldn't using the most popular open source framework + most popular open source database be a much better fit as there's much more training data and you don't lock yourself into yet another framework?

I'd just go with Rails + Postgres and have all the documentation and options open to me.

[−] pugio 36d ago
Thanks, this might be exactly what I'm looking for.

I see you have support for vanilla js and svelte, but it's unclear whether you can get all the same functionality if you don't use React. Is React the only first class citizen in this stack?

[−] tarcon 35d ago
Do we really have no way to build this in a single programming language with a single base database?

IndexedDB, Postgres, JavaScript, TypeScript, Clojure. Not bad, but not much more attractive than the usual technology zoo any startup seems to end up with.

[−] rambrrest 35d ago
Sounds backward to me. Why would you need a new backend for AI-coded apps? If you create a new backend, you also have to 'teach' the LLM how to use it. How do you imagine doing that?
[−] seyz 35d ago
Nice launch! It reminds me a lot of RootCX (https://github.com/RootCX/RootCX)
[−] jvalencia 36d ago
How does security and isolation work? If someone else's account is compromised, how do I know I won't be? If instant is compromised, how do I know I won't be?
[−] nl 36d ago
What does "X Team Members per app" mean? Is this the number of users you can have registered or does "Team Member" have special meaning?
[−] mentalgear 35d ago
No e2e encryption and no p2p is a deal breaker for me.
[−] shay_ker 36d ago
with a huge multi-tenant database, how do you deal with noisy neighbors? indexes are surely necessary, which impose a non-trivial cost at scale.
[−] d0100 36d ago
Any example more complex in the backend?

Are we supposed to expose all entities and relationships and rely on row level security?

[−] 4b11b4 35d ago
Elixir, a language for AI-coded apps
[−] ladon86 36d ago
Looks very nice! I'll give it a spin for prototypes.

Would love to check out /docs but it's currently a 404.

[−] jgeurts 36d ago
Is there a way to pair this with an existing (Postgres) database?
[−] reassess_blind 34d ago
Is there rate limiting? Can't find anything in the docs.
[−] aboodman 36d ago
Congrats, Instant team. Genuinely happy for y'all.
[−] zbiggistardust 36d ago
What's the difference between Instant and Convex?
[−] 2001zhaozhao 35d ago
Wow, the demo-in-a-blogpost is really impressive.
[−] singpolyma3 36d ago
Seems like a supabase competitor?
[−] scoopdewoop 35d ago
Man, it's hard not to have a reaction to another BaaS. Every "pricing" page really goes to show that engineering took a backseat to rent-seeking.

There is an incentive to take advantage of users' ignorance rather than instruct them. There is no incentive to make self-hosting easy, secure, or sustainable.

This is literally $600/month for 250GB of storage with no SLA. Cool "value add" bro.

[−] vincnetas 35d ago
Can i selfhost?
[−] minantom 36d ago
how is this better than vercel?
[−] rylan-talerico 36d ago
congrats!