Show HN: sllm – Split a GPU node with other developers, unlimited tokens (sllm.cloud)

by jrandolf 104 comments 188 points

[−] freedomben 41d ago
This is an excellent idea, but I worry about fairness during resource contention. I don't run queries often, but when I do they tend to be big and long-running. I wouldn't want to eat up the whole system when other users need it, but I also would want the cluster available when I need it. How do you address a case like this?
[−] jrandolf 41d ago
We implement rate limiting and queuing to ensure fairness, but if a massive number of people run huge, long queries, there will be waits. The question is whether people will actually do this; more often than not, users are idle.
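
For illustration only (a minimal sketch, not sllm's actual implementation): per-user fairness like this is commonly built from a token-bucket limiter in front of a shared queue, e.g. in Python:

    import time
    from collections import defaultdict

    class TokenBucket:
        """Per-user limiter: refills `rate` requests/sec, holds at most `burst`."""
        def __init__(self, rate: float, burst: float):
            self.rate, self.burst = rate, burst
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # over the limit: caller queues the request instead

    # Hypothetical limits; real values would be tuned to node throughput.
    buckets = defaultdict(lambda: TokenBucket(rate=0.5, burst=5))

    def admit(user_id: str) -> bool:
        return buckets[user_id].allow()
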
[−] mogili1 41d ago
A rate limit is essentially a token limit.
[−] ibejoeb 41d ago
It depends on how it's implemented. If it's a fixed window, then your absolute ceiling is tokens per window times the number of windows in a month. If it's a function of other usage, like a timeshare, you're still paying a fixed price for the month and you get what you get, without paying more per token. There's an intrinsic limit anyway, based on how many tokens the model can process on that GPU in a month, even if it's only you.
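
Back-of-the-envelope on that intrinsic ceiling, using the ~3,000 tok/s node figure quoted later in the thread (an assumption, not a measured number):

    node_tok_per_s = 3_000          # assumed peak throughput of the whole node
    seconds_per_month = 30 * 24 * 3600
    ceiling = node_tok_per_s * seconds_per_month
    print(f"{ceiling / 1e9:.1f}B tokens/month")  # ~7.8B, even with zero contention
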
[−] delusional 41d ago
Time x capacity is also a limit. There's always a limit.
[−] freedomben 41d ago
Is there any way to buy into a pool of people with similar usage patterns? Maybe I'm overthinking it, but just wondering
[−] ssl-3 41d ago
I think it'd be best to pool with people with different patterns, not the same patterns. Perhaps it would be best to pool with people in different timezones, and/or with different work/sleep schedules.

If everyone in a pool uses it during the ~same periods and sleeps during the ~same periods, then the node would oscillate between contention and idle -- every day. This seems largely avoidable.

(Or, darker: Maybe the contention/idle dichotomy is a feature, not a bug. After all, when one has control of $14k/month of hardware that is sitting idle reliably-enough for significant periods every day, then one becomes incentivized to devise a way to sell that idle time for other purposes.)

[−] vineyardmike 41d ago
This is basically why the big companies can sell subscriptions for cheaper than API costs. First priority can go to API users, lower-priority subscription users get slotted in as space/SLO allows, and the remaining idle GPU time is sold to batch users and spare training. Oh, and shift by geography as necessary for different nations' working hours.
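
A toy sketch of that tiered ordering (purely illustrative, not any provider's real scheduler):

    import heapq

    PRIORITY = {"api": 0, "subscription": 1, "batch": 2}  # lower = served first
    queue = []  # (priority, arrival_order, request_id)

    def submit(request_id, tier, order):
        heapq.heappush(queue, (PRIORITY[tier], order, request_id))

    def next_request():
        return heapq.heappop(queue)[2] if queue else None

    submit("r1", "batch", 0)
    submit("r2", "api", 1)
    print(next_request())  # "r2": API traffic jumps ahead of batch work
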
[−] petterroea 41d ago
To be fair, this is the price you pay for sharing a GPU. Probably good for stuff that doesn't need to be done "now" but that you can just launch and run in the background. I bet some graphs showing when the GPU is busiest would be useful as well.
[−] pokstad 41d ago
This problem sounds like an excellent opportunity. We need a race to the bottom for hosting LLMs to democratize the tech and lower costs. I cheer on anyone who figures this out.
[−] mememememememo 41d ago
This is classic queuing theory, rate limits, etc. I don't have an answer, but I would look there.
[−] zozbot234 41d ago
Ultimately the most sensible way of handling this is "surge pricing": whenever the inference platform is congested, the highest-priority tokens cost extra, over and above the base subscription (which could perhaps make the base subscription itself a bit cheaper).
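
As a purely hypothetical illustration of such a surge curve (made-up numbers, not anything sllm has proposed):

    def priority_price_per_mtok(base: float, utilization: float) -> float:
        """Price per million priority tokens ramps up once the node passes 80% load."""
        surge = max(0.0, (utilization - 0.8) / 0.2)  # 0 at 80% load, 1 at 100%
        return base * (1 + 4 * surge)                # up to 5x under full congestion

    print(priority_price_per_mtok(3.2, 0.95))  # ~12.8
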
[−] cyanydeez 41d ago
Also, cache eviction during contention will degrade everyone's service.

I question whether they actually understand LLMs at scale.

[−] QuantumNomad_ 41d ago

> How does billing work?

> When you join a cohort, your card is saved but not charged until the cohort fills. Stripe holds your card information — we never store it. Once the cohort fills, you are charged and receive an API key for the duration of the cohort.

Have any cohorts filled yet?

I’m interested in joining one, but only if it’s reasonable to assume that the cohort will be full within the next 7 days or so. (Especially because in a little over a week I’m attending an LLM-centered hackathon where we can either use AWS LLM credits provided by the organizer, or we can use providers of our own choosing, and I’d rather use either yours or my own hardware running vLLM than the LLM offerings and APIs from AWS.)

I’d be pretty annoyed if I join a cohort and then it takes like 3 months before the cohort has filled and I can begin to use it. By then I will probably have forgotten all about it and not have time to make use of the API key I am paying you for.

[−] RIMR 41d ago
I read the FAQ, and I can't imagine this is going to work the way you want it to. It fundamentally doesn't make sense as a business model.

I can sign up for a cohort today, but there's not even a hint of how long it will take the cohort to fill up. The most subscribed cohort is only at 42% (and dropping), so maybe days to weeks? That's a long time to wait if you have a use case to satisfy.

And then the cohort expires, and I have to sign up for another one and play the waiting game again? Nobody wants that level of unreliability.

Also, don't say "15-25 tok/s". That is a min-max figure, but your FAQ says this is actually a maximum. It makes no sense to state a maximum as a range, and you state no minimum, so I can only assume it is 0 tok/s. If all users in the cohort use it simultaneously, the best they're getting is something like 1.5 tok/s (probably less), which is abysmal.

You mention "optimization", but I have no idea what that means. It certainly doesn't mean imposing token limits, because your FAQ says that won't happen. If more than 25 users are using the cohort simultaneously, it is a physical impossibility to improve performance to the levels you advertise without sacrificing something else, like switching to a smaller model, which would essentially be fraud, or adding more GPUs which will bankrupt you at these margins. With 465 users per cohort, a large chunk of whom will be using tools like OpenClaw, nobody will ever see the performance you are offering.

The issue here is that you are trying to offer affordable AI GPU nodes without operating at a loss. The entire AI industry is operating at a loss right now because of how expensive this all is. This strategy literally won't work right now unless you start courting VCs to invest tens to hundreds of millions of dollars so you can get this off the ground by operating at a loss until, hopefully, you turn a profit at some point in the future. But by that point developers will probably be able to run these models at home without your help.

[−] MuffinFlavored 41d ago

> Running DeepSeek V3 (685B) requires 8×H100 GPUs which is about $14k/month. Most developers only need 15-25 tok/s.

> deepseek-v3.2-685b, $40/mo/slot for ~20 tok/s, 465 slots total

> 465 users × 20 tok/s = 9,300 tok/s needed

> The node peaks at ~3,000 tok/s total. So at full capacity they can really only serve:

> 3,000 ÷ 20 = 150 concurrent users at 20 tok/s

> That's only 32% of the cohort being active simultaneously.

[−] mmargenot 41d ago
This is a great idea! I saw a similar (inverse) idea the other day for pooling compute (https://github.com/michaelneale/mesh-llm). What are you doing for compute in the backend? Are you locked into a cohort from month to month?
[−] kaoD 41d ago
How is the time sharing handled? I assume that if I submit a unit of work, it will be loaded into VRAM and then run (how is time shared? how many work units can run in parallel?)

How large is a full context window in MiB and how long does it take to load the buffer? I.e. how many seconds should I expect my worst case wait time to take until I get my first token?

[−] jrandolf 39d ago
Thanks to everyone who shared feedback. We’re implementing it now.

Here’s what’s changed:

- We’ve removed the other LLMs for now and are focusing entirely on Qwen 3.5. We’ll bring back additional smaller models later, but most usage was already concentrated on Qwen 3.5.

- Pricing is now around $50. You get roughly 2× the throughput (61 tok/s vs. 31 tok/s, verified in testing), and it’s still unlimited. For context, that’s about 158M tokens per month. Comparable providers like Novita charge around $3.2 per million tokens, so this comes out to roughly 10% of typical token costs (quick math after this list).

- Context size is now capped at 32K tokens. For the vast majority of use cases, this is more than sufficient.
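
Quick check of the pricing math above (assuming a 30-day month; nothing here beyond the figures already quoted):

    tok_per_s = 61
    tokens_per_month = tok_per_s * 30 * 24 * 3600         # ~158M tokens
    api_price_per_mtok = 3.2                              # Novita, per the comparison above
    equivalent_api_cost = tokens_per_month / 1e6 * api_price_per_mtok
    print(f"{tokens_per_month/1e6:.0f}M tokens, ${equivalent_api_cost:.0f} at API rates")
    # -> ~158M tokens, ~$506 at API pricing; $50/mo is roughly 10% of that
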

[−] peter_d_sherman 41d ago
What a brilliant idea!

Split a "it needs to run in a datacenter because its hardware requirements are so large" AI/LLM across multiple people who each want shared access to that particular model.

Sort of like the real-estate equivalent of subletting, or splitting a larger space into smaller spaces and subletting each one...

Or, like the web-hosting equivalent of splitting a single server into multiple virtual machines for shared hosting by multiple other parties, or what-have-you...

I could definitely see marketplaces similar to this popping up in the future!

It seems like it should make AI cheaper for everyone... that is, "democratize AI"... in a "more/better/faster/cheaper" way than AI has been democratized to date...

Anyway, it's a brilliant idea!

Wishing you a lot of luck with this endeavor!

[−] dockerd 39d ago
I received an email mentioning that earlier cohorts are canceled.

Apparently the earlier pricing was better for us as customers, because I had the option of a lower price, i.e. $10 per month with a one-month commitment, to see how the platform evolves and then sign up for other models after testing as needed.

I am not sure how long this new cohort will take to fill now. Looking back, a slightly better option would have been to collect multiple preferences from the customer list and start with the one that meets the threshold.

[−] artificialprint 41d ago
It didn't make sense to launch multiple $10 and $40 subscriptions right at the start, because now they're competing with each other.

Also, the mobile version is a bit broken, but good idea and good luck!

[−] varunr89 41d ago
$40/mo for DeepSeek R1 seems steep compared to a Pro sub on OpenAI/Claude unless you run 24/7. I'm not sure how sharing is making this affordable.
[−] Lalabadie 41d ago
This is the most "Prompted ourselves a Shadcn UI" page I've seen in a while lol

I dig the idea! I'm curious where the costs will land with actual use.

[−] vova_hn2 41d ago
1. Is the given tok/s estimate for the total node throughput, or is it what you can realistically expect to get? Or is it the worst case scenario throughput if everyone starts to use it simultaneously?

2. What if I try to hog all resources of a node by running some large data processing and making multiple queries in parallel? What if I try to resell the access by charging per token?

Edit: sorry if this comment sounds overly critical. I think that pooling money with other developers to collectively rent a server for LLM inference is a really cool idea. I also thought about it, but haven't found a satisfactory answer to my question number 2, so I decided that it is infeasible in practice.

[−] OJFord 40d ago
Especially with only 1mo commitment, what happens if there's a lot of churn after the first month – more people leave a cohort than are waiting for one? The whole cohort is then waiting for it to fill again before it restarts? And will people waiting for the next cohort to fill automatically be reassigned to the last (now not full) one anyway, or would there then be multiple partially filled cohorts for a single spec?

I like the idea, I just wouldn't want my subscription to suddenly be on hold because a peer decided to stop theirs.

[−] p_m_c 41d ago
Do you own the GPUs or are you multiplexing on a 3rd party GPU cloud?
[−] singpolyma3 41d ago
25 tok/s is barely usable. Maybe for a background runner.
[−] moralestapia 41d ago
This is great, thanks!

I personally would like something like this but with "regular" GPU access. Some people still use them for something other than LLMs ^^.

[−] dreamdayin9 40d ago
What is the main moat of your idea? Privacy? Otherwise it looks like a less flexible API compared to what chutes.ai or openrouter.ai provide, and they have TEE instances, which are more private. Also, why did you decide on launching V3 instead of some of the much more exciting models revealed recently, like MiMo-V2-Pro or Arcee's Trinity Large?
[−] yoavm 41d ago

> Prices start at $5/mo for smaller models.

Is there actually any $5/mo offering? It seems like the cheapest models start at $10.

[−] pgbouncer 40d ago
1-week or even 1-day windows would be great, especially just to test it at this early stage.
[−] spuz 41d ago
It seems crazy to me that the "Join" button does not have a price on it, and clicking it simply forwards you to a Stripe page, again with no price information. How am I supposed to know how much I'm about to be charged?
[−] cedws 33d ago
Just came back to this and saw it's shutting down. Unfortunate.
[−] scottcha 41d ago
Pretty cool idea, but what's the stack behind this? 15-25 tok/s seems a bit low, as the expected state of the art for most providers is around 60 tok/s, and quality of life dramatically improves above that.
[−] avereveard 41d ago
Interesting. There's a trickle of low-intensity jobs one can always keep running, but GLM's own plan is $30/mo for something like 300 tok/s. I know that one is subsidized, but still.
[−] spuz 41d ago
Is this not a more restricted version of OpenRouter? With OpenRouter you pay for credits that can be used to run any commercial or open-source model and you only pay for what you use.