Cloudflare's AI Platform: an inference layer designed for agents (blog.cloudflare.com)

by nikitoci 95 comments 307 points

[−] mips_avatar 28d ago
So it's basically just OpenRouter with Cloudflare Argo networking? I feel like they could do much more interesting stuff with their Replicate acquisition. Application-specific RL is getting so good, but there's no good way to deploy these models scalably. Even providers like Fireworks, which claim to let you deploy LoRAs in a scalable way, can't do it. For now I literally have to host my application's base load on a rack of 3090s in my garage, which seems silly, but it saves me $1k a month.
[−] bryden_cruz 28d ago
Running a rack of 3090s in your garage to avoid provider lock-in/costs is the most Hacker News thing. Out of curiosity, what are you doing for uptime/failover? If you are running production traffic to that garage rack, does your app just degrade gracefully if your home internet drops, or do you have a cloud fallback?
[−] mips_avatar 27d ago
Yeah, the model I'm running locally is just one of several models the app supports, and it falls back to the others if it's not available.
[−] handfuloflight 28d ago
[flagged]
[−] jonfromsf 28d ago
Gilfoyle? Is that you?
[−] mips_avatar 28d ago
I think these gpus were actually used for bitcoin mining before I bought them
[−] menno-sh 27d ago
It's Anton's grandson!
[−] vladgur 28d ago
Curious which models you're able to run, and how many 3090s they require at scale?
[−] mips_avatar 28d ago
4 3090s, with NVLink on each pair. Super fast inference on MoE models around 20-36B.
[−] embedding-shape 28d ago

> Super fast inference

How fast is "super fast" exactly, and with what runtime+model+quant specifically? Curious how 4x 3090s compare to 1x Pro 6000; you could probably put together 4x 3090s for a fraction of the cost of the Pro 6000, but the times I've seen the tok/s in/out for multi-GPU setups my heart always drops a little.

[−] mips_avatar 27d ago
I haven't benchmarked against a Pro 6000; it's more that I have 4 3090s and I don't have a Pro 6000.
[−] embedding-shape 27d ago
Yes, that's why I'm asking you what exactly 4 3090s get in prompt-processing and generation, sorry if I was unclear.
[−] mips_avatar 27d ago
Maxes out around 4K tok/s output. Each pair of 3090s runs its own instance of the model, with parallelism across the NVLink bridge. Though NVLink is only 2x over PCIe 5.
[−] ascorbic 28d ago
The interesting part is that you can use the same API with Workers AI models (hosted at the edge) and proxied models (OpenRouter-style).

Disclaimer: I work at Cloudflare, but not on this.
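
For the curious, here's roughly what that looks like from a Worker. The env.AI.run binding is the standard Workers AI API; the proxied model identifier below is illustrative, so check the docs for the exact format:

```ts
interface Env {
  AI: Ai; // Workers AI binding from wrangler config
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // Workers AI model hosted on Cloudflare's edge:
    const edge = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      prompt: "Say hello in five words.",
    });

    // Proxied (OpenRouter-style) model via the same binding.
    // NOTE: the identifier format for proxied models is an assumption.
    const proxied = await env.AI.run("openai/gpt-4o-mini" as any, {
      prompt: "Say hello in five words.",
    });

    return Response.json({ edge, proxied });
  },
};
```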

[−] mips_avatar 27d ago
It's the same problem as Fireworks: the only models supporting LoRA are year-old dense models that perform horribly on most tasks. If you want to do anything close to relevant you still need to rent/own dedicated GPUs, which seems insane to me when vLLM fully supports dynamic LoRA loading.
[−] whereistejas 29d ago
This actually looks very useful. Cloudflare seems to be bringing together a great set of tools. Not to mention, D2 is literally the only sqlite-as-a-service solution out there whose reliability is great and whose free tier limits are generous.
[−] eis 28d ago
D1 reliability has been bad in our experience. We've had queries hanging on their internal network layer for several seconds, sometimes double digits over extended periods (on the order of weeks). Recently I've seen a few times plain network exceptions - again, these are internal between their worker and the D1 hosts. And many of the hung queries wouldn't even show up under traces in their observability dashboard so unless you have your own timeout detection you wouldn't even know things are not working. It was hard to get someone on their side to take a look and actually acknowledge and understand the problem.

But even without the network issues that have plagued it, I would hesitate to build anything for production on it, because it can't even do transactions, and the product manager for D1 openly stated they won't implement them [0]. Your only way to ensure data consistency is to use a Durable Object, which comes with its own costs and tradeoffs.

https://github.com/cloudflare/workers-sdk/issues/2733#issuec...

The basic idea of D1 is great. I just don't trust the implementation.

For a hobby project it's a neat product for sure.

[−] ignoramous 28d ago

> And many of the hung queries wouldn't even show up under traces in their observability dashboard

How did you work around this problem? As in, how do you monitor for hung queries and cancel them?

> D1 reliability has been bad in our experience.

What about reads? We use D1 in prod & our traffic pattern may not be similar to yours (our workload is async queue-driven & so retries can last on the order of weeks), nor have we really observed D1 erroring out for extended periods or frequently.

[−] eis 28d ago

> How did you work around this problem? As in, how do you monitor for hung queries and cancel them?

You just wrap your DB queries in your own timeout logic. You can then continue your business logic, but you can't truly cancel the query because, well, the communication layer for it is stuck and you can't kill it via a new connection. Your only choice is to abandon that query. Sometimes we could retry and it would immediately succeed, suggesting the original query hit something like packet loss that wasn't handled properly by CF. That's easy when it's a read, but when you have writes it gets complicated fast and you have to ensure your writes are idempotent. And since they don't support transactions it's even more complex.
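
Concretely, the wrapper is something like this (a minimal sketch; the DB binding name and the 2s budget are arbitrary):

```ts
// Wrap a D1 query promise in a soft timeout. The underlying query can't
// be cancelled; on timeout we abandon it and surface an error so the
// caller can retry (safe for reads, needs idempotency for writes).
async function queryWithTimeout<T>(query: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`D1 query timed out after ${ms}ms`)),
      ms,
    );
  });
  try {
    return await Promise.race([query, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

// Usage inside a Worker handler (env.DB is an assumed D1 binding):
// const rows = await queryWithTimeout(
//   env.DB.prepare("SELECT * FROM users WHERE id = ?").bind(id).all(),
//   2000,
// );
```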

Aphyr would have a field day with D1 I'd imagine.

> What about reads? We use D1 in prod & our traffic pattern may not be similar to yours (our workload is async queue-driven & so retries can last on the order of weeks), nor have we really observed D1 erroring out for extended periods or frequently.

We have reads and writes, most of which are latency sensitive (direct user feedback). A user interaction can involve 3-5 queries that might need to run in sequence. When queries take 500ms+ the system starts to feel sluggish. When they take 2-3s it's very frustrating. The high latencies happened for both reads and writes; you can do a simple "SELECT 123" and it would hang. You could even reproduce that from the Cloudflare dashboard when it's in this degraded state.

From the comments of others who had similar issues, I think it heavily depends on the CF locations or D1 hosts. Most people are probably lucky and don't get one of the faulty D1 servers. But there are a few dozen people who were not so lucky; you can find them complaining on GitHub, on the CF forum, etc., but they're simply not heard. And you can find these complaints going back years.

This long timeframe without fixes to their network stack (networking is CF's bread and butter!), the refusal to implement transactions, the silence in their forum in response to cries for help, the absurdly low 10GB limit for databases... it all just adds up. We made the decision not to build any new product on D1 and to just continue using proper databases. It's a shame, because workers + a close-by read replica could be absolutely great for latency. Paradoxically, the outcome was the opposite.

[−] brikym 28d ago
There is always one thing that bites you because Cloudflare is different. I just built an AI game (sleuththetruth.com) and the primary reason it's so slow to prompt a new board is actually not because of AI latency. It's because CF workers have a limit of 6 connections (including spawned workers). There is no way to gulp down all the wiki images I want all at once. If I had put the backend on Railway I don't think I'd have this issue.
[−] kentonv 28d ago
You can farm out the requests to a bunch of Durable Objects. Each DO will have a separate six-concurrent limit. And you can send unlimited concurrent requests to Durable Objects. (This is not an exploit, this is working as intended. The concurrency limit exists to prevent creating excessive connections from a single machine; farming to DOs means the requests are spread out.)

Also note that as of recently, the concurrent limit applies only up to the point that response headers are received, not during body streaming.
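
A rough sketch of the fan-out (the FETCHER binding and Fetcher class names are made up, not from any official example):

```ts
import { DurableObject } from "cloudflare:workers";

interface Env {
  FETCHER: DurableObjectNamespace<Fetcher>;
}

// Each DO instance has its own six-connection budget, so spreading
// fetches across many DOs lifts the per-isolate concurrency cap.
export class Fetcher extends DurableObject {
  async fetchUrl(url: string): Promise<ArrayBuffer> {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`fetch failed: ${res.status}`);
    return res.arrayBuffer();
  }
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const urls: string[] = await req.json();
    // One DO per URL here; batching a handful of URLs per DO also works.
    const bodies = await Promise.all(
      urls.map((url, i) => {
        const stub = env.FETCHER.get(env.FETCHER.idFromName(`fetch-${i}`));
        return stub.fetchUrl(url);
      }),
    );
    return Response.json({ fetched: bodies.length });
  },
};
```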

[−] brikym 27d ago
Great tip. I knew about #2, which still doesn't help me, but #1 is nowhere in their docs!
[−] vjerancrnjak 28d ago
Just keep the connection alive with pipelining; depending on the server, that's 100k+ RPS.
[−] kylehotchkiss 28d ago
* D1, but agreed. I wish Cloudflare would offer a built-in D1-to-R2 backup system though! (It can be done with custom code in a worker, but I wish it were first-party.)
[−] Normal_gaussian 28d ago
yeah this really sucks.

No-downtime snapshots would be best, but I'd be quite happy with a blocking backup on a set schedule that can be configured from the GUI / the CLI / a config file. It's a huge PITA having to play 'trust me bro' with clients and their admins using custom workers and backups.

I currently stream a D1 dump -> worker (encrypt w/ key wrapping) -> R2 on a schedule, then have a container spin up once a day and create changesets from the dumps. An external tool pulls the dumps and changesets.
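
For anyone wanting to replicate it, a trimmed-down sketch of the dump-and-encrypt worker (binding names DB/BACKUPS and the key secret are mine; real key wrapping and the changeset step are left out):

```ts
interface Env {
  DB: D1Database;
  BACKUPS: R2Bucket;
  BACKUP_KEY: string; // base64-encoded 256-bit AES key, stored as a secret
}

export default {
  async scheduled(_event: unknown, env: Env): Promise<void> {
    // Naive "dump": serialize every user table to JSON.
    const tables = await env.DB
      .prepare("SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE '_cf%'")
      .all<{ name: string }>();
    const dump: Record<string, unknown[]> = {};
    for (const { name } of tables.results) {
      dump[name] = (await env.DB.prepare(`SELECT * FROM "${name}"`).all()).results;
    }

    // Encrypt with AES-GCM via Web Crypto.
    const key = await crypto.subtle.importKey(
      "raw",
      Uint8Array.from(atob(env.BACKUP_KEY), (c) => c.charCodeAt(0)),
      "AES-GCM",
      false,
      ["encrypt"],
    );
    const iv = crypto.getRandomValues(new Uint8Array(12));
    const ciphertext = await crypto.subtle.encrypt(
      { name: "AES-GCM", iv },
      key,
      new TextEncoder().encode(JSON.stringify(dump)),
    );

    // Store IV alongside ciphertext, keyed by timestamp.
    const body = new Uint8Array(iv.length + ciphertext.byteLength);
    body.set(iv);
    body.set(new Uint8Array(ciphertext), iv.length);
    await env.BACKUPS.put(`d1-backup-${Date.now()}.json.enc`, body);
  },
};
```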

[−] whereistejas 24d ago
would https://litestream.io/ be a good solution here?
[−] BoorishBears 28d ago

> For those who don’t use Workers, we’ll be releasing REST API support in the coming weeks, so you can access the full model catalog from any environment.

Cloudflare seems to be building for lock-in and I don't love it. I especially don't understand how you build an OpenRouter equivalent and launch with bindings only for your custom runtime.

[−] switz 28d ago
Workers runtime is open source and permissively licensed fwiw

https://github.com/cloudflare/workerd

[−] eis 28d ago
Yes, but that is just a tiny part of the whole CF Workers ecosystem. The other services are not open source, so the lock-in is very real. There are no API-compatible alternatives that cover a good chunk of the services. If you build your application around Workers and make use of the integrated services and APIs, there is no way for you to switch to another provider, because, well, there is none.
[−] mikeocool 28d ago
Agreed -- except that all of their docs and marketing pitch it for use cases like "per-user, per-tenant or per-entity databases" -- which would be SO great.

But in practice, it's basically impossible to use it that way in conjunction with workers, since you have to bind every database you want to use to the worker, and binding a new database requires redeploying the worker.

[−] AgentME 28d ago
If you want to dynamically create SQLite databases, then Durable Objects, each backed by its own SQLite database, seem to be the way to go currently.
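
Something like this, using the DO SQLite storage API (class/binding names are illustrative, and the class needs a new_sqlite_classes migration in wrangler config):

```ts
import { DurableObject } from "cloudflare:workers";

// One Durable Object per tenant, each backed by its own SQLite database.
export class TenantDB extends DurableObject {
  constructor(ctx: DurableObjectState, env: unknown) {
    super(ctx, env);
    ctx.storage.sql.exec(
      "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)",
    );
  }

  addUser(name: string): void {
    this.ctx.storage.sql.exec("INSERT INTO users (name) VALUES (?)", name);
  }

  listUsers(): { name: string }[] {
    return this.ctx.storage.sql
      .exec<{ name: string }>("SELECT name FROM users")
      .toArray();
  }
}

// In the Worker, route by tenant ID to get that tenant's database:
// const stub = env.TENANT_DB.get(env.TENANT_DB.idFromName(tenantId));
// await stub.addUser("alice");
```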
[−] eis 28d ago
And now you've put everything on the equivalent of a single NodeJS process running on a tiny VM. Next step: spread out over multiple Durable Objects, but that means implementing your own sharding logic. Complexity escalates very fast once you leave toy-project territory.
[−] rs_rs_rs_rs_rs 28d ago
Yeah but the 10GB limit for D1 is crazy, can you really start building on that? Other than toy projects?
[−] jillesvangurp 28d ago
Most website content management systems would never get close to that size. If you need a bigger database, D1 is probably the wrong solution to begin with. 10GB can be millions of records, depending on your table structure. But if you're gathering some survey data, running a CMS, etc., you'll probably be fine with even just a few MB of data, which is probably the sweet spot for D1.
[−] dpark 28d ago
Really depends on what you’re putting in the DB. Cloudflare is clear that these are supposed to be very localized DBs. Per user or tenant.
[−] chrisldgk 27d ago
Per their own docs, D1 is primarily meant for things like Auth DBs that you have frequent read/write access to but that store limited amounts of data. If you need more storage, running Postgres somewhere else and querying via Hyperdrive is probably what you want to do instead.
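
The Hyperdrive route looks roughly like this (a minimal sketch assuming a Hyperdrive binding named HYPERDRIVE and the postgres npm client):

```ts
import postgres from "postgres";

interface Env {
  HYPERDRIVE: Hyperdrive; // binding name is illustrative
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // Hyperdrive pools and accelerates connections to an external Postgres.
    const sql = postgres(env.HYPERDRIVE.connectionString);
    try {
      const users = await sql`SELECT id, email FROM users LIMIT 10`;
      return Response.json(users);
    } finally {
      await sql.end();
    }
  },
};
```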
[−] ncrmro 28d ago
Turso/libsql has been great for PoC projects so far
[−] james2doyle 28d ago
I find it really confusing that the worker AI models on here: https://developers.cloudflare.com/workers-ai/models/ do not have full overlap with the ones on here: https://developers.cloudflare.com/ai/models/

Yes, you can see the same "hosted" ones on there, but when you look at the models endpoint, there are far fewer options under the "workers-ai/*" namespace. Is that intentional?

[−] james2doyle 28d ago
To better clarify, I don't see "workers-ai/@cf/google/gemma-4-26b-a4b-it" in the /models endpoint on gateway.ai.cloudflare.com, but it does seem to exist as a hosted model. Same with "workers-ai/@cf/nvidia/nemotron-3-120b-a12b", which I would expect to see.
[−] samjs 28d ago
Hey James.

Thanks for the feedback, and good catch. Looks like that endpoint is pulling from a slightly out-of-date data source. The docs/dashboard are currently the best resources for the full catalog, but we'll update that API to match.

[−] sf_tristanb 28d ago
Sexy, but I wouldn't trust it. Why? Because Cloudflare AI Gateway is reporting inaccurate/wrong prices for flagship models such as Nano Banana 2 and Nano Banana Pro (I run a production app using those). I've been reporting it on Discord and Twitter, and they don't care. Enterprise client here :)
[−] minglu 28d ago
Hi, I'm the PM for AI Gateway. We want to make sure our pricing is correct. I found your tweets about this and will dig in!
[−] sf_tristanb 25d ago
thank you
[−] wahnfrieden 29d ago
No spending limit / no ability to set a budget, unlike Google or OpenAI. Be prepared for an eye-watering invoice if you have a bug or get hacked.

edit: Why downvote? It's correct, and it's a risk that competitors handle better, including on the CDN side (e.g. Bunny CDN). Maybe you're just used to the risk and haven't felt the burn yourself yet. Or you have the mistaken notion that there's no price at which temporary downtime is worthwhile to avoid paying.

[−] rl3 28d ago

> Be prepared for an eye-watering invoice if you have a bug or get hacked.

Speaking of:

https://news.ycombinator.com/item?id=47787042

I really hope that person gets a resolution from Cloudflare that doesn't financially ruin them.

[−] james2doyle 28d ago
I just added some credits to my account. You can set a daily $ spend limit, as well as add credits without auto-refill.
[−] throwpoaster 29d ago
Anthropic gonna acquire Cloudflare for stock. Solves their infrastructure problems in one shot.
[−] kylehotchkiss 28d ago
No way! Cloudflare will buy anthropic when the economy begins self-correcting. Looking forward to Workers AI getting all those H100s to run more Qwens
[−] neya 29d ago
I'm not ready for another rug pull, so please no :( I really enjoy Cloudflare's CDN.
[−] ramesh31 29d ago
Big, could be a viable Bedrock alternative. Probably better uptime than Anthropic or AWS, too.
[−] bm-rf 29d ago
Not seeing any pricing info on the models[1] page. Wonder how much of a markup this adds over paying providers directly. Perhaps Cloudflare is doing this at cost? Also interesting that zero data retention is not on by default, and is not supported with all providers[2]. Finally, would be great if this could return OpenAI AND Anthropic style completions.

[1] https://developers.cloudflare.com/ai/models/

[2] https://developers.cloudflare.com/ai-gateway/features/unifie...

[−] samjs 28d ago
Hey! I'm one of the engineers who built this :)

We'll be adding prices to the docs and the model catalog in the dashboard shortly.

In short: currently the pricing matches whatever the provider charges. You can buy unified billing credits [1], which carry a small processing fee.

> Finally, would be great if this could return OpenAI AND Anthropic style completions.

Agreed! This will be coming shortly. Currently we'll match the provider themselves, but we plan to make it possible to specify an API format when using LLMs.

[1]: https://developers.cloudflare.com/ai-gateway/features/unifie...

[−] agentifysh 28d ago
excellent! please make sure to include rate limit details as well.
[−] yoavm 29d ago
[−] bm-rf 29d ago
Thanks. I don't see pricing for foundation models such as GPT-5.4, however.
[−] datadrivenangel 28d ago
Good to see their purchase of Replicate paying off!
[−] erans 27d ago
It's great to see more platforms like this popping up. It's good for the ecosystem. We need more hosting options that are clear, secure, and able to help people run as many models as possible.
[−] strimoza 28d ago
Interesting timing — I've been using Bunny CDN for video delivery and considering moving parts to Cloudflare. Anyone have experience comparing the two for media streaming specifically?
[−] hemangjoshi37a 28d ago
The interesting question isn't "can CF run agent inference" — it's what the routing layer needs to look like for multi-turn workflows. Having shipped agent systems to enterprise clients over the last year, the bottleneck is never raw tokens/sec. It's (a) state checkpointing between tool calls, (b) cold-start latency on embedding/rerank models, (c) rate-limit coordination across concurrent agent loops. Does CF expose per-session state, or is it still stateless-per-request? Without that, you end up building the interesting part yourself.
[−] Invictus0 28d ago
I've been using AI gateway for months already, is this any different or is it just moving out of beta?
[−] VikRubenfeld 28d ago
Is there something free like Codes or AntiGravity that can run open-source LLM models?
[−] messh 28d ago
So, is this similar to openrouter?
[−] pprotas 29d ago
Can't wait for the free tier!
[−] kol3x 28d ago
Would be nice to filter out "proxied" models in the Workers AI page.
[−] TheServitor 28d ago
That's so brilliant that it's already a thing called openrouter!
[−] Jack5500 29d ago
Sadly no mention of regions.
[−] 6thbit 29d ago
don't attach to a single AI provider when you can attach to Cloudflare as your single AI gateway provider!

Rant aside, they are greatly positioned network-wise to offer this service. I wonder about their pricing and potential markup on top of token usage?

I presume they won't let you "manage all your AI spend in one place" for free.

[−] kinnth 28d ago
OpenRouter works perfectly well for me, called from Cloudflare Workers. OpenRouter also has superior cascading and waterfalling when models are offline; not sure Cloudflare has that working from V1.

I love everything about OpenRouter. So kinda a fanboy.

[−] mbtrucks 29d ago
Can I set a hard cost limit? Otherwise I'm not interested; don't be like Google's mess of billing.
[−] ernsheong 29d ago
What is Cloudflare trying to be? Everything everywhere all at once?
[−] reconnecting 28d ago
Unified inference layer is a polite way to say: "proxy that knows every prompt and every response".
[−] mbtrucks 29d ago
Can I set a hard cost limit per day, with no drift? Otherwise I'm not interested.
[−] stult 29d ago
A few weeks ago, I ran into a bug with Cloudflare's DNS server not detecting when I updated the records with the registrar. The bug was 100% on their end, entirely unsolvable by me, yet they have made it literally impossible to contact them to file a bug report. Their standard user help workflow dead-ended by forcing me to talk to their absolutely useless AI help chatbot, which proceeded to regurgitate their FAQ (inaccurately, uselessly), then referred me to a phone number that was disconnected/not in service, then gave me an email address that auto-replied it was no longer in use, then just looped back to the FAQ. There was no way for me to even send them an email to let them know they have a major bug.

I immediately pulled all my sites off of Cloudflare and I will never use that godawful nightmare of a company for anything ever again. If they can't even host a generic help bot without screwing it up that badly, why would I ever use them for anything at all, never mind an AI platform?

[−] RITESH1985 28d ago
[flagged]
[−] ZihangZ 28d ago
[dead]
[−] kantaro 28d ago
[flagged]
[−] redoh 28d ago
[flagged]