Show HN: QVAC SDK, a universal JavaScript SDK for building local AI applications

by qvac 16 comments 30 points

[−] shaz0x 30d ago
Went through the SDK docs before asking. On RN/Expo specifically, does Fabric run inside a Bare worklet with IPC back to Hermes, or drop into a native module the way llama.rn does via JNI and llama.cpp? Perf and memory footprint would look very different between the two; curious which path you landed on.
[−] elchiapp 29d ago
Bare worklet with IPC, exactly. Let me know if there's anything I can help with.
[−] WillAdams 35d ago
Do you really mean/want to say:

>...and without permission on any device.

I would be much more interested in a tool which only allows AI to run within the boundaries which I choose and only when I grant my permission.

[−] elchiapp 35d ago
That line means that you don't need to create an account and get an API key from a provider (i.e. "asking for permission") to run inference. The main advantage is precisely that local AI runs on your terms, including how data is handled, and provably so, unlike cloud APIs where there's still an element of trust with the operator.

(Disclaimer: I work on QVAC)

[−] WillAdams 35d ago
OIC.

Should it be re-worded so as to make that unambiguous?

[−] sull 35d ago
thoughts on mesh-llm?
[−] mafintosh 35d ago
The modular philosophy of the full stack is to give you the building blocks for exactly this, too :)
[−] WillAdams 35d ago
Looking through the balance of the material, I can see that, but on first glance this seems an easy point to misread.
[−] angarrido 34d ago
Local inference is getting solved pretty quickly.

What still seems unsolved is how to safely use it on real private systems (large codebases, internal tools, etc) where you can’t risk leaking context even accidentally.

In our experience that constraint changes the problem much more than the choice of runtime or SDK.

[−] elchiapp 32d ago
Curious to hear which constraints aren't tackled by the current offering of local runtimes/SDKs for inference.
[−] angarrido 32d ago
[dead]
[−] moffers 35d ago
This is all very ambitious. I am not exactly sure where someone is supposed to start. With the connections to Pear and Tether I can see where the lines meet, but is the idea that someone takes this and builds…Skynet? AI Cryptocurrency schemes? Just a local LLM chat?
[−] elchiapp 35d ago
You can build anything! Check out our tutorials here: https://docs.qvac.tether.io/sdk/tutorials/

Although an LLM chat is the starting point for many, there are plenty of other use cases. We've had people build home-automation (domotics) systems to control their house using natural language, vision-based assistants for surveillance (e.g. send a notification describing what's happening instead of a classic "Movement detected"), etc., and everything remains on your device / in your network.

[−] elchiapp 35d ago
Hey folks, I'm part of the QVAC team. Happy to answer any questions!
[−] knocte 34d ago
Are there incentives for nodes to join the swarm (become a seeder)? If so, how exactly do they get paid in a decentralized way? Any URL with info about this?
[−] mafintosh 34d ago
It's through the Holepunch stack (I am the original creator). Incentives for sharing are social, like in BitTorrent: if I use a model with my friends and family, I can help rehost it to them.
[−] yuranich 34d ago
Hackathon when?
[−] plur9 34d ago
[dead]
[−] eddie-wang 35d ago
[dead]
[−] tuxnotfound 35d ago
[dead]