Ensu – Ente’s Local LLM app (ente.com)

by matthiaswh 178 comments 361 points

[−] VladVladikoff 52d ago
Maybe I’m missing it, but the page is really light on technical information. Is this a quantized/distilled version of a larger LLM? Which one? How many parameters? What quantization? What tokens/sec can I expect? What are the VRAM requirements? Etc.
[−] FusionX 52d ago
Given how the blog is presented, I assumed this was something novel that solved a unique problem, maybe a local multi-modal assistant for your device.

I installed it and it's none of that. It's a mere wrapper around small local LLMs. And it's not even multi-modal! Anyone could've one-shotted this in Claude in an hour (I'm not exaggerating).

What's the target audience here? Your average person doesn't care about the privacy value proposition (at least not at the cost of a severe drop in chat quality). And users who do want that control can already install LMStudio/llama.cpp (which are dead simple to set up).

The actual release product should've been what's described in the "What's next" section.

> Instead of general chat, we shape Ensu to have a more specialized interface, say like a single, never-ending note you keep writing on, while the LLM offers suggestions, critiques, reminders, context, alternatives, viewpoints, quotes. A second brain, if you will.

> A more utilitarian take, say like an Android Launcher, where the LLM is an implementation detail behind an existing interaction that people are already used to.

> Your agent, running on your phone. No setup, no management, no manual backups. An LLM that grows with you, remembers you, your choices, manages your tasks, and has long-term memory and personality.

[−] jubilanti 52d ago
There are dozens of local inference apps that basically wrap llama.cpp and someone else's GGUFs. The decentralized sync-history part seems new? Not much else. But the advertisement copy is so insufferably annoying in how it presents this wrapper as a product.

At least have a comparison chart against Ollama, LMStudio, LocalAI, Exo, Jan.AI, GPT4All, PocketPal, etc.

[−] xtracto 52d ago
I would love to see a "distributed LLM" system, where people can easily set up a node that performs a "piece" of a "mega model" inference or training run. Kind of like SETI@home, but for an open LLM (like https://github.com/evilsocket/cake, but massive).

Ideally if you "participate" in the network, you would get "credits" to use it proportionally to how much GPU power you have provided to the network. Or if you can't, then buy credits (payment would be distributed as credits to other participants).
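A toy sketch of that credit idea (all names and the 1:1 earning rate are made up for illustration): contributors earn credits in proportion to the GPU time they donate, and inference requests debit the requester's balance.

```python
# Toy sketch of the credit-ledger idea: earn credits proportional to
# GPU-seconds contributed, spend them to run inference on the network.
class CreditLedger:
    def __init__(self):
        self.balance = {}

    def contribute(self, node: str, gpu_seconds: float, rate: float = 1.0):
        # Earn credits proportionally to the compute donated.
        self.balance[node] = self.balance.get(node, 0.0) + gpu_seconds * rate

    def spend(self, node: str, cost: float) -> bool:
        # Inference requests debit the requester's balance.
        if self.balance.get(node, 0.0) < cost:
            return False
        self.balance[node] -= cost
        return True

ledger = CreditLedger()
ledger.contribute("alice", 120.0)    # Alice donates 120 GPU-seconds
assert ledger.spend("alice", 50.0)   # so she can spend 50 credits
assert not ledger.spend("bob", 1.0)  # Bob contributed nothing
```

The "buy credits" path would just be a `contribute` call funded by payment instead of compute; the hard parts (verifying that work was actually done, sharding the model) are what make this a research problem, not a weekend project.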

That way we could build huge LLMs that are really open and not owned by any single entity.

I would LOVE to participate in building that as well.

[−] moqster 52d ago
First heard about them (Ente) yesterday in a discussion about "which 2FA app are you using?". Immediately switched to https://ente.com/auth/ on Android and Linux desktop, and I'm very happy with it.

Going to give this a try...

[−] buster 51d ago
I don't understand all the hate about Ente, to be honest. Ente seems to be trying to solve big-tech lock-in with their apps. Personally, I'm a very happy Ente Photos user, so what's the problem with Ensu? It's available on desktop and mobile, and it's an app trying to give everyone a little more privacy and freedom, yet most comments are just hating on it. If you can vibe-code Ensu in a weekend, please do. Make a better clone if you want to, but don't hate on someone's work for stupid reasons.
[−] RandomGerm4n 52d ago
I like the idea of having a user-friendly app that lets you use LLMs locally. Tools like Ollama and LMStudio tend to put most people off because you have to decide for yourself which models to use and there are so many settings to configure. If the hardware you’re using is compatible, Ensu could be a drop-in replacement for casual ChatGPT users.

However, it’s a bit confusing that, for example, a larger model was downloaded to my smartphone than to my computer. It would probably make the most sense if the app simply sorted devices into five performance tiers, downloaded the appropriate model for whichever tier a device falls into, and told the user which tier that is. Over time, the model for each tier could periodically be replaced with a better one, or the tiers themselves redefined as hardware advances.
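The tiering idea is simple enough to sketch. Here's a minimal version, where the RAM thresholds and model names are all hypothetical placeholders, not anything Ensu actually does:

```python
# Toy sketch of device tiering: bucket a device by RAM and map each
# tier to a (hypothetical) default model the app would download.
TIERS = [
    (2,  1, "qwen2.5:0.5b"),   # <= 2 GB RAM -> tier 1
    (4,  2, "qwen2.5:1.5b"),
    (8,  3, "llama3.2:3b"),
    (16, 4, "qwen2.5:7b"),
]

def pick_model(ram_gb: float) -> tuple[int, str]:
    """Return (tier, model) for a device with the given amount of RAM."""
    for max_ram, tier, model in TIERS:
        if ram_gb <= max_ram:
            return tier, model
    return 5, "qwen2.5:14b"    # top tier for anything larger

assert pick_model(3) == (2, "qwen2.5:1.5b")   # a 3 GB phone lands in tier 2
assert pick_model(32) == (5, "qwen2.5:14b")   # a big desktop gets the top tier
```

In practice you'd also want to key on GPU/NPU capability, not just RAM, but the point is the user only ever sees "tier 3", not a model picker.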

[−] jasongill 52d ago
I love Ente Auth, but Ente (as a company/organization) does a somewhat poor job of calling out their non-Photos apps in their branding and on their website. If you click the "Download" button at the top of this page about their LLM chat app, it downloads... their photo-sharing application. If you click Sign Up, it takes you to a signup page whose browser title is "Ente Photos" but whose text says "Private backups for your memories" with a picture of a lock - is that the Ente Auth signup, or the Ente Photos signup?

A little bit of cleanup on their site to break out "Ente, our original photo-sharing app" from the rest of their apps would do wonders. I had to search around the announcement to find the download for this app, which felt about like trying to find the popular Ente Auth app on their website.

[−] koehr 52d ago
I just tried it. It downloaded Qwen3.5 2B on my phone, and it's pretty coherent in its sentences, but it's really annoying how many Ente products it mentions at every opportunity. Other than that, it's fast enough to talk to and definitely an easy way to run a model locally on your phone.
[−] netfl0 52d ago
Weird hype going on here in comments.
[−] cdrnsf 52d ago
I like Ente, but isn't their core product a photos application? Its offshoots like this and 2FA feel incongruous.
[−] lone-cloud 52d ago
Any half capable engineer can vibe code this in a week. Who cares?
[−] sneak 51d ago
Meanwhile, my Ente photos app crashes 20 times a day on iOS when using advanced functionality such as scrolling through my photos.

They also have a TOTP auth app?

If their photos app stopped crashing and they pursued basic feature parity between their iOS and desktop apps (IMO table stakes for a photo sync service) I'd have no issue recommending them. Instead, it seems like every so often they just branch off into a new direction, leaving the existing products unfinished. It's like Mozilla-level lack of focus.

[−] franze 52d ago
If you are into local LLMs, check out apfel:

https://github.com/Arthur-Ficial/apfel

Apple AI on the command line.

[−] maxloh 52d ago
There is also another app called Off Grid, which lets you run any model from Hugging Face (of course you need to choose one your phone can handle).

https://github.com/alichherawalla/off-grid-mobile-ai

[−] emehex 52d ago
There are literally 1000s of these types of apps. Why is this on the Front Page?
[−] codethief 51d ago
I've been a big fan of Ente and their work and am a paying customer but, man, this comment in a long-standing GitHub feature request is ringing truer every day:

> Ente is becoming like Proton: too many products and a lack of focus, leading to lower quality and not delivering what customers want

https://github.com/ente-io/ente/discussions/552#discussionco...

[−] getpokedagain 52d ago
As someone who saw this and was interested, but also skeptical that it's low-effort: are there other open projects for running small models locally on Android/iOS?

I've found https://github.com/alichherawalla/off-grid-mobile-ai but haven't tried anything in this space yet.

[−] talking_penguin 52d ago
How is this any different from Ollama plus Open Web UI?
[−] selfawareMammal 52d ago

> People called us crazy.

Absolutely no one called them crazy.

[−] dgb23 52d ago
The (hn) title is misleading (unlike the actual title): It's an LLM _App_ not an LLM.
[−] nathan_compton 52d ago
Please god stop letting LLMs write your copy. My brain just slides right over this slop. Perhaps you have a useful product but christ almighty I cannot countenance this boring machine generated text.
[−] qprofyeh 51d ago
For those who tried this out but want to uninstall and remove the downloaded model: because it's an iPad app, it's located here: /Users/username/Library/Containers/Ensu
[−] mkagenius 52d ago
Had used Cactus before - https://news.ycombinator.com/item?id=44524544

Then moved to PocketPal for local LLMs.

[−] treexs 51d ago
?

Hundreds of local LLM apps exist.

The "What's next" section acts novel, but all of it has been achieved or created in some form already: you can run a local LLM on a phone and connect it to an agent.

[−] FitchApps 52d ago
Have you tried WebLLM? Or this wrapper: CodexLocal.com. Basically, you get a rather simple but capable LLM right in your browser using WebLLM and the GPU.
[−] fouc 51d ago
This is a weird HN thread; there are so many accounts spontaneously posting in here, both fairly new accounts and even aged accounts, with a mix of low and high-ish karma. I spotted at least 5 suspicious accounts. There are 3 or 4 of them in https://news.ycombinator.com/item?id=47517096 alone.
[−] omdv 51d ago
The content itself, the unexpectedly high rating, and the weird vibe in the comments smell like bot manipulation. I hope the mods take a look.
[−] imadch 52d ago
What do you mean by AI on your device? Is it a local LLM? If yes, how many parameters: 4B or 8B? Device requirements aren't mentioned either.
[−] daikon899 52d ago
The "What's next" section is more interesting than what shipped. A general-purpose chat wrapper around a 1-4B model occupies a crowded space: PocketPal, Jan, LMStudio, and GPT4All all do similar things. But the ideas they gesture at (a persistent "second brain" note, an LLM-backed launcher, long-term memory that grows with you) are actually differentiated.
[−] vvilliamperez 52d ago
I just use open claw as a local memory management system. Not sure from TFA what's new here.
[−] razvan_maftei 52d ago
Looks like something spun up by Claude Code without thorough testing or design behind it sadly.
[−] pulkitsh1234 52d ago
I am surprised to see this on HN front page, there is no new information here, just an ad.
[−] socalgal2 52d ago
What's special about Ente?

How does it compare to Jan AI for example? or LM Studio? or ????

[−] sbassi 52d ago
For local LLM there are Ollama and LM Studio. How is this different?
[−] todotask2 51d ago
The model's training cutoff is Dec 2023; that's considered outdated.
[−] gverrilla 51d ago
We hate big tech, because we want to become one ourselves!
[−] tim-projects 52d ago
This app isn't very useful but it did get me thinking.

I have a phone in a drawer I could install Termux and Ollama on, expose it over Tailscale, and then I'd have an always-on LLM for super-light tasks.
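Rough sketch of that setup. Assumptions: the `ollama` package is available in your Termux repo (it may not be on every device; llama.cpp built from source is the fallback), Tailscale is already running on the phone, and the model name is just an example small enough for old-phone RAM.

```shell
# Inside Termux on the drawer phone:
pkg update && pkg install ollama
# Bind to all interfaces so other machines on the tailnet can reach it.
OLLAMA_HOST=0.0.0.0:11434 ollama serve &
ollama pull qwen2.5:0.5b   # example pick; anything ~1B or under
# Then, from any device on the same tailnet:
#   curl http://<phone-tailscale-ip>:11434/api/generate \
#     -d '{"model": "qwen2.5:0.5b", "prompt": "hello", "stream": false}'
```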

I do really long for a private chat bot, but I simply don't have access to the hardware required. Sadly, I think it's going to be years before we get there.

[−] BaudouinVH 52d ago
Installed it on a not-so-young laptop. It crashes immediately after launch. I blame the laptop.

If Ente is reading this: please add the requirements to make it run (how much RAM, etc.)

[−] tmanderson 52d ago
beware of data rackets
[−] post-it 52d ago

> This is not the beginning, nor is this the end. This is just a checkpoint.

Come onnnnnn. I would rather read a one-line "Check out our offline LLM" than a whole press release of slop.

This looks very neat. I'm not familiar with the nitty gritty of AI so I really don't understand how it can reply so quickly running on an iPhone 16. But I'm not even going to bother searching for details because I don't want to read slop.

[−] glitchc 52d ago
This sounds like an ad.
[−] pugchat 52d ago
[dead]
[−] eddie-wang 51d ago
[dead]
[−] aimemobe 49d ago
[flagged]
[−] JulianPembroke 51d ago
[dead]
[−] aplomb1026 51d ago
[dead]
[−] Pythius 52d ago
[flagged]
[−] prism56 51d ago
[dead]