Launch HN: Sitefire (YC W26) – Automating actions to improve AI visibility

by vincko 27 comments 36 points

[−] XCSme 53d ago
> Lite: For small brands wanting to get started with monitoring and content. $249/month

Is $249/month something most small brands/shops can afford? Many have only a few $k in total revenue.

[−] pdyc 56d ago
Do you use the same accounts? How do you make sure that ChatGPT/Gemini etc. don't personalize the queries when used with the same account? Also, responses change based on location and IP (residential IPs are treated differently).
[−] marzapower 46d ago
This is actually a fundamental limitation of prompt-monitoring approaches — personalization, location variance, account history all introduce noise that's hard to control.

One alternative is page-level structural analysis: instead of asking ChatGPT "do you cite this site?", you analyze the page directly for the signals that predict citation — source density, answer structure, fluency, statistics. No account needed, no IP variance, fully reproducible.

That's the approach I took with writeseo.vercel.app/geo-check — based on the Princeton KDD research (same paper vincko linked above). Different layer of the problem, but more stable as a diagnostic.
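For what it's worth, the page-level check described here can be sketched with a few crude heuristics. The regexes and signal names below are illustrative assumptions, not the actual writeseo.vercel.app scoring or the GEO paper's feature set:

```python
import re

def geo_signals(text: str) -> dict:
    """Crude, hypothetical proxies for citation-predicting signals:
    source density, statistics, and answer structure."""
    n_words = max(len(re.findall(r"\w+", text)), 1)
    # Source density: outbound links or bracketed citations, per 100 words.
    sources = len(re.findall(r"https?://|\[\d+\]", text))
    # Statistics: numbers followed by a percent sign or a magnitude word.
    stats = len(re.findall(r"\b\d+(?:\.\d+)?(?:\s*%|\s*(?:percent|million|billion)\b)", text))
    # Answer structure: headings and list items suggest extractable answers.
    structure = len(re.findall(r"(?m)^(?:#{1,3}\s|[-*]\s|\d+\.\s)", text))
    return {
        "source_density": round(100 * sources / n_words, 2),
        "stat_density": round(100 * stats / n_words, 2),
        "structure_blocks": structure,
    }
```

Unlike prompt monitoring, a static check like this returns the same answer on every run, which is the stability argument being made.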

[−] arunakt 56d ago
Awesome. How is this different from GEO?
[−] vincko 56d ago
It's not different from GEO. The actions we take all play into GEO.
[−] onecommit 57d ago
How do models deal with assessing the quality of content and its accuracy/veracity when recommending products currently? What do the providers do to avoid a situation where more content === more traffic? Would love to see links to relevant research on this, if you have them. Much success to you; appreciate your AI-slop risk awareness.
[−] vincko 57d ago
There is the preselection step, which depends on the fan-out queries the model comes up with and the content's performance across those queries on the search index.

After that, the content is actually assessed by the model. This paper tried different strategies to improve performance at this last step: https://arxiv.org/pdf/2311.09735. Adding statistics, sources, and original data are all strategies that we apply.

In classic SEO, creating more and more content leads to "cannibalization". Generally this hurts the performance of all overlapping content so much that it is not worth it.
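The preselection stage described above can be caricatured as lexical overlap between a document and the fan-out queries. All names here are hypothetical; real pipelines use a search index and learned rankers, not raw token overlap:

```python
def preselection_score(doc_tokens, fanout_queries):
    """Score a document by its average lexical overlap with the
    fan-out queries (each given as a set of tokens). Higher overlap
    means more likely to be pulled into the model's context for the
    later assessment stage."""
    if not fanout_queries:
        return 0.0
    overlaps = [len(doc_tokens & q) / max(len(q), 1) for q in fanout_queries]
    return sum(overlaps) / len(overlaps)
```

The GEO strategies mentioned (statistics, sources, original data) target the later model-assessment stage, not this retrieval score.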

[−] onecommit 56d ago
interesting - thanks!
[−] Gobhanu 57d ago
how do you track where users are coming from?
[−] vincko 57d ago
We currently simply integrate with your Google Analytics and filter by Source. This tends to be a lower bound, since the source isn't always set correctly: users coming from some of the native apps might be categorized as direct visitors.

There are other data sources we want to enable in the future like Cloudflare.
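The Source filter described above amounts to substring-matching referrer strings. The pattern list below is a guess for illustration, not Sitefire's actual list:

```python
# Hypothetical referrer substrings for AI assistants. As noted above,
# GA's session source undercounts native-app traffic, so this is a
# lower bound on AI-driven visits.
AI_SOURCE_PATTERNS = ("chatgpt", "chat.openai", "perplexity", "gemini", "copilot")

def is_ai_referral(source: str) -> bool:
    """Classify a Google Analytics session-source string as AI-driven."""
    s = source.lower()
    return any(p in s for p in AI_SOURCE_PATTERNS)
```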

[−] yunyu 57d ago
What do you guys do differently than Profound or Airops?
[−] vincko 57d ago
That's a super valid question; we get it a lot. There is a lot of overlap.

In our view Profound and Airops are aimed at existing marketing teams. Our goal is to be more hands-off, so you don't need a team. With many of our clients we act more like an agency, communicating via Slack and automating step by step. That's the experience we want to create. We aren't there yet though.

[−] debarshri 57d ago
Add peec to that list.
[−] ceejayoz 57d ago
Ugh. The worst of SEO, but a bunch more of it? Noooooo.
[−] a13n 57d ago
Please don't override the browser's default scroll behavior. It's so jarring and basically never a good idea.
[−] vahar 57d ago
Regarding the topic of ambient agents, what's the impact of your product? It's hard for me to imagine, but I guess it must be a necessity for getting discovered at all once we have ambient agents, right? Nice to see a player from Europe on the market too!