From zero to a RAG system: successes and failures (en.andros.dev)

by andros 103 comments 322 points

[−] brianykim 50d ago
Good company-ready RAG benefits a lot from some basic pre-processing/labeling of the data instead of solely dumping unstructured data into a vector database and calling it a day. Different heuristics and different schemas of embedded data go a long way toward ensuring quality and flexibility of querying.

Then you can do ReAG, which lets you reason intelligently on top of the top K.
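
For a rough idea of what reasoning on top of the top K can look like, a minimal sketch: pull the top K from Chroma, then have an LLM keep only the chunks it judges relevant before answering. The model name and prompts are placeholders, not a recommendation.

    # Minimal sketch: top-K retrieval, then an LLM relevance pass (ReAG-style).
    import chromadb
    from openai import OpenAI

    collection = chromadb.PersistentClient(path="./index").get_or_create_collection("docs")
    llm = OpenAI()

    def ask(prompt: str) -> str:
        resp = llm.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}])
        return resp.choices[0].message.content or ""

    def answer(question: str, k: int = 20) -> str:
        chunks = collection.query(query_texts=[question], n_results=k)["documents"][0]
        # Reasoning pass (one call per chunk here; batch it in real life):
        kept = [c for c in chunks if ask(
            f"Question: {question}\n\nChunk: {c}\n\n"
            "Reply YES if this chunk helps answer the question, else NO."
        ).strip().upper().startswith("YES")]
        return ask("Using only this context:\n\n" + "\n\n".join(kept) +
                   f"\n\nAnswer the question: {question}")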

And things like memory knowledge graph services can help reduce your search space and provide extra context that gets updated over time, beyond just treating static docs as sources of truth. You can give the system more context on how to interpret older docs vs. newer docs, and allow users (based on correctness or not) to help audit what is embedded in your RAG systems.

I appreciate the thorough write-up, but doing RAG systems seriously requires much more than just embeddings and a basic ChromaDB setup.

Happy to share any thoughts here or on a call if anyone wants to chat.

[−] leflob 50d ago
I agree. I attempted a similar project a year ago and the retrieval part is so critical. To work even half decent you need a serious strategy for metadata, chunking, etc. E.g. how do you deal with time series data? I'm not looking for just any quarterly numbers but the ones from Q2 2025, or the research report from four weeks ago...

And how do you deal with images? We had heaps of company knowledge in pptx, which you can convert to text, but what about the pictures in the presentations? Our analyst presentations sometimes consist mostly of charts and visuals; how are they embedded?

Also, imo, 90% of the time companies don't need a RAG system but a good search/retrieval system.
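
To make the time-series point concrete, a minimal sketch of metadata-scoped retrieval with Chroma (field names and values invented for illustration):

    # Sketch: attach structured metadata at ingestion, then scope retrieval
    # to it, so "Q2 2025 numbers" doesn't match Q2 2019.
    import chromadb

    col = chromadb.PersistentClient(path="./index").get_or_create_collection("reports")

    col.add(
        ids=["rep-001-chunk-03"],
        documents=["Revenue grew 12% quarter over quarter..."],
        metadatas=[{"doc_type": "quarterly_report",
                    "quarter": "2025-Q2",
                    "published": 20250715}],  # ints allow range filters
    )

    # Only Q2 2025 quarterly numbers:
    hits = col.query(
        query_texts=["quarterly revenue numbers"],
        n_results=5,
        where={"$and": [{"doc_type": {"$eq": "quarterly_report"}},
                        {"quarter": {"$eq": "2025-Q2"}}]},
    )

    # "The research report from ~4 weeks ago" via a date range:
    hits = col.query(
        query_texts=["research report"],
        n_results=5,
        where={"published": {"$gte": 20250618}},
    )
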
[−] minikomi 50d ago
Yep. Semantically distinct and meaningful chunks win every time over any kind of windowing or slicing and dicing.

Unfortunately, many people are looking for a fire-and-forget solution over an existing rat's nest of documentation debt.

[−] gverrilla 50d ago

> Happy to share any thoughts here

Please do.

[−] maxperience 51d ago
This article is interesting because of its scale, but it does not touch on how to properly use RAG best practices. We wrote up this blog post on how to actually build a smart enterprise AI RAG based on the latest research, if it's interesting to anyone: https://bytevagabond.com/post/how-to-build-enterprise-ai-rag...

It's based on different chunking strategies that scale cheaply, plus advanced retrieval.

[−] mettamage 50d ago
[dead]
[−] JKCalhoun 51d ago
And some have been saying that RAGs are obsolete—that the context window of a modern LLM is adequate (preferable?). The example I recently read was that the contexts are large enough for the entire "The Lord of the Rings" books.

That may be, but then there's an entire law library, the entirety of Wikipedia (and the example in this article of 451 GB). Surely those are at least an order of magnitude larger than Tolkien's prose and might still benefit from a RAG.

[−] menaerus 51d ago
Whether the model responds to you with correct information is a function of giving it proper context, too.

That hasn't changed, nor do I think it will, even with models having very large context windows (e.g. Gemini's 2M). It has been observed that a large context alone is not enough, and that it is better to give the model sufficient, high-quality information rather than filling the window with virtually everything. The latter is also impossible, and it doesn't scale with long and complicated tasks, where reaching the context limit is inevitable. In that case you need a RAG that is smart enough to extract the sufficient information from previous answers/context and make it part of the new context, which in turn makes it possible for the model to keep its performance at a satisfactory level.

[−] alansaber 51d ago
RAG is nowhere near obsolete. Model performance on enormous sequences degrades hugely, since such sequences are not well represented in training and non-quadratic attention approximations are not amazing.
[−] Nihilartikel 51d ago
I'm not super deep in LLM development, but with RAM being a material bottleneck, and from what I've read about DeepSeek's results offloading factual knowledge into 'engrams', I think the near future will move toward the dense core of LLMs focusing much more on a distillation of universal reasoning and logic, while factual knowledge is pushed out into slower storage. IIRC Nvidia's Nemotron Cascade is taking MoE even further in that direction too.

I don't need a coding model to be able to give me an analysis of the Declaration of Independence in Urdu from 'memory', and the price in RAM for being able to do that, impressive as it is, is an inefficiency.

[−] axus 51d ago
Were he still corporeal, L. Ron would be all over this AI stuff.
[−] dgb23 51d ago
Also the thing with context is that you want to keep it focused on the task at hand.

For example, there's evidence that typical use of AGENTS.md actually doesn't improve outcomes but just slows LLMs down and confuses them.

In my personal testing and exploration I found that small (local) LLMs perform drastically better, both in accuracy and speed, with heavily pruned and focused context.

Just because you can fill in more context doesn't mean that you should.

The worry I have is that common usage will lead to LLMs being trained and fine-tuned to accommodate ways of using them that don't make a lot of sense (stuffing context, wasting tokens, etc.), just because that's how most people use them.

[−] btown 51d ago
I do think that what we think of as RAG will change!

When any given document can fit into context, and when we can generate highly mission-specific summarization and retrieval engines (for which large amounts of production data can be held in context as they are being implemented)... is the way we index and retrieve still going to be based on naive chunking, and off-the-shelf embedding models?

For instance: a system that reads every article, continuously updates a list of potential keywords with each document (and the code assumptions that led to those documents being generated), then re-runs and tags each article with those keywords and weights, and does the same to explode a query into relevant keywords with weights. This is still RAG, but arguably a version where dimensionality is more closely tied to your data.

(Such a system, for instance, might directly intuit the difference in vector space between "pet-friendly" and "pets considered," or between legal procedures that are treated differently in different jurisdictions. Naive RAG can throw dimensions at this, and your large-context post-processing may just be able to read all the candidates for relevance... but is this optimal?)
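
A toy sketch of that direction (everything here is invented for illustration; the LLM tagging pass itself is elided):

    # Toy sketch: LLM-maintained keyword vocabulary, documents tagged with
    # weighted keywords, queries exploded the same way, weighted overlap score.
    from collections import defaultdict

    # doc_tags[doc_id] = {keyword: weight}, produced by an LLM tagging pass.
    doc_tags: dict[str, dict[str, float]] = {
        "article-1": {"pet-friendly": 0.9, "lease-terms": 0.4},
        "article-2": {"pets-considered": 0.8, "lease-terms": 0.5},
    }

    # Inverted index: keyword -> [(doc_id, weight)]
    index: dict[str, list[tuple[str, float]]] = defaultdict(list)
    for doc_id, tags in doc_tags.items():
        for kw, w in tags.items():
            index[kw].append((doc_id, w))

    def retrieve(query_tags: dict[str, float], k: int = 10):
        """Score docs by weighted keyword overlap with the exploded query."""
        scores: dict[str, float] = defaultdict(float)
        for kw, qw in query_tags.items():
            for doc_id, dw in index.get(kw, []):
                scores[doc_id] += qw * dw
        return sorted(scores.items(), key=lambda x: -x[1])[:k]

    # An LLM would explode the user's query into weighted keywords, e.g.:
    print(retrieve({"pet-friendly": 1.0, "pets-considered": 0.3}))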

I'm very curious whether benchmarks have been done on this kind of approach.

[−] whakim 51d ago
For technical domains, stuffing the context full of related-and-irrelevant or possibly-conflicting information will lead to poor results. The examples of long-context retrieval like finding a fact in a book really aren't representative of the types of context you'd be working with in a RAG scenario. In a lot of cases the problem is information organization, not retrieval, e.g. "What is the most authoritative type of source for this information?" or "How do these 100 documents about X relate to each other?"
[−] joefourier 51d ago
Some previous techniques for RAG, like directly using a user message’s embedding to do a vector search and stuffing the results in the prompt, are probably obsolete. Newer models work much better if you use tool calls and let them write their own search queries (on an internal database, and perhaps with multiple rounds), and some people consider that “agentic AI” as opposed to RAG. It’s still augmenting generation with retrieved information, just in a more sophisticated way.
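
A bare-bones sketch of that loop, assuming an OpenAI-style tools API and a stand-in search_db() over your internal index:

    # Sketch: the model writes its own search queries via a tool, possibly
    # over several rounds. search_db() is a stand-in for your index.
    import json
    from openai import OpenAI

    client = OpenAI()

    def search_db(query: str) -> str:
        # Stand-in: query your internal index (BM25, vectors, whatever).
        return "top passages for: " + query

    tools = [{"type": "function", "function": {
        "name": "search_db",
        "description": "Search the internal document database.",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}},
                       "required": ["query"]}}}]

    messages = [{"role": "user", "content": "How do we rotate API keys?"}]
    while True:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=messages, tools=tools)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            print(msg.content)  # final answer
            break
        messages.append(msg)
        for call in msg.tool_calls:
            query = json.loads(call.function.arguments)["query"]
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": search_db(query)})
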
[−] esafak 51d ago
How can it be obsolete? Maybe if you only have toy data you picked to write your blog post. Companies have gigabytes, petabytes of data to draw from.
[−] magospietato 50d ago
It's not that the context window is adequate, but rather that an agentic LLM can search the source of truth using appropriate tools (SQL, term search, etc.).

RAG made sense when the semantic search was based on human input and happened as a workflow step before populating context. Now it happens inside the agentic loop, and the LLM already implicitly has the semantics of the user input.

[−] jgalt212 51d ago

> some have been saying that RAGs are obsolete

I suspect the people saying that have not been transparent with their incentives.

[−] gopalv 51d ago

> Surely those are at least an order of magnitude larger than Tolkien's prose and might still benefit from a RAG.

At some point, this is a distributed system of agents.

Once you go from 1 to 3 agents (one router and two memory agents), it slowly becomes a performance and cost decision rather than a recall problem.

[−] _the_inflator 51d ago
I have two surprises for you:

1. Don't believe the pundits of RAG. They never implemented one.

I did, many times, and boy, are they hard. There are so many options that decide between utterly crappy results and fantastic scores on the accuracy scale, with a perfect 100% on facts.

In short: RAG is how you fill the context window. But then what?

2. How does a super-large context window solve your problem? Context windows ain't the problem; accurate matching of requirements is. What does your inquiry expect to solve? Greatest context window ever, but what then? No prompt engineering is coming to save you if you don't know what you want.

RAG is, in very simple terms, a search engine. The context window was never the problem. Never. Filling the context window, i.e. finding the relevant information, is one problem, but also only part of the solution.

What if your inquiry needs a combination of multiple sources to make sense? There is never a clean 1:1 matching of information.

"How many cars from 1980 to 1985 and 1990 to 1997 had between 100 and 180PS without Diesel in the color blue that were approved for USA and Germany from Mercedes but only the E unit?"

Have fun, this is a simple request.
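
Against a hypothetical structured inventory, that request is one SQL query (schema invented for illustration), which no amount of chunk similarity will reliably assemble:

    # The same request against a made-up structured table - trivial for SQL,
    # not something embedding search over prose will reliably get right.
    import sqlite3

    conn = sqlite3.connect("cars.db")  # hypothetical database
    count = conn.execute("""
        SELECT COUNT(*) FROM cars
        WHERE manufacturer = 'Mercedes'
          AND series = 'E'
          AND (model_year BETWEEN 1980 AND 1985
               OR model_year BETWEEN 1990 AND 1997)
          AND horsepower BETWEEN 100 AND 180
          AND fuel_type <> 'Diesel'
          AND color = 'blue'
          AND approved_us = 1
          AND approved_de = 1
    """).fetchone()[0]
    print(count)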

[−] mentos 51d ago
I assume it's not possible to get the same results by fine-tuning a model on the documents instead?
[−] pussyjuice 50d ago

> The example I recently read was that the contexts are large enough for the entire "The Lord of the Rings" books.

Not really, though. Not in practice at least, e.g. code writing.

Paste a 200-line React component into your favorite LLM, ask it to fix/add/change something, and it will do it perfectly.

Paste a 2000-line one though, and it starts omitting, making mistakes and assumptions, re-writing what it already has, and so on.

So what's going on? It's supposed to be able to hold thousands of lines in context, but in practice it's more like 200.

What happens is that accuracy and agency drop significantly as you need to span larger and larger context windows.

And it's not that it's most accurate when the window is smallest, either - there is a sweet spot.

Outside that sweet spot, you will get "unacceptable responses" - slop you can't use.

That's what happens when you paste the 2000-line React component, for example. You get a response you can't quite use. Yet the 200-line one is typically perfect.

What would make the 2000-line one perfect nearly every time?

We need a way to increase that "accurate window size" - let's call it "working memory" - so that we can generate more code, more writing, more pixels at acceptable levels of quality. You'd also have enough language space for agents to operate and collaborate sans the amnesia they have today.

RAG is basically the interim workaround for all this, because you can put everything in a vector DB and search for what you need in the context when you need it.

So RAG is a great solution for today's problems. Say you have a bunch of Python code files written in a certain style, and the main use case of your LLM is writing Python code in specified ways. With this setup you can probably deliver "better Python code" than your competitor because of RAG - because you have this deterministic supplement to your LLM's outputs, letting you do research and augment the output in predetermined ways every time it responds to a prompt.

But eventually, if I don't have to upload "The Lord of the Rings" documents and vector-search to find the relevant areas, if I can just paste the entire text into the input and have it generate the answer considering all of it, not just one little area, that would presumably be a better-quality response.

[−] charcircuit 50d ago
It's nonsense, as all frontier models are integrated with retrieval engines hooked up to various search engines, or their own.
[−] hrmtst93837 50d ago
[flagged]
[−] shepherdjerred 50d ago
Is there an 'SQLite equivalent' for RAG? E.g. something I could give Claude without a backend and say: use command X to add a document, command Y to search, all in a flat file?
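
Something like this, maybe? A rough sketch using only Python's stdlib sqlite3 and FTS5 (keyword search, not embeddings, but zero backend and one flat file; the command name is made up):

    # Rough sketch: a flat-file "RAG-lite" using stdlib sqlite3 + FTS5.
    import sqlite3, sys

    db = sqlite3.connect("docs.db")  # the single flat file
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(name, body)")

    if sys.argv[1] == "add":                      # raglite add <name> <file>
        body = open(sys.argv[3]).read()
        db.execute("INSERT INTO docs VALUES (?, ?)", (sys.argv[2], body))
        db.commit()
    elif sys.argv[1] == "search":                 # raglite search <query>
        for name, snip in db.execute(
                "SELECT name, snippet(docs, 1, '[', ']', '...', 12) "
                "FROM docs WHERE docs MATCH ? ORDER BY rank", (sys.argv[2],)):
            print(name, "->", snip)
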
[−] mettamage 51d ago
51 visitors in real-time.

I love those site features!

A submission from a few days ago had something similar.

I love it when a website gives a nod to the old web :)

[−] abd7894 51d ago
What ended up being the main bottleneck in your pipeline—embedding throughput, cost, or something else? Did you explore parallelizing vectorization (e.g., multiple workers) or did that not help much in practice?
[−] whakim 51d ago
I'd argue the author missed a trick here by using a fancy embedding model without any re-ranking. One of the benefits of a re-ranker (or even a series of re-rankers!) is that you can embed your documents using a really small and cheap model (this also often means smaller embeddings).
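
A minimal two-stage sketch with sentence-transformers (model names are common defaults, not a recommendation; brute-force scoring for brevity):

    # Sketch: cheap bi-encoder recall, then a cross-encoder rerank.
    from sentence_transformers import SentenceTransformer, CrossEncoder

    embedder = SentenceTransformer("all-MiniLM-L6-v2")               # small + cheap
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # slower, sharper

    docs = ["...your chunks..."]
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def search(query: str, recall_k: int = 100, final_k: int = 5):
        q = embedder.encode([query], normalize_embeddings=True)[0]
        # Stage 1: overfetch with the cheap embeddings (cosine via dot product,
        # since everything is normalized).
        candidates = sorted(range(len(docs)),
                            key=lambda i: -float(q @ doc_vecs[i]))[:recall_k]
        # Stage 2: the cross-encoder scores each (query, doc) pair and reorders.
        scores = reranker.predict([(query, docs[i]) for i in candidates])
        ranked = sorted(zip(candidates, scores), key=lambda x: -x[1])
        return [docs[i] for i, _ in ranked[:final_k]]
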
[−] trgn 51d ago
Odd to me that Elasticsearch isn't finding a second wind in these new ecosystems. It basically is that now: a RAG engine with model integration.
[−] dprkh 50d ago
Why did you opt for semantic search, and not plain old full text search? I built an "AI Agent for a Commerce Website" as a take-home exercise yesterday, and I chose to simply give the model a tool that does a full text search over products, powered by MiniSearch, and I think it works reasonably well. I believe this is also what Claude Code does.

https://github.com/dprkh/fufus/

[−] pussyjuice 50d ago
After a couple of years of multi-modal LLMs proving out products, I now consider RAG to be essentially "AI Lite", or just AI-inspired vector search.

It isn't really "AI" in the way ongoing LLM conversations are. The context is effectively controlled by deterministic information, and as LLMs continue to improve through various context-related techniques like re-prompting, running multiple models, etc., that deterministic "re-basing" of context will stifle the output.

So I say over time it will be treated as less and less "AI" and more "AI adjacent".

The significance is that right now RAG is largely considered an "AI pipeline strategy" in its own right, compared to others that involve pure context engineering.

But when the context size of LLMs grows much larger (with integrity) - when one can, say, accurately hold thousands and thousands of lines of code in context without having to use RAG to search and find - it will be doing a lot more for us. We will get the agentic automation they are promising and not delivering (due to this current limitation).

[−] KPGv2 51d ago
This article came just in the nick of time. I'm in fandoms that lean heavily into fanfiction, and there's a LOT out there on Ao3. Ao3 has the worst search (and you can't even search your account's history!), so I've been wanting to create something like this as a tool for the fandom, where we can query "what was the fic about XYZ where ABC happened?" and get hopefully helpful responses. I'm very tired of not being able to do this, and it would be a fun learning experience.

I've already got the data mostly structured because I did some research on the fandom last year, charting trends and such, so I don't even need to massage the data. I've got authors, dates, chapters, reader comments, and full text already in a local SQLite db.

[−] overtaxed 50d ago
Reading this blog post scared me a bit. The use case I proposed was building a "simple" RAG chatbot over some docs (~50 Confluence docs, and somewhat growing) on Elasticsearch and another process that my team handles. I was just planning on using a stack like Streamlit, text-embedding-3-small, and FAISS for the vector store, driven by a Python script.
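
Roughly the shape I had in mind - just a sketch, names and sizes are placeholders:

    # Planned stack, sketched: OpenAI embeddings + FAISS in-process.
    import faiss
    import numpy as np
    from openai import OpenAI

    client = OpenAI()
    DIM = 1536  # text-embedding-3-small output size

    def embed(texts: list[str]) -> np.ndarray:
        resp = client.embeddings.create(model="text-embedding-3-small",
                                        input=texts)
        return np.array([d.embedding for d in resp.data], dtype="float32")

    chunks = ["...chunked Confluence pages..."]
    index = faiss.IndexFlatIP(DIM)       # inner product; normalize for cosine
    vecs = embed(chunks)
    faiss.normalize_L2(vecs)
    index.add(vecs)

    def retrieve(question: str, k: int = 5) -> list[str]:
        q = embed([question])
        faiss.normalize_L2(q)
        _, ids = index.search(q, k)
        return [chunks[i] for i in ids[0]]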

It didn't seem too expensive or too hard based on the handful of queries my team would be using it for, and it was a "low-hanging fruit" pain point for my team that I thought could be improved by a RAG chatbot. That, on top of the fact that Atlassian Rovo did a poor job of staying away from external sources when we had the answer in our existing internal docs.

Am I still on the right path?

[−] civeng 51d ago
Great write-up, thank you! I'm contemplating a similar RAG architecture for my engineering firm, but we're dealing with roughly 20x the data volume (estimating around 9TB of project files, specs, and PDFs). I've been reading about Google's new STATIC framework (sparse-matrix constrained decoding) and am really curious about the shift toward generative retrieval, which promises massive speedups well beyond this approach. For those who have scaled RAG into the multi-terabyte range: is it actually worth exploring generative retrieval approaches like STATIC to bypass standard dense vector search, or is a traditional sharded vector DB (Milvus, Pinecone, etc.) still the most practical path at this scale?

I would guess the ingestion pain is still the same.

This new world is astounding.

[−] Horatius77 53d ago
Great writeup but ... pretty sure ChromaDB is open source and not "Google's database"?
[−] sota_pop 50d ago
Nice writeup. I'm curious why you went with ChromaDB and not pgvector. I haven't built a RAG system myself, but I've always understood the initial doc parsing to be a major challenge on its own, so kudos there!

Additionally, I also thought it was customary to store a pointer to the source in the same row as the vector (i.e. vector + doc path + page#/paragraph/etc.), OR to just store the original text chunk (though based on your disk requirements it doesn't sound like that would have been feasible).
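
i.e., something like this in pgvector terms (table and dimension made up for illustration):

    # What I mean by "pointer in the same row", sketched with pgvector.
    import psycopg

    conn = psycopg.connect("dbname=rag")  # assumes CREATE EXTENSION vector
    conn.execute("""
        CREATE TABLE IF NOT EXISTS chunks (
            id        bigserial PRIMARY KEY,
            doc_path  text NOT NULL,              -- pointer back to the source
            page      int,
            chunk     text,                       -- or drop this if disk is tight
            embedding vector(768)
        )""")

    # Nearest neighbors by cosine distance, source pointer riding along:
    qvec = "[" + ",".join(["0.1"] * 768) + "]"    # placeholder query embedding
    rows = conn.execute(
        "SELECT doc_path, page, chunk FROM chunks "
        "ORDER BY embedding <=> %s::vector LIMIT 5", (qvec,)).fetchall()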

Glad you’re having good results! Maybe you’ve inspired me to finally try out a similar setup myself!

[−] smrtinsert 51d ago
What would it look like to regularly react to source data changes? Seems like a big missing piece. Event-based? Regular cadence? Curious what people choose. Great post though.
[−] lucfranken 51d ago
Cool work! I'd be so interested in what would happen if you put the data and the plan/features you wanted into a Claude Code instance and let it go. You did careful thinking, but those models now also go really far and deep. Would be really interested in seeing what it comes up with. For that kind of data, getting something like a Mac mini or whatever (no, not with OpenClaw), it would be damn interesting to see how fast and far you can go.
[−] ozim 49d ago
So 95% of the post is "regular software engineering": yes, you cannot just process 1TB of data in one go, you need to split it up; even then you might have a limited processing budget, so think about how to fit in it; make checkpoints; and make sure you have logs.

Not dismissing the value of the blog post, just underlining this for "non-engineers".

[−] alansaber 51d ago
I think that's the first time I've seen someone write about checkpointing; definitely worth doing for similar projects.
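
The basic pattern is tiny - something like this sketch (process() and the file listing are stand-ins):

    # Checkpointing: record finished work in a file, skip it on restart,
    # so a crash mid-run costs you (almost) nothing.
    import json, os

    CHECKPOINT = "progress.json"
    done = set(json.load(open(CHECKPOINT))) if os.path.exists(CHECKPOINT) else set()

    all_files = ["a.pdf", "b.pdf"]           # stand-in: your corpus listing
    def process(path):                       # stand-in: chunk + embed + upsert
        pass

    for path in all_files:
        if path in done:
            continue                         # finished in an earlier run
        process(path)
        done.add(path)
        with open(CHECKPOINT, "w") as f:     # write-then-rename would be safer;
            json.dump(sorted(done), f)       # kept simple here
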
[−] aledevv 51d ago
I made something similar in my project. The most difficult task has been choosing the right approach to chunking long documents. I used both structural and semantic chunking approaches. The semantic one helped store better vectors in the vector DB. I used Qdrant and the OpenAI embedding model.
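
The semantic variant I mean looks roughly like this (threshold is arbitrary and the model name is just a common default):

    # Sketch of semantic chunking: embed sentences, start a new chunk
    # wherever similarity between neighbors drops below a threshold.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def semantic_chunks(sentences: list[str], threshold: float = 0.5):
        vecs = model.encode(sentences, normalize_embeddings=True)
        chunks, current = [], [sentences[0]]
        for i in range(1, len(sentences)):
            sim = float(vecs[i - 1] @ vecs[i])   # cosine (vectors normalized)
            if sim < threshold:                  # topic shift -> new chunk
                chunks.append(" ".join(current))
                current = []
            current.append(sentences[i])
        chunks.append(" ".join(current))
        return chunks
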
[−] supermooka 51d ago
Thanks for an interesting read! Are you monitoring usage, and what kind of user feedback have you received? I'm always curious whether these projects end up being used, because even with perfect tech, if the data is low quality, nobody is going to bother.
[−] throw831 50d ago
Can anyone suggest a RAG pipeline that is production ready?

Also I wonder if it's now better to use Claude Agent SDK instead of RAG. If anyone has tried this, I would be interested in hearing more.

[−] fb03 50d ago
Quick Q: OP said he used Llama 3.2:3b, which is a pretty old model. What would be a good modern model to substitute for it? Qwen3.5:4b or something?
[−] redwood 51d ago
Cool to see Nomic embeddings mentioned, though I'm surprised you didn't land on Voyage.

Did you look at Turbopuffer btw?

[−] brcmthrowaway 51d ago
What was the system prompt?
[−] gkanellopoulos 41d ago
[dead]
[−] maxothex 51d ago
[dead]
[−] philbitt 51d ago
[dead]
[−] skillflow_ai 51d ago
[dead]
[−] Yanko_11 51d ago
[dead]
[−] leontloveless 51d ago
[dead]
[−] felixagentai 50d ago
[flagged]
[−] BrianFHearn 51d ago
[dead]
[−] aplomb1026 50d ago
[dead]
[−] chattermate 51d ago
[dead]