I've done this kind of thing many times with Codex and SQLite, and it works very well. It's one prompt that looks something like this:
- inspect and understand the downloaded data in directory /path/..., then come up with an sqlite data model for doing detailed analytics and ingest everything into an sqlite db in data.sqlite, and document the model in model.md.
Then you can query the database ad hoc pretty easily with Codex prompts (and also generate PDF graphs as needed).
I typically use the highest reasoning level for the initial prompt; as I get deeper into the data, I continually improve the model, indexes, etc., and just have Codex handle any data migrations.
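To make that concrete, here's a minimal sketch of the ad-hoc querying step. The table and column names are hypothetical, since the actual model depends on whatever Codex infers from the data (the real schema would live in model.md):

```python
import sqlite3

# Open the database built during the ingest step.
conn = sqlite3.connect("data.sqlite")

# Hypothetical ad-hoc query; table/column names depend on the
# model Codex generated (documented in model.md).
rows = conn.execute(
    """
    SELECT strftime('%Y-%m', created_at) AS month,
           COUNT(*) AS events
    FROM events
    GROUP BY month
    ORDER BY month
    """
).fetchall()

for month, events in rows:
    print(month, events)

conn.close()
```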
The “Hacker News - Complete Archive” dataset on Hugging Face [1] recently popped up here: “The data is stored as monthly Parquet files sorted by item ID, making it straightforward to query with DuckDB, load with the datasets library, or process with any tool that reads Parquet.”
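A minimal sketch of what "query with DuckDB" looks like here; the glob pattern and the type column are assumptions about the download layout and the schema (modelled on the fields the HN API exposes):

```python
import duckdb

# Query the monthly Parquet files in place; no ingest step needed.
# The glob below assumes the files were downloaded into hacker-news/.
con = duckdb.connect()
counts = con.execute(
    """
    SELECT type, COUNT(*) AS n
    FROM read_parquet('hacker-news/*.parquet')
    GROUP BY type
    ORDER BY n DESC
    """
).fetchall()
print(counts)  # e.g. [('comment', ...), ('story', ...), ...]
```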
Out of curiosity, I tinkered with it using Claude to see trends and patterns (I did find a few embarrassing things about myself!).

[1] https://huggingface.co/datasets/open-index/hacker-news
I don't quite understand how Modolap differs from just asking AI to use any other OLAP engine. Both your website and the GitHub README just emphasise that it's idiosyncratic and your personal approach, without explaining what that is or why anyone should care.
Appreciate the feedback. I shall certainly revamp the README; it is rather stale.
> "how Modolap differs from just asking AI to use any other OLAP engine"
There are presently two components: the OLAP query engine and the remote infrastructure service. The service lets systems like Codex (and developers, too) manage datasets, keep queries under version control, and offload computation to dedicated machines. This is especially beneficial given the current trend of running agents inside micro-VMs.
In addition, it is designed with AI usage in mind. There is significant value in co-design. One could argue that models can use Polars or DuckDB just as well, and that there is no room for improvement, but I do not think this is true.
I don't get the value proposition either; your landing page is underdeveloped. Tracking the query history is trivial. Offloading computation could be done with Polars Cloud or MotherDuck. Can you expand on the "manage datasets" part?
It could be interesting to chart the quality of responses, the toxicity/health of conversations, sentiment over time, and the impact of the release of ChatGPT.
(Since AI can now answer many questions that might have been topics of conversation, people can use AI to participate, and people may be reluctant to participate if AI can data-mine everything and link it back to them, etc., similar to Stack Overflow.)
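A crude starting point for the ChatGPT question is simple volume bucketing around the release date (2022-11-30); sentiment or toxicity would need a classifier on top, but the grouping looks the same. The time and type columns are assumptions modelled on the HN API fields:

```python
import duckdb

# Monthly comment volume around the ChatGPT release (2022-11-30).
# Assumes 'time' is a Unix timestamp and 'type' distinguishes
# comments from stories, as in the HN API.
con = duckdb.connect()
monthly = con.execute(
    """
    SELECT strftime(to_timestamp(time), '%Y-%m') AS month,
           COUNT(*) AS comments
    FROM read_parquet('hacker-news/*.parquet')
    WHERE type = 'comment'
      AND to_timestamp(time) BETWEEN '2021-11-30' AND '2023-11-30'
    GROUP BY month
    ORDER BY month
    """
).fetchall()
for month, comments in monthly:
    print(month, comments)
```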
I'm kind of surprised that Postgres was quite that dominated by MongoDB back in the day. I remember the Mongo fever, but I always thought Postgres held a reasonable market share. I guess it was other SQL databases back then; MySQL was still viable. Am I reading that right?
It could be that Postgres was so popular that people didn't really discuss it.
Hyperbolic example: literally every human reading this consumes oxygen nearly every moment of the day, and yet no one talks about how great breathing is.
I worked on many projects that had wrongly used Mongo instead of an ordinary relational database, and in time they needed rework. It was just hyped in its day, like microservice architecture will be in a few years.
When searching for references to Go, what does it actually look for? "Go" is a relatively common word, and I hardly ever see anyone referring to it as "Golang".
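For what it's worth, the usual heuristic is a case-sensitive word-boundary match plus the common aliases, which still has false positives (the board game, the imperative "Go!") and misses lowercase mentions; a rough sketch:

```python
import re

# Case-sensitive "Go" on word boundaries, plus case-insensitive "golang".
# A rough heuristic only: it still matches the board game and the verb
# at sentence start, and misses lowercase "go" meaning the language.
GO_PATTERN = re.compile(r"\bGo\b|(?i:\bgolang\b)")

samples = ["Rewrote the service in Go", "golang generics are here",
           "go home", "Go (the board game)"]
for text in samples:
    print(text, "->", bool(GO_PATTERN.search(text)))
```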