SQLite is insanely robust. I have developed websites serving hundreds of thousands of daily users where the storage layer is handled entirely by SQLite, via an abstraction layer I built that exposes a handy key-value interface so I don't have to craft queries when I just need data storage/retrieval: https://github.com/aaviator42/StorX
We use SQLite IMMEDIATE transactions, which lock the file for writes for a few milliseconds while committing data. This is not a problem in practice until you have more than a few dozen concurrent writers. StorX configures a default busy timeout of 1.5s, but you can configure it as needed. You can also get a lot more out of it by being smart about how you spread your data over DB files (eg: one file per user instead of one file for multiple/all users), and by considering when you call openFile() and closeFile() (eg: keep write transactions short, don't leave a file handle open while running long calculations).
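This isn't StorX's actual code, but a minimal sketch of the underlying SQLite calls it describes: a busy timeout plus a short BEGIN IMMEDIATE transaction, using Python's stdlib sqlite3 module.

```python
import sqlite3
import os
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")
# isolation_level=None puts sqlite3 in autocommit so we control transactions.
conn = sqlite3.connect(db_path, isolation_level=None)
conn.execute("PRAGMA busy_timeout = 1500")   # wait up to 1.5s for a lock
conn.execute("PRAGMA journal_mode = WAL")    # readers don't block the writer
conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

# BEGIN IMMEDIATE takes the write lock up front instead of mid-transaction,
# so contention surfaces (or waits out the busy timeout) right away.
conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT OR REPLACE INTO kv VALUES ('user:1', 'alice')")
conn.execute("COMMIT")  # the write lock was held only for these statements

print(conn.execute("SELECT v FROM kv WHERE k = 'user:1'").fetchone()[0])
```

The "keep write transactions short" advice maps directly onto the span between BEGIN IMMEDIATE and COMMIT here: nothing slow should run inside it.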
STRICT tables are something I appreciate very much, even though I cannot recall running into a problem that would have been prevented by their presence in the before-time. But it's good to have all the same.
I don't think I've ever done much with SQLite's JSON functions, but I have on one or two occasions used a constraint to enforce a TEXT column contains valid JSON, which would have been very tedious to do otherwise.
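A sketch of that kind of constraint, using SQLite's built-in json_valid() function (part of the JSON1 functions, compiled in by default in modern builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CHECK constraint rejects any write that isn't well-formed JSON.
conn.execute("CREATE TABLE docs (body TEXT CHECK (json_valid(body)))")

conn.execute("""INSERT INTO docs VALUES ('{"ok": true}')""")  # passes
try:
    conn.execute("INSERT INTO docs VALUES ('not json')")      # constraint fires
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Doing the same validation in application code would mean remembering to call a validator at every write site; the constraint catches all of them.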
> even though I cannot recall running into a problem that would have been prevented by its presence in the before-time
I very, very much did. I was using a Python package that used a lot of NumPy internally, and sometimes its return values would be Python integers, and sometimes they'd be NumPy integers.
The Python integers would get written to SQLite as SQLite integers. The NumPy integers would get written to SQLite as binary blobs. That prevented simple things like comparing values for equality.
Setting to STRICT caused an error whenever my code tried to insert a binary blob into an integer column, so I knew where in the code I needed to explicitly convert the values to Python integers when necessary.
On the STRICT mode, I've asked this elsewhere and never gotten an answer: does anyone have a loose-typing example application where SQLite's non-strict, different-type-allowed-for-each-row has been a big benefit? I love the simplicity of SQLite's small number of column types, but the any-type-allowed-anywhere design always seemed a little strange.
Flexible typing works really well with JSON, which is also flexibly typed. Are you familiar with the ->> operator that extracts a value from JSON object or array? If jjj is a column that holds a JSON object, then jjj->>'xyz' is the value of the "xyz" field of that object.
I copied the idea for the ->> operator from PostgreSQL. But in PostgreSQL, the ->> operator always returns a text rendering of the value from the JSON, even if the value is really an integer or floating point number. PG is rigidly typed, so that's all it can do. But SQLite is flexibly typed, so the ->> operator can return anything - text, integer, floating-point, NULL - whatever value it finds in the JSON.
When your application's design changes, you may need to store a slightly different type of data. Relational databases traditionally require explicit schema changes for this, whereas NoSQL databases allow more flexible, schema-less data. SQLite sits somewhere in between: it remains a relational database, but its dynamic typing allows you to store different types of values in a column without immediately migrating data to a new table.
This flexibility is convenient when only one application reads and writes to the table. But if multiple applications access the same tables, the lack of a strictly enforced schema becomes a liability. The same is true when using generic tools to process data in SQLite tables, because such tools don't know what type of data to expect. The column type may be X but the actual data may be of type Y.
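That in-between position is easy to see with typeof(): a single column can hold a different storage class in every row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (v)")  # no declared type at all
conn.executemany("INSERT INTO log VALUES (?)",
                 [(1,), (2.5,), ("three",), (None,)])

# Each row keeps its own storage class; generic tools reading this table
# cannot assume anything from the (absent) column type.
for v, t in conn.execute("SELECT v, typeof(v) FROM log"):
    print(v, t)
```

This prints integer, real, text, and null for the four rows: convenient for one app that knows its own conventions, opaque for everyone else.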
Not necessarily, but being able to specify types beyond those allowed by STRICT tables is useful.
Ideally, I'd like to be able to specify the stored type (or at least, side step numeric affinity), and give the type a name (for introspection, documentation).
Specifying that a column is a DATETIME, a JSON, or a DECIMAL is useful, IMO.
Alas, neither STRICT nor non-STRICT tables allow this.
I remember an ORM that uses fake column types like boolean and datetime because SQLite doesn't enforce anything. That way the ORM knows how to deserialize the data. Strict mode prohibits this by only accepting column types SQLite recognizes.
My preference would be for SQLite to actually support commonplace data types. But as long as it doesn't, I can see the appeal in using the schema to specify what data you're storing in your database.
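The ORM trick relies on SQLite preserving whatever type name you declare; a sketch of both halves of the trade-off:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Non-strict SQLite accepts any made-up type name; the ORM can read it
# back via PRAGMA table_info as a deserialization hint.
conn.execute("CREATE TABLE t (done BOOLEAN, created DATETIME)")
declared = [row[2] for row in conn.execute("PRAGMA table_info(t)")]
print(declared)  # ['BOOLEAN', 'DATETIME']

try:
    # STRICT only accepts INT/INTEGER/REAL/TEXT/BLOB/ANY, so the same
    # schema is rejected outright.
    conn.execute("CREATE TABLE t2 (done BOOLEAN) STRICT")
except sqlite3.OperationalError as e:
    print("STRICT rejects it:", e)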
SQLite seems very powerful for building FTS (user enters free text, expects high precision/recall results). Still, I feel like it's non-trivial to get good search quality.
I think the naive approach is to tokenize the input and append "*" for prefix matching. I'm not too experienced and this can probably be improved a lot. There are many settings like different tokenizers, stemming, etc. Additionally, a lot can be built on top like weighting, boosting exact matches, etc.
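The naive approach described above can be sketched in a few lines, assuming an SQLite build with FTS5 compiled in (true for most Python distributions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
conn.executemany("INSERT INTO docs VALUES (?)",
                 [("deploying a daemonset",),
                  ("debugging deadlocks",)])

def naive_search(query):
    # Tokenize on whitespace and append * to each token for prefix matching.
    match = " ".join(tok + "*" for tok in query.split())
    return [r[0] for r in conn.execute(
        "SELECT body FROM docs WHERE docs MATCH ? ORDER BY rank", (match,))]

print(naive_search("daemon"))  # prefix query matches "daemonset"
```

This already handles incomplete words, but it's only a starting point; tokenizer choice, stemming, and ranking tweaks (bm25 weights per column) are where the quality work happens.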
Does anyone know good resources for this to learn and draw inspiration from?
> Does anyone know good resources for this to learn and draw inspiration from?
Is there a reason why something more custom built, like ParadeDB Community edition won't meet your needs?
I understand you're speaking about SQLite, while ParadeDB is PostgreSQL but as you know, it's non-trivial to get good search quality, so I'm trying to understand your situation and needs.
I mean you can use sqlite as an index and then rebuild all of Lucene on top of it. It's non-trivial to build search quality on top of actual search libraries too.
O'Reilly's "Relevant Search" isn't the worst here, but you'll be porting/writing a bit yourself.
They recently landed multi-writer support for their Rust SQLite re-implementation, which addresses the biggest issue I've personally had with using SQLite for high-concurrency applications.
Not sure if people are interested, but since I use SQLite in a lot of my own projects, I am working on a lightweight monitoring and safety layer for production SQLite.
The idea is pretty simple: SQLite is amazing, but once it's running in production you basically have zero observability. If something weird happens (unexpected writes, schema changes, background jobs touching tables, etc.) you only find out after the fact. It tries to solve that without touching application code. It's a Rust agent that runs next to your SQLite file and connects to a server where everything is logged. My current challenge right now is encryption and trust, mostly.
Curious if others here are running SQLite in production and if you would be interested in something like this.
In the past I've used the backup API - https://sqlite.org/backup.html - to load an in-memory copy of the SQLite DB while keeping the live one. I would do this after certain user actions, and then by diffing the two I would know what changed... I guess a poor way of implementing PostgreSQL events... but it worked!
Granted, it was a small DB (a few megabytes). I also wanted to avoid collecting changes one by one; I simply wanted a diff since last time.
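A minimal sketch of that snapshot-and-diff pattern using Python's Connection.backup() wrapper around the backup API:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
src.execute("INSERT INTO kv VALUES ('a', '1')")

# Snapshot the live DB into a second in-memory connection via the backup API.
snapshot = sqlite3.connect(":memory:")
src.backup(snapshot)

# The live DB keeps changing after the snapshot...
src.execute("INSERT INTO kv VALUES ('b', '2')")

# ...and a crude diff is just set arithmetic over the rows.
live = set(src.execute("SELECT k, v FROM kv"))
old = set(snapshot.execute("SELECT k, v FROM kv"))
print(live - old)  # {('b', '2')}
```

Loading the whole DB per diff only makes sense at megabyte scale, as noted; larger databases would want triggers or the session extension instead.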
I've found FTS5 not useful for serious fuzzy or subword full-text search. For example, I have documents saying "DaemonSet". But if the user searches for "Daemon" then there will be no results.
The JSON functions are genuinely useful even for simple apps. I use SQLite as a dev database, and being able to query JSON columns without a preprocessing step saves a lot of time. STRICT tables are also great: they caught a bug where I was accidentally inserting the wrong type and it just silently worked in regular mode.
For a long time I absolutely hated SQLite because of how terribly it was implemented in Emby, which made it so you couldn't load-balance an Emby server because they kept a global lock on the database for exactly one process. But at this point I've grown a kind of begrudging respect for it, simply because it is the easiest way to shoehorn something (roughly) like journaling into terrible filesystems like exFAT.
I did this recently for a fork of the main MiSTer executable because of a few disagreements with how Sorg runs the project, and it was actually pretty easy to change out the save file features to use SQLite, and now things are a little more resistant to crashes and sudden power loss than you'd get with the terrible raw-dogged writing that it was doing before.
It's fast, well documented, and easy to use, and yeah it has a lot more features than people realize.
I actually needed that exact window function example earlier this week when I needed to figure out why our shared YNAB budget somehow got out of balance with the bank. SQLite to load the different CSVs and lay out the bank's view of the world against YNAB's with running totals was what I turned to.
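A running-total query like the one described is a one-liner with a window function (available since SQLite 3.25); a sketch with made-up transaction data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (day INTEGER, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?)",
                 [(1, 100.0), (2, -40.0), (3, -25.0)])

# SUM(...) OVER (ORDER BY day) accumulates a running balance per row,
# which makes the first out-of-balance day easy to spot side by side.
rows = conn.execute("""
    SELECT day, amount,
           SUM(amount) OVER (ORDER BY day) AS balance
    FROM txns
""").fetchall()
print(rows)  # [(1, 100.0, 100.0), (2, -40.0, 60.0), (3, -25.0, 35.0)]
```

Load the bank CSV into one table and the YNAB CSV into another, run this over each, and the balances diverge exactly at the problem transaction.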
None of these are news to the HN community. Write-ahead logging and concurrency PRAGMAs have been a given for a decade now. IIRC, FTS5 doesn't often come baked in and you have to compile the SQLite amalgamation to get it. If you do need better typing, you should really use PostgreSQL.
However, I will concede that the article doesn't mention at all - and far fewer people are aware - that you can build HA, cross-region-replicated SQLite using purely OSS software, provided you architect your software around it. Now that would be a really good "Modern SQLite: Features You Didn't Know It Had" article!
Another interesting discussion point is how far self-hosted PostgreSQL and pgBackRest can get you toward a near-zero-data-loss setup with low RPO and RTO. It's simply amazing we can self-host all this.
I wish SQLite would add a bool type and proper date/time types. Is there really no plan to add them?
For bool, it could just be an alias of a numeric type.
Something equivalent to INTEGER CHECK (col IN (0, 1)) would be perfectly fine.
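That CHECK-constraint workaround for a boolean column can be sketched like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# No native BOOL type, but a CHECK constraint pins the column to 0/1.
conn.execute("CREATE TABLE flags (enabled INTEGER CHECK (enabled IN (0, 1)))")

conn.execute("INSERT INTO flags VALUES (1)")   # ok
try:
    conn.execute("INSERT INTO flags VALUES (2)")  # rejected
except sqlite3.IntegrityError:
    print("rejected non-boolean value")
```

It enforces the invariant, but it's still an INTEGER to every driver and tool; a real alias type would let drivers round-trip True/False automatically.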
Date/time handling is pretty weak.
Having to store values as GMT text is just inconvenient.
When retrieving values in Node.js, I ended up using new Date(val), which caused the classic bug where a GMT-stored value gets interpreted in the local timezone.
The correct approach was new Date(val + ".Z"), but I really don’t want to deal with that kind of hassle anymore.
> the any-type-allowed-anywhere design always seemed a little strange.
SQLite came from Tcl, which is all strings. https://www.tcl-lang.org/
An example of where this would be a benefit is if you stored date/times in different formats (changing as an app evolved.)
PRAGMA journal_mode = 'mvcc';

https://docs.turso.tech/tursodb/concurrent-writes
Very excited to see if SQLite responds by adding native support, I'm hoping competition here will spur improvements on both sides.
And ON CONFLICT which can help dedupe among other things in a simple and performant way.
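A sketch of the dedupe pattern: ON CONFLICT upserts (SQLite 3.24+) let one statement insert a new key or update the existing row, with no read-then-write race.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seen (url TEXT PRIMARY KEY, hits INTEGER DEFAULT 1)")

# Dedupe by primary key: first sighting inserts, repeats bump the counter.
for url in ["a.com", "b.com", "a.com"]:
    conn.execute("""INSERT INTO seen (url) VALUES (?)
                    ON CONFLICT(url) DO UPDATE SET hits = hits + 1""", (url,))

print(conn.execute("SELECT url, hits FROM seen ORDER BY url").fetchall())
# [('a.com', 2), ('b.com', 1)]
```

`INSERT OR IGNORE` works for plain dedupe too; the DO UPDATE form is what you reach for when repeats should still change the row.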
https://sqlite.org/json1.html#table_valued_functions_for_par...
I did not know SQLite allows writing data that does not match the column type. Yuck. Now I need to review anything I built and fix it.
I understand why they wouldn’t, but STRICT should be the default.