This brings me back to my first startup job, supporting a piece of software used by several large corporations you’d recognize.
Configuration and data entry were done by directly manipulating the prod database, with commands stored locally in text files, pre-written in SQL Manager or generated from Excel macros.
If you fucked up, you copied over the parts you had changed from the testing db, which doubled as the only backup.
If you started a transaction and forgot to commit or roll back, the application on the customer’s end would lock up.
The web frontend was based on hand-written HTML+CSS spaghetti code and Microsoft Silverlight.
Without any prior job experience or SQL knowledge, I got an account with domain admin rights and full write access to prod on my first day.
Only two people could work on the server at a time because they didn’t want to pay for terminal server licenses.
Someone explain the joke to me please!
There is a subtle scenario with read-modify-write transactions in MVCC where SQLite lacks some grace (in my opinion).
In MVCC, transactions work with a point-in-time (read: “as of a point between atomic transactions”) consistent “read snapshot” of the database.
Consider this example:
- Transaction A begins and reads from table foo.
- Transaction B begins and updates table foo.
- Both transactions commit.
There is no conflict here because these transactions are isolated from each other via the snapshot mechanism. Transaction A’s read snapshot is immutable and will not see any writes from transaction B, even if they are happening concurrently.
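To make this concrete, here’s a rough Python sketch of the no-conflict case using the standard sqlite3 module (the foo table and its values are made up for illustration, and WAL mode is assumed, since that’s where SQLite gives readers a stable snapshot):

    import sqlite3
    import os, tempfile

    # Throwaway database; WAL mode is what lets a reader keep its
    # snapshot while a writer commits.
    db = os.path.join(tempfile.mkdtemp(), "demo.db")
    setup = sqlite3.connect(db, isolation_level=None)  # autocommit
    setup.execute("PRAGMA journal_mode=WAL")
    setup.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, val INTEGER)")
    setup.execute("INSERT INTO foo VALUES (1, 100)")
    setup.close()

    # isolation_level=None -> autocommit, so we issue BEGIN/COMMIT ourselves.
    a = sqlite3.connect(db, isolation_level=None)
    b = sqlite3.connect(db, isolation_level=None)

    a.execute("BEGIN")
    print(a.execute("SELECT val FROM foo").fetchone())  # (100,) -- snapshot taken here

    b.execute("BEGIN")
    b.execute("UPDATE foo SET val = 200")
    b.execute("COMMIT")                                 # B commits while A is still open

    print(a.execute("SELECT val FROM foo").fetchone())  # still (100,): A's snapshot is immutable
    a.execute("COMMIT")                                 # both complete, no conflict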
Now what happens in this example (from the OP):
- Transaction A begins and reads from table foo.
- Transaction B begins and updates table foo.
- Transaction B commits.
- Transaction A tries to update foo.
This is a true conflict because both transactions are trying to write to foo, and transaction A’s writes might be based on what it just read. There is no consistent way for A to proceed, because B already wrote to foo, invalidating A’s read snapshot. So SQLite handles this by returning an error to A, effectively requiring A to restart the transaction.
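Concretely, in Python’s sqlite3 the failure looks something like this (same made-up foo table, WAL mode assumed; SQLite reports SQLITE_BUSY_SNAPSHOT, which Python surfaces as a generic “database is locked” OperationalError):

    import sqlite3
    import os, tempfile

    db = os.path.join(tempfile.mkdtemp(), "demo.db")
    setup = sqlite3.connect(db, isolation_level=None)  # autocommit for setup
    setup.execute("PRAGMA journal_mode=WAL")
    setup.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, val INTEGER)")
    setup.execute("INSERT INTO foo VALUES (1, 100)")
    setup.close()

    a = sqlite3.connect(db, isolation_level=None)
    b = sqlite3.connect(db, isolation_level=None)

    a.execute("BEGIN")
    val = a.execute("SELECT val FROM foo WHERE id = 1").fetchone()[0]  # A reads 100

    b.execute("BEGIN")
    b.execute("UPDATE foo SET val = val + 1 WHERE id = 1")
    b.execute("COMMIT")                                # B commits first

    try:
        # A's write is based on a now-stale snapshot, so SQLite refuses it.
        a.execute("UPDATE foo SET val = ? WHERE id = 1", (val + 1,))
    except sqlite3.OperationalError as e:
        print(e)               # "database is locked" (SQLITE_BUSY_SNAPSHOT underneath)
        a.execute("ROLLBACK")  # A has to start over from a fresh snapshot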
There are other ways this could be handled, though. The DB could optimistically retry the transaction for you. There is even a special BEGIN IMMEDIATE; statement that it could use to proactively take the write lock up front (in SQLite the lock covers the whole database, not just foo) so that the transaction doesn’t get starved by other writers. But SQLite puts all of the responsibility on users to handle this. I’m not an expert, so there could be a very good reason that SQLite works this way, but it feels a bit annoying as a user.
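For illustration, here’s roughly what the BEGIN IMMEDIATE approach looks like from application code (a Python sketch against the same hypothetical foo table; taking the write lock up front means the snapshot can never go stale under you, at the cost of serializing against other writers from the start):

    import sqlite3

    def read_modify_write(db_path: str) -> None:
        # BEGIN IMMEDIATE takes the write lock at the start, so the
        # read below can never be invalidated by a concurrent writer.
        # Note the lock covers the whole database, not just foo.
        conn = sqlite3.connect(db_path, isolation_level=None, timeout=5.0)
        try:
            conn.execute("BEGIN IMMEDIATE")
            val = conn.execute("SELECT val FROM foo WHERE id = 1").fetchone()[0]
            conn.execute("UPDATE foo SET val = ? WHERE id = 1", (val + 1,))
            conn.execute("COMMIT")
        except Exception:
            if conn.in_transaction:
                conn.execute("ROLLBACK")
            raise
        finally:
            conn.close()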
I don’t actually know off the top of my head how PostgreSQL handles this particular scenario.
SQLite puts all of the responsibility on users to handle this
Thanks for the explanation.
I feel like the one who decided this behaviour expected the users of the database to make their own system of prioritising the transactions.

More likely: SQLite is built to be small, simple, and lightweight, not to be super highly concurrent.
If this situation happens rarely, just make sure you have a retry on the query. If it happens often, switch to Postgres.
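Something like this hypothetical retry helper is usually enough (Python sqlite3; matching on the “locked” message is a simplification, and the whole read-modify-write has to be re-run, not just the failing statement):

    import sqlite3
    import time

    def with_retry(db_path, txn, attempts=5):
        """Run txn(conn) in a transaction, retrying on busy/stale-snapshot errors."""
        for attempt in range(attempts):
            conn = sqlite3.connect(db_path, isolation_level=None)
            try:
                conn.execute("BEGIN")
                result = txn(conn)        # reads AND writes re-run on every retry
                conn.execute("COMMIT")
                return result
            except sqlite3.OperationalError as e:
                if conn.in_transaction:
                    conn.execute("ROLLBACK")
                if "locked" not in str(e) or attempt == attempts - 1:
                    raise
                time.sleep(0.05 * (attempt + 1))  # crude backoff
            finally:
                conn.close()

    # Usage, e.g.:
    # with_retry("app.db", lambda c: c.execute(
    #     "UPDATE foo SET val = val + 1 WHERE id = 1"))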
- Transaction A begins and reads from table foo.
SQLite doesn’t do highly concurrent tasks. Your life will be much, much better if you don’t even try.
It also doesn’t do Windows shared files: at the speeds Windows is able to manage its shares, any access to shared files effectively becomes highly concurrent.