
Have they fixed the incredibly slow queries on indexed columns?

https://www.lukas-barth.net/blog/sqlite-duckdb-benchmark/



Howdy! Thanks for your benchmarking!

Your blog does a great job contrasting the two use cases. I don't think too much has changed for your main use case, but here are a few ideas to test out!

DuckDB can read SQLite files now! So if you like DuckDB syntax or query optimization, but want to use the SQLite format / indexes, that may work well.
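
Roughly, that looks like this (just a sketch; the file, table, and column names are placeholders):

    -- one-time install of the SQLite extension, then load it
    INSTALL sqlite;
    LOAD sqlite;

    -- attach the existing SQLite file and query it with DuckDB's engine
    ATTACH 'benchmark.db' AS src (TYPE sqlite);
    SELECT * FROM src.measurements WHERE id = 42;

    -- or scan a single table without attaching the whole file
    SELECT * FROM sqlite_scan('benchmark.db', 'measurements') LIMIT 10;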

Since DuckDB is columnar (and compressed), it frequently needs to read a big chunk of rows (~100K) just to get 1 row out and decompressed. Mind trying to store your data uncompressed? Might help in your case! (PRAGMA force_compression='uncompressed')
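
A rough sketch of trying that (table names are placeholders, and as far as I understand the pragma only affects data written after it is set, so the table is copied and checkpointed to rewrite it):

    -- store newly written data without compression
    PRAGMA force_compression='uncompressed';

    -- rewrite the table so all rows pick up the setting, then flush to disk
    -- ('measurements' is a placeholder table name)
    CREATE TABLE measurements_uncompressed AS SELECT * FROM measurements;
    CHECKPOINT;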

Links: https://duckdb.org/docs/extensions/sqlite


> The differences between DuckDB and SQLite are just always so large that plotting them together completely hides all other detail.

(From your blog post.) When values span orders of magnitude, that's exactly when log plots are useful.


They say they have lots of benchmarks running for DuckDB, so it might be a good idea to add a similar benchmark to their suite directly, so this case can be tracked over time?

By the way, I like your blog's style! Even the HTML of an article is clean and readable.



