Originally posted: 2025-03-16. View source code for this page here.
Over the past few years, I've found myself using DuckDB more and more for data processing, to the point where I now use it almost exclusively, usually from within Python.
We're moving towards a simpler world where most tabular data can be processed on a single large machine1 and the era of clusters is coming to an end for all but the largest datasets.2
This post sets out some of my favourite features of DuckDB that set it apart from other SQL-based tools. In a nutshell, it's simple to install, ergonomic, fast, and more fully featured.
An earlier post explains why I favour SQL over other APIs such as Polars, pandas or dplyr.
DuckDB is an open source in-process SQL engine that is optimised for analytics queries.
The performance difference of analytics-optimised engines (OLAP) vs. transactions-optimised engines (OLTP) should not be underestimated. A query running in DuckDB can be 100 or even 1,000 times faster than exactly the same query running in (say) SQLite or Postgres.
A core use-case of DuckDB is where you have one or more large datasets on disk in formats like csv, parquet or json which you want to batch process. You may want to perform cleaning, joins, aggregation, derivation of new columns - that sort of thing.
But you can also use DuckDB for many other simpler tasks like viewing a csv file from the command line.
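For instance, a quick look at a csv file takes a couple of lines. Here's a minimal sketch from Python rather than the CLI; my_data.csv is just a placeholder path:

```python
import duckdb

# 'my_data.csv' is a placeholder - any csv path works, and types are inferred automatically
duckdb.sql("SELECT * FROM 'my_data.csv' LIMIT 10").show()

# Inspect the schema DuckDB inferred for the file
duckdb.sql("DESCRIBE SELECT * FROM 'my_data.csv'").show()
```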
DuckDB consistently benchmarks as one of the fastest data processing engines. The benchmarks I've seen3 show there's not much in it between the leading open source engines - which at the moment seem to be polars, DuckDB, DataFusion, Spark and Dask. Spark and Dask can be competitive on large data, but slower on small data.
DuckDB itself is a single precompiled binary. In Python, it can be pip installed with no dependencies. This makes it a joy to install compared to other more heavyweight options like Spark. Combined with uv, you can stand up a fresh DuckDB Python environment from nothing in less than a second - see here.
With its speed and almost-zero startup time, DuckDB is ideally suited for CI and testing of data engineering pipelines.
Historically this has been fiddly, and running a large suite of tests in e.g. Apache Spark has been time-consuming and frustrating. Now it's much simpler to set up the test environment, and there's less scope for differences between it and your production pipelines.
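As a rough sketch of what this looks like (pytest-style; the transformation and data are made up), a test of a small aggregation needs nothing beyond a plain Python function:

```python
import duckdb


def test_total_spend_per_customer():
    # Runs entirely in-process, typically in milliseconds - there is no cluster
    # or local Spark session to spin up before the assertion can run
    result = duckdb.sql("""
        SELECT customer_id, CAST(sum(amount) AS DOUBLE) AS total_spend
        FROM (VALUES (1, 10.0), (1, 5.0), (2, 3.0)) AS t(customer_id, amount)
        GROUP BY customer_id
        ORDER BY customer_id
    """).fetchall()
    assert result == [(1, 15.0), (2, 3.0)]
```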
This simplicity and speed also applies to writing new SQL, and getting syntax right before running it on a large dataset. Historically I have found this annoying in engines like Spark (where it takes a few seconds to start Spark in local mode), or even worse when you're forced to run queries in a proprietary tool like AWS Athena.4
There's even a DuckDB UI with autocomplete - see here.
The DuckDB team has implemented a wide range of innovations in its SQL dialect that make it a joy to use. See the following blog posts 1 2 3 4 5 6.
Some of my favourites are the EXCLUDE keyword, and the COLUMNS keyword which allows you to select and regex-replace a subset of columns.5 I also like QUALIFY and the aggregate modifiers on window functions, see here. Another is the ability to function chain, like first_name.lower().trim().
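To give a flavour of these, here's a sketch against a made-up people table (the table and column names are mine, not from DuckDB's docs):

```python
import duckdb

# A small made-up table purely to demonstrate the syntax
duckdb.sql("""
    CREATE TABLE people AS
    SELECT * FROM (VALUES
        (1, '  Alice ', 'Smith', 170.0),
        (2, 'BOB', 'Jones', 180.0),
        (3, 'carol', 'Smith', 165.0)
    ) AS t(id, first_name, last_name, height_cm)
""")

# EXCLUDE: select everything except one column
duckdb.sql("SELECT * EXCLUDE (height_cm) FROM people").show()

# COLUMNS with a regex: apply the same expression to every matching column
duckdb.sql("SELECT max(COLUMNS('.*_cm')) FROM people").show()

# Function chaining: reads left to right instead of nested calls
duckdb.sql("SELECT first_name.lower().trim() AS clean_name FROM people").show()

# QUALIFY: filter on a window function without a subquery -
# here, keep the tallest person per last_name
duckdb.sql("""
    SELECT *
    FROM people
    QUALIFY row_number() OVER (PARTITION BY last_name ORDER BY height_cm DESC) = 1
""").show()
```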
You can query data directly from files, including on s3, or on the web.
For example, to query a folder of parquet files:

```sql
select * from read_parquet('path/to/*.parquet')
```
or you can even run SQL directly against CORS-enabled files on the web:

```sql
select * from read_parquet('https://raw.githubusercontent.com/plotly/datasets/master/2015_flights.parquet')
limit 2;
```
Click here to try this query yourself in the DuckDB web shell.
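Reading from s3 works in much the same way via the httpfs extension. A sketch, in which the bucket, region and credentials are all placeholders:

```python
import duckdb

# httpfs provides s3:// (and https://) support
duckdb.sql("INSTALL httpfs")
duckdb.sql("LOAD httpfs")

# One way to supply credentials is a secret - all values below are placeholders
duckdb.sql("""
    CREATE SECRET my_s3_secret (
        TYPE S3,
        KEY_ID 'my-key-id',
        SECRET 'my-secret-key',
        REGION 'eu-west-2'
    )
""")

duckdb.sql("SELECT count(*) FROM read_parquet('s3://my-bucket/path/*.parquet')").show()
```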
One of the easiest ways to cause problems in your data pipelines is to fail to be strict about incoming data types from untyped formats such as csv. DuckDB provides lots of options for controlling this - see here.
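For example, read_csv lets you spell out the schema explicitly rather than relying on type inference. A sketch - the file and column names are placeholders:

```python
import duckdb

# Declaring the columns disables auto-detection of the schema,
# so a malformed file fails loudly instead of silently changing types
duckdb.sql("""
    SELECT *
    FROM read_csv(
        'transactions.csv',
        header = true,
        columns = {
            'transaction_id': 'BIGINT',
            'transaction_date': 'DATE',
            'amount': 'DECIMAL(18, 2)'
        }
    )
""").show()
```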
Many data pipelines effectively boil down to a long sequence of CTEs:
```sql
WITH
input_data AS (SELECT * FROM read_parquet('...')),
step_1 AS (SELECT ... FROM input_data JOIN ...),
step_2 AS (SELECT ... FROM step_1)
SELECT ... FROM step_2;
```
When developing a pipeline like this, we often want to inspect what's happened at each step.
In Python, we can write
```python
input_data = duckdb.sql("SELECT * FROM read_parquet('...')")
step_1 = duckdb.sql("SELECT ... FROM input_data JOIN ...")
step_2 = duckdb.sql("SELECT ... FROM step_1")
final = duckdb.sql("SELECT ... FROM step_2;")
```
This makes it easy to inspect what the data looks like at step_2 with no performance loss, since these steps will be executed lazily when they're run all at once.
This also facilitates easier testing of SQL in CI, since each step can be an independently-tested function.
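For example, each step can be an ordinary Python function that takes and returns a relation. A sketch with made-up step names, relying on DuckDB's replacement scans to resolve relation names from local Python variables:

```python
import duckdb


# Hypothetical pipeline steps. Each takes a DuckDB relation and returns a new,
# still-lazy relation, so every step can be unit tested with a tiny fixture.
def clean_names(input_data):
    # duckdb.sql resolves 'input_data' to the local Python variable
    return duckdb.sql("SELECT id, lower(trim(name)) AS name, is_active FROM input_data")


def keep_active(step_1):
    return duckdb.sql("SELECT id, name FROM step_1 WHERE is_active")


# In production input_data would come from read_parquet(...);
# in a test, a few rows of inline VALUES are enough
input_data = duckdb.sql(
    "SELECT * FROM (VALUES (1, '  Alice ', true), (2, 'BOB', false)) AS t(id, name, is_active)"
)
step_2 = keep_active(clean_names(input_data))
step_2.show()  # the chain only executes here, when a result is requested
```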
DuckDB offers full ACID compliance for bulk data operations, which sets it apart from other analytical data systems - see here. You can listen to more about this in this podcast, transcribed here.
This is a very interesting new development, making DuckDB potentially a suitable replacement for lakehouse formats such as Iceberg or Delta Lake for medium-scale data.
A longstanding difficulty with data processing engines has been writing high-performance user-defined functions (UDFs).
For example, in PySpark, you will generally get the best performance by writing custom Scala, compiling it to a JAR, and registering it with Spark. But this is cumbersome, and in practice you will encounter a lot of issues around Spark version compatibility and security restrictions in environments such as Databricks.
In DuckDB, high-performance custom UDFs can be written in C++. Whilst writing these functions is certainly not trivial, DuckDB community extensions offer a low-friction way of distributing the code. Community extensions can be installed almost instantly with a single command such as INSTALL h3 FROM community to install hierarchical hexagonal indexing for geospatial data.
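In Python this looks something like the following. The install and load commands mirror the one above; the h3 function call at the end is my recollection of that extension's API rather than something from its docs, so treat it as an assumption:

```python
import duckdb

# Install and load the community extension; DuckDB fetches a precompiled binary
duckdb.sql("INSTALL h3 FROM community")
duckdb.sql("LOAD h3")

# Assumed function name and signature for the h3 extension - verify against its docs
duckdb.sql("SELECT h3_latlng_to_cell(51.5074, -0.1278, 9) AS cell").show()
```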
The team publishes the documentation as a single markdown file so it can easily be provided to an LLM.
My top tip: if you load this file in your code editor, and use code folding, it's easy to copy the parts of the documentation you need into context.
Much of this blog post is based on my experience supporting multiple SQL dialects in Splink, an open source library for record linkage at scale. We've found that transitioning towards recommending DuckDB as the default backend choice has increased adoption of the library and significantly reduced the number of problems faced by users, even for large linkage tasks, whilst speeding up workloads very substantially.
We've also found it's hugely increased the simplicity and speed of developing and testing new features.
pg_duckdb allows you to embed the DuckDB computation engine within Postgres. This seems potentially extremely powerful, enabling Postgres to be simultaneously optimised for analytics and transactional processing. I think it's likely to see widespread adoption, especially once they iron out a few of the current shortcomings around enabling and optimising the use of Postgres indexes and pushing filters up to Postgres.