What's new in PostgreSQL 13?

Are you a superstitious sort? PostgreSQL 13 was released on the 24th of September. Should you avoid it like a black cat under a ladder with an open umbrella indoors? Or does it carry on the PostgreSQL tradition of ever increasing speed and reliability?

In my opinion, the latest release has some extremely useful features that make it a solid upgrade. Here’s the breakdown of what makes it a cut above its predecessor.


It’s faster

The official press release of PostgreSQL 13 leads with performance gains as its main benefit. While not all of those improvements will be meaningful to everyone, there are two that I think are most useful.

First off, B-tree indexes now deduplicate entries, so they handle columns containing lots of duplicate values much more efficiently. If you have an index on such a column — for example, searching for users by first name or city name, or perhaps by join date — the index stores each repeated value only once, making it smaller and faster to scan. Anything where many rows share the same indexed value gets a boost, and in the kind of massive databases we see today, those speed gains can really add up.
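As a rough sketch (the table, column and index names here are invented for illustration), this is exactly the kind of index that benefits:

```sql
-- Hypothetical table: many users share the same city.
CREATE TABLE users (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    first_name text NOT NULL,
    city       text NOT NULL
);

-- On PostgreSQL 13, this B-tree stores each duplicated city value once,
-- with a list of row pointers, instead of one entry per row.
CREATE INDEX users_city_idx ON users (city);

-- A lookup that hits many duplicate entries scans a smaller index.
SELECT id, first_name FROM users WHERE city = 'Auckland';
```

Deduplication is on by default for new B-tree indexes; it can be disabled per index with `WITH (deduplicate_items = off)` if needed.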

Secondly, PostgreSQL finally implemented incremental sorting, where sorted output from an earlier step in a query can be reused later in the query. This helps when sorting by two columns: if the rows already arrive ordered by the first column, they can be processed in chunks that share the same first-column value, with only each chunk sorted by the second column, so the whole working set never has to be in memory at the same time. This is especially useful in combination with the LIMIT and OFFSET keywords, since the query can stop as soon as enough rows have been produced.
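A sketch of the kind of query that benefits, assuming an index already provides ordering on the first sort column (the orders table and its columns are invented for illustration):

```sql
-- Assumed index: delivers rows already ordered by created_at.
CREATE INDEX orders_created_idx ON orders (created_at);

-- PostgreSQL 13 can reuse that ordering: rows arrive sorted by
-- created_at, and only each group of equal created_at values needs
-- an in-memory sort on customer_id. With LIMIT, the scan can stop
-- early instead of sorting the entire table.
SELECT *
FROM orders
ORDER BY created_at, customer_id
LIMIT 50;
```

Running the query under EXPLAIN should show an "Incremental Sort" node with "Presorted Key: created_at" when the planner chooses this strategy.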


Administration is nicer

PostgreSQL 13’s done a lot to improve database administration. There are some upgrades to table vacuuming, and some new commands have been added to simplify monitoring database activity and verifying backups. I’ll focus on just the improvements to the vacuuming process, as I think these are the most useful additions.

To quote the press release, “Vacuuming is an essential part of PostgreSQL administration”.

I have different thoughts on the vacuuming process, namely that “Vacuuming is the worst part of PostgreSQL administration”.

If you don’t keep up with vacuuming (for example, if you have a write-heavy database that never gets a chance to breathe), your database can reach a state where it refuses further writes until you perform the VACUUM. That vacuum can then take hours or days to complete, and you’re stuck offline until it’s done.


PostgreSQL 13 eases this pain by adding parallelised vacuuming of indexes, so the index-vacuuming phase can be split across multiple CPU cores. There is still a limit of one worker process per index, so this will only benefit you if you have multiple indexes on a table. Nevertheless, anything that can speed up vacuuming and database maintenance is a welcome addition.
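A minimal example (the table name is invented):

```sql
-- Vacuum a table's indexes with up to 4 parallel workers (PostgreSQL 13+).
-- The actual worker count is capped by the number of indexes on the table
-- and by the max_parallel_maintenance_workers setting.
VACUUM (PARALLEL 4, VERBOSE) my_big_table;
```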

Development is more convenient

PostgreSQL has always had an active community of extension-writers, but it’s always nice when the most useful stuff gets integrated into the core. This time around, a few of the more useful functions are built in, such as the UUIDv4 generator gen_random_uuid(), which is now available without having to install any extensions.
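For example (the table here is invented for illustration):

```sql
-- PostgreSQL 13: built in, no extension required.
SELECT gen_random_uuid();

-- Handy as a column default:
CREATE TABLE api_keys (
    id    uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    label text
);
```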

Previously the equivalent uuid_generate_v4() was only available through manually installing the uuid-ossp extension. Sure, it’s easy enough to install on earlier PostgreSQL versions (just run this, as a database superuser):
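```sql
-- Run as a database superuser (PostgreSQL 12 and earlier):
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```

After which uuid_generate_v4() becomes available in that database.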


Installing the extension was simple enough, but aside from the extra hoops, it required the user to know that the extension existed in the first place, and to have a superuser on hand to perform the install. It’s nice to have those extra steps removed.


PostgreSQL has had JSON storage and querying support for a while. (I’d say JSON support only truly solidified with the JSONB type in the 9.4 release.) Schema-less JSON records can indeed be stored, queried and filtered in a JSON or JSONB column. Ironically, this has allowed PostgreSQL to hold its own against “NoSQL” databases such as MongoDB.

New in version 13 is the datetime() function, which converts ISO 8601 date/time strings inside JSON data into PostgreSQL-native types. This means you can run queries and filters on those values in the database, rather than transferring the data to your application and converting it there. Running these operations directly in the database is usually faster than doing so in application code, so you’ll likely see some speedups.
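A quick sketch of what that looks like. datetime() is a method in the SQL/JSON path language, used here via jsonb_path_query() (the JSON payload is invented):

```sql
-- Parse an ISO 8601 string inside a JSONB value into a native
-- date/time type, entirely within the database.
SELECT jsonb_path_query(
    '{"created": "2020-09-24T10:30:00+00:00"}',
    '$.created.datetime()'
);
```

Once parsed this way, the value behaves as a real timestamp inside the path expression, so comparisons and filters operate on actual date/time semantics rather than on strings.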

Security changes

As mentioned in the previous section, extensions could only be installed by superusers. This meant delays in getting the features you need if you had to wait for an admin user somewhere to do the install.

In PostgreSQL 13, admins can mark extensions as “trusted”, and a normal user can install these “trusted” extensions to their database. This should help with the aforementioned process delay.
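For example, several contrib extensions (pgcrypto and hstore among them) ship as trusted in PostgreSQL 13, so a non-superuser with CREATE privilege on the database can now run:

```sql
-- No superuser needed for a trusted extension in PostgreSQL 13.
CREATE EXTENSION IF NOT EXISTS pgcrypto;
```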

There are also security improvements when using SCRAM authentication, notably the ability for clients to require channel binding when connecting.


When can I use it?

Right now! If you run your own database server, you can download it from the usual place. In my experience PostgreSQL upgrades are usually quite smooth, and there are multiple methods of upgrading to PostgreSQL 13.

If you’ve got a cloud hosted/managed database server, you’re going to have to wait a bit longer. At the time of writing, Azure supports PostgreSQL 11 and AWS RDS supports PostgreSQL 12. If I were a betting man, my money would be on AWS supporting PostgreSQL 13 first, but there’s no indication of when that might happen.


While PostgreSQL 13 isn’t a monumental upgrade, it does have a few nice features. I wouldn’t say upgrading is a must, but getting extra performance for free is nice.

PostgreSQL is one of those projects that I love and never hesitate to use. It’s easy to set up, yet fast and robust. I’ve never run into issues with it, and it keeps running well even in large-scale deployments. Once it’s running, it pretty much gets out of your way – just don’t forget to vacuum!

And no, I’m not paid by PostgreSQL, I just think it’s great, consistent software.

At Tera Shift, we can help with your PostgreSQL database needs. Whether it’s a new setup, administration or migration, we’ve been doing it for over a decade.

About Tera Shift

Tera Shift Ltd is a software and data consultancy. We help companies with solutions for development, data services, analytics, project management, and more. Our services include:

  • Working with companies to build best-practice teams
  • System design and implementation
  • Data management, sourcing, ETL and storage
  • Bespoke development
  • Process automation

We can also advise on how custom solutions can help your business grow, by using your data in ways you hadn’t thought possible.

About the author

Ben Shaw (B. Eng) is the Director of Tera Shift Ltd. He has over 15 years’ experience in Software Engineering, across a range of industries. He has consulted for companies ranging in size from startups to major enterprises, including some of New Zealand’s largest household names.

Email ben@terashift.co.nz