Postgres again elected DBMS of the Year in 2023, but I'm worried

Once again, Postgres has been elected DBMS of the Year by DB-Engines. It’s only a ranking, but it shows a trend. Postgres has also been recognized for several years in Stack Overflow surveys: Most popular Databases. It’s a much-appreciated DBMS, not only by developers and experienced DBAs… but also by directors, who appreciate its stability.1

Postgres has everything to attract:

  • Year after year, new features are added. I’m thinking of parallelization, partitioning, logical replication…
  • It has always proved its reliability.
  • Its community keeps growing. Conferences dedicated to Postgres break records every year.2
  • It has even become a standard for other software: vendors add support for the Postgres protocol to facilitate integration: Aurora, AlloyDB, QuestDB…
  • It’s one of the rare community projects of its size. We’ve seen that it’s not enough for a project to be open source; you also need to be able to rely on the community and its ecosystem.
  • It’s supported by the big players: the GAFAMs employ many committers and contributors. They also sponsor conferences. For example, the next pgDay Paris is sponsored by Microsoft.

However, I’m a little worried about the future. Not for PostgreSQL itself. Bruce Momjian has given several talks on this subject.

It was even the subject of the opening keynote at the latest PGConf Europe: Simon Riggs: The Next 20 Years (PGConf.EU 2023).

No, I’m more worried about the loss of knowledge in the DBA job.

How useful will Postgres be if you don’t know how to use it?

History

In the past, skills were highly specialized: developers, testers, development DBAs, production DBAs, system engineers, storage engineers, backup engineers…

Each specialty evolved in its own sphere, which complicated collaboration and lengthened development cycles.

This organization has been replaced by a new way of working: full-stack developers, DevOps/SRE, data “somethings” (engineer, analyst, steward…).

Skills have been spread more evenly across the various functions, and I come across DBAs less and less often.

Instead, the database is managed by developers. Or, if you’re lucky, by data engineers or data analysts.

After several years, I’m still struggling to define these professions.

The bottom line: good knowledge of Python, setting up pipelines, assembling a sort of Lego set with lots of tools (Airflow, dbt…), then sending the data to Redshift, BigQuery, Snowflake…

However, I have the feeling that knowledge of SQL is becoming poorer due to the layers of abstraction.

From the moment you start working with data, the first skill you should have is SQL.

What are we forgetting?

I have the feeling that, year after year, the DBA job is being forgotten.

As a reminder, the DBA is at the crossroads of several paths:

  • Pure database skills:
    • Database design.
    • Mastery of SQL.
    • DBMS expertise: knows how Postgres works (vacuum, checkpoint, etc.) and understands concurrency mechanisms (MVCC, locks, etc.); see the sketch after this list.
    • Keeps an eye on technical innovations: the Postgres I knew in my early days is a far cry from today’s Postgres.
  • But also more transverse skills:
    • A good knowledge of system administration to investigate performance problems and size resources.
    • Development skills to understand the needs of developers and be able to guide them.
    • Computer science fundamentals.
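
To make the DBMS-expertise point concrete, here is a minimal sketch of the kind of routine checks a DBA runs. It uses only standard Postgres statistics views; pg_blocking_pids() requires Postgres 9.6 or later, and the LIMIT is arbitrary:

```sql
-- Tables accumulating dead tuples: is autovacuum keeping up?
SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Sessions currently blocked on a lock, and who is blocking them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```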

OK, but the cloud!

You might think that the cloud has reshuffled the cards and that we no longer need DBAs. In the list I gave above, which skills are addressed by the cloud?

The cloud solves some of our needs: hosting, maintenance, networking, monitoring (which often needs to be supplemented with an APM tool such as Datadog). But it also brings other problems:

  • You need to add a FinOps skill to control and optimize expenses.
  • It’s sometimes a black box, and it can be difficult to investigate performance issues3.
  • This excellent article by Markus Winand also highlights other drawbacks: Sometimes Clouds Bring Rain
    • Changing cloud providers is difficult. I’d even say it’s deliberate on their part.
    • “Just because it is cheap today doesn’t mean it will be cheap forever”.
    • You need an exit plan: avoid proprietary services, keep a small number of services.

Once you’re in the cloud, who’s going to take care of modeling, query optimization, indexing, keeping up with DBMS developments4, performance investigation, understanding locking problems, and developer support?

This is confirmed in my audits, where I often see very simple issues (fixes are sketched after this list):

  • No primary keys.
  • No indexes, even in very simple cases.
  • No respect for normal forms, even the most basic ones. JSON hasn’t helped.
  • Spaghetti queries.
  • SQL-89-style joins, even though the JOIN keyword has existed for over 30 years.
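
For illustration, here is a minimal sketch of the corresponding fixes. The orders and customers tables and their columns are hypothetical, purely for the example:

```sql
-- Hypothetical tables: orders(order_id, customer_id, total),
-- customers(customer_id, name)

-- Add the missing primary key
ALTER TABLE orders ADD PRIMARY KEY (order_id);

-- Index a column used by very common filters
CREATE INDEX ON orders (customer_id);

-- SQL-89-style implicit join...
SELECT c.name, o.total
FROM customers c, orders o
WHERE c.customer_id = o.customer_id;

-- ...rewritten with the explicit JOIN syntax introduced by SQL-92
SELECT c.name, o.total
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id;
```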

If there’s one thing to remember: the cloud can’t get rid of the DBA. Just because you can get the keys to an airplane doesn’t mean you know how to fly it.

On this subject, there’s an interesting article: How many DBAs should a company hire?

We forget the DBA job

We might naively think that these tasks are assigned to expert DBAs, but I have more and more doubts. I have the feeling that “we” are losing the knowledge of a whole job. I’ve come across experienced developers who didn’t even know the DBA job existed, and recruiters asking me whether, as a DBA, I knew how to optimize queries and whether I knew SQL. That’s my job! It’s a bit like asking a plumber if he knows how to replace a tap!

What’s even more alarming is that I’m afraid there are also CTOs who lack this knowledge of data.

The consequence is that when performance problems arise, all we do is scale up the instances. With the cloud, the invoice grows linearly with the size of the instance. But if the load grows exponentially, the database has to be scaled accordingly, and the bill with it. You end up with an exponential bill.

When you can no longer cope with this, you blame the tool or the model, so you change DBMS or move to NoSQL. Spoiler: it won’t solve your problems.

We forget the past

For a while, I was thinking of writing a book about query optimization, the common mistakes I see and so on.

Looking at my bookshelf, it’s full of such books. Most of them are between 10 and 30 years old, and the content is still relevant. What’s the point of another book if it’s not known?

Another example comes to mind: data warehouse (DWH). This term is becoming overused. The construction of a DWH is complex, and you have to go through a modeling phase in order to store the data correctly. This ensures good performance and makes it easier to write analytical queries.

Nowadays, I regularly come across data warehouses that simply send all the data to a specialized DBMS (Redshift, BigQuery…), without any modeling work and with very poorly written queries, sometimes generated because the users don’t know how to write SQL.

These services cost a lot of money and don’t work miracles without in-depth modeling and optimization work.
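
As a rough illustration of what that modeling phase produces, here is a minimal star-schema sketch. The fact and dimension tables (fact_sales, dim_date, dim_product) are hypothetical:

```sql
-- Dimension: one row per calendar day
CREATE TABLE dim_date (
    date_id integer PRIMARY KEY,  -- e.g. 20240115
    day     date    NOT NULL,
    month   integer NOT NULL,
    year    integer NOT NULL
);

-- Dimension: one row per product
CREATE TABLE dim_product (
    product_id integer PRIMARY KEY,
    name       text NOT NULL,
    category   text NOT NULL
);

-- Fact table: one row per sale, referencing the dimensions
CREATE TABLE fact_sales (
    date_id    integer NOT NULL REFERENCES dim_date,
    product_id integer NOT NULL REFERENCES dim_product,
    quantity   integer NOT NULL,
    amount     numeric(10,2) NOT NULL
);

-- Analytical queries then stay simple to write:
SELECT d.year, p.category, sum(f.amount) AS revenue
FROM fact_sales f
JOIN dim_date d USING (date_id)
JOIN dim_product p USING (product_id)
GROUP BY d.year, p.category;
```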

Remember, those books on DWH and SQL are 10 to 30 years old. Back then, there was no such thing as the cloud, SSDs or multi-core CPUs. And yet, we knew how to build DWHs and create performant websites…

We shouldn’t let the cloud make us forget all this heritage. Otherwise, we run the risk of paying a very high price for the technical debt and loss of skills, plus interest.

It’s important to remember that even if SQL has evolved, and DBMSs have gained in functionalities, the fundamentals are still true.

The code changes, data remains

I had a great manager, a former DBA, who explained to developers: “You’re not here to produce code” (cue a wide-eyed developer), “you’re here to create and use data.”

When there was a tough decision to be made, for example taking more time to rework the application code or carry out a more complex migration rather than going for a “quick win”, the same manager explained:

In 5-10 years, your code will have been rewritten several times, perhaps even in a different language. For us, the data will still be there in 50 years. It’s our duty to make sure it’s still exploitable.

It takes a few minutes to make a wrong decision on a database

And sometimes several months/years to fix it.

I’ve done audits where the model was catastrophic. It had evolved from short-term solutions to other short-term solutions. “We’ll fix it later, we’ve got to get this functionality into the sprint”.

A DBA can correct the data model through a series of major migrations (but you still need a DBA, which is difficult if you don’t even know the job exists…). Beyond that, the problem becomes rewriting a large part of the application code (which is often in the same state as the database).

The complexity of fixing it is sometimes quadratic or exponential, to the point where it’s better to throw everything away and start again.

At that point, we are faced with a few options:

  • Increase the instance size.
  • Sometimes, this isn’t enough. As I explain: “you can multiply the instance size by 10, but you can only play this card once”. Note that this option is useless if the process cannot be parallelized.
  • Throw everything away and start again. Here again, the cost, or the loss, can be insane.

A DBA’s advice at a few key points in the project’s life would have been enough to avoid going down the wrong path.

You can’t build a castle on sand. An anecdote comes to mind: someone I know told me the story of a building that was inspected at delivery time:

A sample was taken from the walls and there wasn’t enough cement; the building, brand new, could have collapsed. It had just been completely finished, with electricity, plumbing, woodwork… everything was done. It had to be completely razed. There’s insurance for construction, but not for software development… I think some companies don’t survive when the technical debt is too high.

On cloud usage

The cloud has many advantages:

  • You can quickly deploy databases with a relatively clean installation.
  • The cloud provider will force you to stick with supported versions.
  • " Eco-friendly “, it allows you to share physical resources. But it’s also a weakness. Easy access to these resources can also lead to waste. You can easily deploy a hundred instances. Whereas with an on-premise infrastructure, you have to deal with the physical resources available: server power, electrical power, rack space…
  • You pay for what you use, and it’s easy to see how much a service costs. Say a single query is responsible for 80% of the bill: that’s an incentive to optimize it, since the savings are immediate (see the sketch after this list). But who actually looks? I don’t think there are many FinOps out there, and if there are, are they thinking about optimizing the database?
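
On Postgres, such a check could rely on the pg_stat_statements extension; this is a minimal sketch assuming the extension is installed and preloaded, and Postgres 13 or later for the total_exec_time column:

```sql
-- Requires: CREATE EXTENSION pg_stat_statements;
-- and pg_stat_statements in shared_preload_libraries.
-- Which queries dominate total execution time?
SELECT query,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round((100 * total_exec_time
              / sum(total_exec_time) OVER ())::numeric, 2) AS pct_of_total
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```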

But there are also drawbacks:

  • Loss of sovereignty, and legal risks. How can you guarantee that your data is not stored outside France? If you’ve chosen a managed service, how can you get out of it if legislation requires your data to stay on French territory, or in another specific country?
  • Loss of skills.
  • Dependence on a cloud provider: although, on this point, new “cloud-agnostic” offers are emerging: companies are building a cloud on top of another cloud. I’m thinking in particular of Aiven, EDB BigAnimal, Crunchy Bridge, and I’m certainly missing some…5
  • Costs skyrocket when the infrastructure is large. Some companies are starting to leave the cloud; Basecamp comes to mind.

What we need to change

  • Give value back to data: some companies are reluctant to hire DBAs or sign support contracts for their databases. What is the cost of service downtime, data corruption or database loss?
  • Reinvest in core database skills: SQL, modeling6. On this subject, I really like Markus Winand’s title: SQL Renaissance Ambassador.
  • Take a closer look at the use of resources: digital energy consumption is high, and if servers and electricity were more expensive, we’d be forced to optimize.
  • Monitor database spending, both in the cloud and on-premise. To do this, you need cost indicators.
  • Don’t forget to ask DBAs to anticipate problems and look into performance issues: I’ve audited databases in a disastrous condition, which could have been avoided if a DBA had been involved at the design stage. On performance problems alone, I’ve been able to cut cloud invoices by a factor of 10, and to reduce the average load from 80% to 5% with regular checks.
  • Don’t resign yourself to paying large cloud bills rather than investing in DBA skills.

To conclude

This is a less technical article than you’re used to reading on my blog, but it’s one of the most important I’ve ever written. I’ve been using Postgres for over ten years now, and I can see that the trend is not going in the right direction. I hope awareness will grow, toward a more sustainable database future.


  1. It has been added to the list of open source software recommended for the French public administration, the Socle interministériel de logiciels libres.↩︎

  2. 720 people attended PGConf.EU 2023. ↩︎

  3. AWS EBS latency and IOPS: The surprising truth

    Ultimately, due to AWS’ opacity, there is simply no way to know how much throughput (from the physical disks and from the network that sits in-between) to expect for a given EBS volume. Provisioned IOPS only offer a partial solution to this issue, at a higher hourly cost.

     ↩︎
  4. Today’s Postgres is a far cry from the Postgres I began with:

    • New features related to the SQL standard: JSONPath, for example. Markus Winand published a great website on this subject: Modern SQL
    • New features: parallelism, partitioning, new execution nodes, logical replication…
    • As a result, the number of parameters increased.
    • New extensions.
     ↩︎
  5. I haven’t worked with these companies, so I don’t have any feedback. It’s worth noting that these companies hire developers who contribute to Postgres. ↩︎

  6. A few years ago, an article was written based on a conference talk whose title is so true: Database constraints in Postgres: The last line of defense. Here is the video: Constraints: a Developer’s Secret Weapon - Will Leinweber, and the slides.↩︎

Adrien Nayrat
Freelance PostgreSQL DBA

Passionate about Postgres and open source