- Improvements to replication
- Parallel query
- Index-organized tables
- Better partitioning
- Better full-text search
- Logical streaming replication
What we need to focus on are the reasons why people don't use PostgreSQL at all. Only by exploiting new markets -- by pushing Postgres into places which never had a database before -- do we grow the future PostgreSQL community. And there are a bunch of ways we are failing new users.
For example, listen to Nelson Elhage, an engineer at Stripe:
"I love Mongo's HA story. Out of the box I can build a 3-node Mongo cluster with a full replica set. I can add nodes, I can fail over, without losing data."

Wouldn't it be nice if we could say the same thing about Postgres? But we can't.
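To see what "out of the box" means here, this is roughly what standing up that 3-node replica set looks like in Mongo (a sketch, not taken from the quote; the ports and paths are hypothetical):

```shell
# Start three mongod instances as members of one replica set.
mongod --replSet rs0 --port 27017 --dbpath /data/n1 &
mongod --replSet rs0 --port 27018 --dbpath /data/n2 &
mongod --replSet rs0 --port 27019 --dbpath /data/n3 &

# One command initiates the set; election and failover are automatic
# from this point on.
mongo --eval 'rs.initiate({_id: "rs0", members: [
  {_id: 0, host: "localhost:27017"},
  {_id: 1, host: "localhost:27018"},
  {_id: 2, host: "localhost:27019"}]})'
```

That's the whole setup: one flag per node and one `rs.initiate()` call.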
If we're looking for a #1 PostgreSQL development priority, this is it:
We need a "scale it now" button.
This is where we're losing ground to the new databases. In every other way we are superior: durability, pluggability, standards-compliance, query sophistication, everything. But when a PostgreSQL user outstrips the throughput of a single server or a single EC2 instance, our process for scaling out sucks. It's complicated. It has weird limitations. And most of all, our scale-out requires advanced database expertise, which is expensive and in short supply.
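For contrast, here is a hedged sketch of what adding a single streaming-replication standby takes in 9.x-era PostgreSQL. Hostnames, users, and paths are hypothetical, and this assumes you have already edited `postgresql.conf` (`wal_level`, `max_wal_senders`) and `pg_hba.conf` on the primary by hand:

```shell
# 1. On the new standby: clone the primary's data directory.
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/pgsql/standby -P

# 2. Write a recovery.conf telling the standby where to stream WAL from.
cat > /var/lib/pgsql/standby/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
EOF

# 3. Start the standby.
pg_ctl -D /var/lib/pgsql/standby start

# Failover is still manual: promote a standby with "pg_ctl promote",
# then repoint every remaining node yourself -- the step Mongo automates.
```

And that's the *easy* path; it says nothing about connection routing, fencing the old primary, or rebuilding it as a new standby afterward.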
We need some way for users to go smoothly and easily from one Postgres node to three and then to ten. Until we do that, we will continue to lose ground to databases which do a better job at scaling, even if they suck at everything else.
I don't know what our answer to scaling out will be: streaming logical replication, Postgres-XC, TransLattice, Hadapt, something else ... I don't know. But if we don't answer this vital question, all our incremental improvements will amount to naught.