Tuesday, August 24, 2021

Failover triggered by PostgreSQL shutdown

 


Ultra fast PostgreSQL down detection

If a user connects to Pgpool-II and one of the PostgreSQL servers is stopped by the admin, Pgpool-II detects it and triggers failover. Since this happens before the health check notices that the server is down, the failover process is pretty fast. This feature looks great at first glance.

How does Pgpool-II implement it?

How is it possible? When PostgreSQL is shut down by the admin, it sends an error with a special error code: 57P01. When Pgpool-II receives this error code, it starts the failover process.

The problem is that there are other cases where 57P01 is used. When pg_terminate_backend() is called, or when a PostgreSQL backend process is killed by a signal, exactly the same error code is sent by PostgreSQL. This confuses Pgpool-II, and it triggers failover in these cases too!

Pgpool-II 4.3 will mitigate the problem

To mitigate the problem, the upcoming Pgpool-II 4.3, supposed to be released in fall 2021, will have a new parameter: "failover_on_backend_shutdown". If it is on, Pgpool-II will behave exactly as it does now. If it is off, 57P01 will no longer trigger failover. Of course, you cannot enjoy the quick failover feature in that case. Sorry, but there is no way to both have the quick failover feature and avoid being confused by the error code, so the choice is up to you.
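The overall decision can be sketched as follows (a simplified Python model, not Pgpool-II's actual C code; only the SQLSTATE 57P01 and the failover_on_backend_shutdown parameter come from the text above, the rest is illustrative):

```python
# Sketch of the failover decision on receiving an error from PostgreSQL.
# SQLSTATE 57P01 (admin_shutdown) is sent both on an admin shutdown and on
# pg_terminate_backend()/a signal, so Pgpool-II cannot tell the cases apart.

ADMIN_SHUTDOWN = "57P01"

def should_trigger_failover(sqlstate, failover_on_backend_shutdown):
    """Return True if Pgpool-II should start failover for this error."""
    if sqlstate == ADMIN_SHUTDOWN:
        # Pre-4.3 behavior is equivalent to the parameter always being on.
        return failover_on_backend_shutdown
    return False

print(should_trigger_failover("57P01", True))   # True: quick failover
print(should_trigger_failover("57P01", False))  # False: 4.3 with the knob off
```

With the parameter off, a killed backend no longer takes the whole node out of the cluster; the trade-off is that an admin shutdown is then only detected by the slower health check.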

I hope someday PostgreSQL becomes clever enough to use distinct error codes for the admin shutdown case and the pg_terminate_backend() case.

Friday, June 18, 2021

Promoting a specified node in Pgpool-II

 


Promoting a standby node

 Pgpool-II manages a streaming replication primary node and multiple standby nodes. Suppose we shut down the primary node. Pgpool-II detects the event and starts a command called the "failover command". The failover command is a user-supplied script (usually written in shell or another scripting language), and it chooses one of the standby nodes to be promoted. A typical script chooses the "next main node", which is the next live node after the former primary node: e.g. if the former primary node is 0 and nodes 1 and 2 are standby nodes, node 1 will be chosen.
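The "next main node" choice made by a typical failover script can be sketched like this (a toy Python model for illustration only; in reality the logic lives in a user-supplied shell script driven by the node information Pgpool-II passes to it):

```python
def next_main_node(failed_node, statuses):
    """Return the lowest-numbered live node other than the failed one.

    statuses[i] is True if node i is alive (a hypothetical representation
    of the cluster state, not Pgpool-II's actual data structure).
    """
    for node, alive in enumerate(statuses):
        if node != failed_node and alive:
            return node
    return None  # no live standby left to promote

# Former primary is node 0; nodes 1 and 2 are live standbys -> node 1 is chosen.
print(next_main_node(0, [False, True, True]))  # 1
```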

Another way to promote a standby node is to use "pcp_detach_node". The command is one of the control commands for Pgpool-II. pcp_detach_node can "detach" a node so that Pgpool-II changes the internal status of the specified node to down (but the node is actually up and running). In this case the failover command is also called, and the next node - node 1 in the example above - will be chosen.

Promoting a specified node

What if we want to promote node 2 instead of node 1? Actually, there had been no way to achieve this until today. The next version, Pgpool-II 4.3, supposed to be released this winter, will allow you to do it.

The existing pcp_promote_node command now has an extra argument: --switchover or -s.

pcp_promote_node --switchover 2

will do the following:

  1.  Detach the current primary node. This puts the current primary node into "down" status (but the node is actually up and running).
  2.  The failover command is triggered. Node 2 is passed to the failover command as the "new main node".
  3.  The failover command promotes node 2.
  4.  The "follow primary command" runs against node 0 and node 1. The follow primary command is a user-supplied script that turns a node into a standby node "following" the current primary.
  5.  As a result, nodes 0 and 1 are standbys following the new primary, node 2.

Conclusion

Pgpool-II 4.3 will have one of the long-awaited features: promoting a specified node. I hope this gives admins a more flexible way to manage PostgreSQL clusters.



Sunday, May 30, 2021

Dropping support for version 2 protocol


The upcoming PostgreSQL 14, supposed to be released by the end of 2021, will remove support for the version 2 frontend/backend protocol (in this article I refer to it as "v2" or the "v2 protocol").

What is the version 2 frontend/backend protocol?

It is one of the communication protocols used between clients and PostgreSQL. For example, to send a query to PostgreSQL, a client first sends the letter 'Q', then the actual SQL query string. The letter 'Q' indicates that the client wants to send a query to PostgreSQL. In 2003, PostgreSQL 7.4 was released and started to support the newer v3 protocol. From PostgreSQL 7.4 to PostgreSQL 13, both the older v2 protocol and the newer v3 protocol are supported. Since v3 is better than v2, especially in that v3 supports the extended query protocol, which allows the use of prepared statements, the v2 support is kept only for backward compatibility's sake. Since PostgreSQL 7.4 was released in 2003, PostgreSQL has kept v2 support for 18 years!
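To make the difference concrete, here is how a simple query message is laid out in each protocol version (a Python sketch based on the published frontend/backend protocol descriptions: v2 sends 'Q' followed directly by the NUL-terminated query, while v3 inserts a 4-byte big-endian length field that counts itself but not the type byte):

```python
import struct

def v2_query_message(sql):
    # v2: type byte 'Q' followed directly by the NUL-terminated query string.
    return b"Q" + sql.encode() + b"\x00"

def v3_query_message(sql):
    # v3: 'Q', then a 4-byte big-endian length (including the length field
    # itself but not the type byte), then the NUL-terminated query string.
    body = sql.encode() + b"\x00"
    return b"Q" + struct.pack("!I", 4 + len(body)) + body

print(v2_query_message("SELECT 1"))  # b'QSELECT 1\x00'
print(v3_query_message("SELECT 1"))  # b'Q\x00\x00\x00\rSELECT 1\x00'
```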

Any trouble caused by dropping v2 protocol?

 Good question. If you are using an older driver and the driver uses v2 with your application, you may get into trouble because PostgreSQL 14 (and newer) will not accept your application any more. For example, if you are using an older version of the JDBC driver (older than version 9.4.1210, released 2016-09-07; "fix: support cases when user-provided queries have 'returning' PR#488 c3d8571" is the commit message which drops the v2 support), chances are some of the application's code paths may choose the v2 protocol. If you are using such an older driver, I recommend testing whether your application can connect to PostgreSQL 14 with that driver. As of today, PostgreSQL 14 beta1 is available.

What about Pgpool-II?

Pgpool-II supports both the v2 and v3 protocols as of Pgpool-II 4.2. For the upcoming Pgpool-II 4.3, which is supposed to be released by the end of 2021, developers have not yet decided whether the v2 support will be dropped. Dropping the v2 support code would remove a non-negligible amount of code from Pgpool-II and make developers' lives a little easier. If nobody is using the v2 protocol with Pgpool-II, we can happily remove the v2 code. If you are still using the v2 protocol with Pgpool-II, please reply to the mailing list thread.

 


Monday, April 19, 2021

Visibility with query cache

 


Pgpool-II provides a "query cache" feature which speeds up SELECTs to the database. It stores query results in the specified storage (either shared memory or memcached). If the same query (more precisely, a SELECT query with identical text) arrives, Pgpool-II returns the cached result without accessing the database. This is very fast because:

  • Query results are fetched from memory, rather than from the database's (typically on-disk) storage
  • No CPU or memory is used on the database server
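Enabling the feature takes only a few pgpool.conf settings; a minimal sketch (parameter names as documented in the Pgpool-II manual, values purely illustrative):

```
# pgpool.conf: enable the in-memory query cache
memory_cache_enabled = on
memqcache_method = 'shmem'        # cache storage: 'shmem' or 'memcached'
memqcache_total_size = 64MB       # total cache size when using shared memory
```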

Is there any pitfall?

Whether a SELECT result is fetched from the database or from the cache is transparent to the client. That is, you will not notice the difference except for the response speed. OK, sounds great. But are there any pitfalls to using the query cache? One is the cache hit rate. Since any modification to an underlying table clears the cache, a system with frequent updates is not suitable for this feature. According to the manual, the query cache is not recommended on a system where the expected cache hit rate is lower than 70%.

Row visibility rules

Another pitfall you may need to care about is the row visibility rules. They define how table rows are visible to other sessions (so you don't need to worry about this if you are the only user of the database). PostgreSQL provides several transaction isolation levels, and the row visibility rules vary depending on the transaction isolation level used in the transaction. For example, with the read committed isolation level (the default), other users will not see the new data of rows updated by your transaction until it commits.

 

 However, with the repeatable read isolation level, other users will not see the new data of rows updated by your transaction even after the transaction commits. The new data is visible to them only after they start a new transaction.


Query cache could break the visibility rule

Pgpool-II's query cache does not follow the visibility rules. A cache entry is created when a transaction commits. The difference is that once a cache entry is created, it can be seen by any transaction. This is fine with the read committed isolation level, but with the repeatable read isolation level it breaks the visibility rules.

Suppose the query cache feature is used at the repeatable read transaction isolation level. This time, after an UPDATE, a SELECT is issued to create a cache entry.


 

Unlike in the previous figure, the user's transaction will see the new data once your transaction commits, because the new data is registered in the cache when the transaction commits and the cached data can be seen by anyone. This can happen not only at the repeatable read isolation level but also at the serializable isolation level.
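The anomaly can be modeled with a toy cache in front of a versioned table (a deliberately simplified Python sketch, not Pgpool-II's implementation): session A takes a repeatable read snapshot, session B updates and commits (which registers the result in the cache), and A's next SELECT is served the committed data from the cache even though its snapshot predates the commit:

```python
# Toy model: a "table" of committed values, per-session snapshots, and a
# query cache that is populated at commit time and consulted before snapshots.
class QueryCacheDemo:
    def __init__(self):
        self.committed = {"balance": 100}   # table contents
        self.cache = {}                     # query text -> cached result

    def begin_repeatable_read(self):
        # A repeatable read snapshot: a frozen copy of committed data.
        return dict(self.committed)

    def update_and_commit(self, key, value, query):
        self.committed[key] = value
        # The cache entry is registered when the transaction commits.
        self.cache[query] = value

    def select(self, snapshot, query, key):
        # The cache is checked first and is visible to ANY transaction --
        # this is what breaks the visibility rules.
        if query in self.cache:
            return self.cache[query]
        return snapshot[key]

db = QueryCacheDemo()
snap_a = db.begin_repeatable_read()      # session A: snapshot sees 100
db.update_and_commit("balance", 200, "SELECT balance FROM t")  # session B commits
# Under repeatable read, session A should still see 100, but the cache
# serves the newer committed value:
print(db.select(snap_a, "SELECT balance FROM t", "balance"))  # 200
```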

This is somewhat similar to a phenomenon you may see while using COPY FREEZE. Suppose the transaction isolation level is repeatable read and you are copying data into a table using COPY FREEZE in a transaction. Another transaction that started before yours will see the copied data once your transaction commits. This is because "frozen" rows can be seen by anyone.

Conclusion

Pgpool-II's query cache is a useful feature for read-intensive, busy systems. However, it may cause certain read anomalies at the repeatable read and serializable transaction isolation levels, so please use the feature carefully at those isolation levels.

Tuesday, March 9, 2021

Speeding up pgbench using COPY FREEZE

 


 What is pgbench

Pgbench is a simple tool for performance testing of PostgreSQL. Since pgbench comes with the PostgreSQL distribution and is easy to use, many users and developers use it regularly. The original version of pgbench was written by me in 1999 and then contributed to PostgreSQL. Since then many people have enhanced the tool.

Loading data takes long time

Pgbench has a feature to load initial data into the benchmark tables (the -i or --initialize option). By specifying a scaling factor (the -s option), users can create as much data as they want. For example, scaling factor 100 creates 10 million rows. When I executed the pgbench command with scaling factor 100, it took 70 seconds on my laptop. You can see how long each part of the process took:

 done in 70.78 s (drop tables 0.21 s, create tables 0.02 s, client-side generate 12.42 s, vacuum 51.11 s, primary keys 7.02 s).

As you can see, most of the time was spent in vacuum: 51.11 seconds.

Saturday, February 6, 2021

What does "pool" mean in Pgpool-II?

 


 Pgpool-II started its history as a connection pooler

The word "pool" in Pgpool-II has traditionally meant "connection pooling". Actually, the software was born as connection pooling software in 2003. However, 18 years later, Pgpool-II is no longer just connection pooling software. Rather, connection pooling is just one of Pgpool-II's many features.

 More features have been added to Pgpool-II

 In 2004 a native replication feature (statement-based replication) was added. In 2010 the ability to manage PostgreSQL's streaming replication arrived. In 2012 Watchdog, a high-availability feature for Pgpool-II itself, was added. Moreover, automatic failover and load balancing are possible.

Now "pool" means "pool of PostgreSQL"

With all these features, Pgpool-II is now effectively PostgreSQL cluster management software. So I decided to redefine the meaning of "pool" to be "pool of PostgreSQL servers". In this direction I have updated the intro chapter of the Pgpool-II manual for the next release (Pgpool-II 4.3).

Pgpool-II manages a pool of PostgreSQL servers to achieve some features that are not available with single PostgreSQL installation.
An HTML compiled version of the chapter can be viewed here.

I hope this change gives people a more appropriate view of this software.

