Release note and commit log of Pgpool-II 4.3
In this blog I would like to explain some of the new features of the latest Pgpool-II major version, 4.3, which was released in December 2021.
The official release note is here. Also, if you have the source code of Pgpool-II, you can find the commit id for each item in the release note source code (written in SGML), so you can look into the commit log for a deeper understanding of each change.
Executive summary of the changes
Here is an executive summary of the changes in 4.3.
- Configuration file format has been changed for easier configuration.
- More information is added to utility commands to make admin's life easier.
- Allow the watchdog cluster to keep quorum even if it loses some of its watchdog nodes.
- Allow to choose PostgreSQL node for promotion.
- Allow to choose the least replication delay node in load balancing.
- Catch up with the latest SQL parser of PostgreSQL 14, as usual.
- Others.
I will explain them one by one.
Configuration file format has been changed for easier configuration
Pgpool-II used to provide a specific configuration example file for each clustering mode. For example, you needed to start with pgpool.conf.sample-stream to configure streaming replication mode. In 4.3 there is only one configuration example file. You just need to set the "clustering_mode" parameter to the desired clustering mode to start configuring (of course, you still need to add more information, for example backend node information and assorted account information).
To make this possible, all configuration parameters are commented out, and the default value of each parameter is carefully chosen so that users can leave them as they are (commented out). This is similar to PostgreSQL's configuration file (postgresql.conf).
We advise users who plan to migrate to 4.3 from older versions to inspect the default value changes. They are explained in the release note.
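To illustrate, a minimal pgpool.conf for streaming replication mode might look like the sketch below. The hostnames, ports, and user name are placeholders, not values from the release note:

```
# pgpool.conf excerpt: minimal settings for streaming replication mode.
# Hostnames, ports, and the sr_check user below are placeholders.
clustering_mode = 'streaming_replication'

# Primary PostgreSQL node
backend_hostname0 = 'pg1'
backend_port0 = 5432
backend_weight0 = 1

# Standby PostgreSQL node
backend_hostname1 = 'pg2'
backend_port1 = 5432
backend_weight1 = 1

# User for streaming replication checks
sr_check_user = 'pgpool'
```

Every other parameter can stay commented out and fall back to its default, just as with postgresql.conf.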
More information is added to utility commands to make admin's life easier
Some utility commands are enhanced so that more useful information is shown.
"show pool_nodes" and "pcp_node_info" now show actual PostgreSQL status.
test=# show pool_nodes;
node_id | hostname | port | status | pg_status | lb_weight | role | pg_role | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change
---------+----------+-------+--------+-----------+-----------+---------+---------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
0 | /tmp | 11002 | up | up | 0.333333 | primary | primary | 0 | true | 0 | | | 2022-01-31 10:10:35
1 | /tmp | 11003 | up | up | 0.333333 | standby | standby | 0 | false | 0 | streaming | async | 2022-01-31 10:10:35
2 | /tmp | 11004 | down | up | 0.333333 | standby | standby | 0 | false | 0 | streaming | async | 2022-01-31 10:18:47
(3 rows)
In the example, "pg_status" and "pg_role" are the new fields. You might notice that the "status" field of node 2 is "down" while its "pg_status" field is "up". This is because the node was detached by the pcp_detach_node command. The command only changes the status recognized by Pgpool-II; it does not touch the PostgreSQL backend on node 2. Before 4.3, the command simply showed the node status as "down" in such cases, so admins had to check PostgreSQL's status separately to find out whether the backend node was actually down.
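For reference, the detached state above can be produced with pcp_detach_node; a sketch, assuming the pcp port 11001 from these examples and passwordless pcp access (-w):

```shell
# Detach backend node 2 from Pgpool-II's view only;
# the PostgreSQL instance itself keeps running.
pcp_detach_node -p 11001 -w -n 2

# Re-attach the node once maintenance is done.
pcp_attach_node -p 11001 -w -n 2
```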
These fields are added to pcp_node_info as well.
$ pcp_node_info -p 11001 -w -a -v
Hostname : /tmp
Port : 11002
Status : 1
Weight : 0.333333
Status Name : waiting
Backend Status Name : up
Role : primary
Backend Role : primary
Replication Delay : 0
Replication State : none
Replication Sync State : none
Last Status Change : 2022-01-31 10:54:26
Hostname : /tmp
Port : 11003
Status : 1
Weight : 0.333333
Status Name : waiting
Backend Status Name : up
Role : standby
Backend Role : standby
Replication Delay : 0
Replication State : streaming
Replication Sync State : async
Last Status Change : 2022-01-31 10:54:26
Hostname : /tmp
Port : 11004
Status : 3
Weight : 0.333333
Status Name : down
Backend Status Name : up
Role : standby
Backend Role : standby
Replication Delay : 0
Replication State : streaming
Replication Sync State : async
Last Status Change : 2022-01-31 10:54:38
pcp_proc_info also has some new fields and additional information.
"Start time" now has additional information such as "(2:52 before process restarting)". This is shown only when the "status" field is "Wait for connection", which means the pgpool process is completely idle and can accept a new connection. In this case, if child_life_time is non-zero, the process will restart, and the new information indicates how much time remains before it does.
"Backend connection time" is the timestamp when the connection to the backend was created. "Client connection time" is the timestamp when the last client connected. "Client idle duration" is the duration in seconds for which the client has been idle or idle in transaction; next to it, the remaining duration before the client gets disconnected due to client_idle_limit is shown. "Client disconnection time" is the timestamp when the last client disconnected from the process. If a client is still connected to the process (thus "status" is "Idle" or "Idle in transaction"), this field is empty.
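The two countdowns come from the child_life_time and client_idle_limit parameters. A sketch of the relevant pgpool.conf settings; the 300-second values are illustrative (they are merely consistent with the "2:52" and "2:22" countdowns in the sample output below):

```
# pgpool.conf excerpt: the timers behind the pcp_proc_info countdowns.
# An idle pgpool child process is restarted after this many seconds
# without a client connection. 0 disables the timer.
child_life_time = 300

# A client that stays idle in a transaction for this many seconds
# is disconnected. 0 disables the limit.
client_idle_limit = 300
```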
$ pcp_proc_info -p 11001 --verbose
Database : test
Username : postgres
Start time : 2021-09-28 04:16:00 (2:52 before process restarting)
Client connection count : 1
Major : 3
Minor : 0
Backend connection time : 2021-09-28 04:16:16
Client connection time : 2021-09-28 04:16:16
Client idle duration : 0
Client disconnection time : 2021-09-28 04:16:33
Pool Counter : 1
Backend PID : 30795
Connected : 0
PID : 30750
Backend ID : 0
Status : Wait for connection
Database : test
Username : postgres
Start time : 2021-09-28 04:16:00 (2:52 before process restarting)
Client connection count : 1
Major : 3
Minor : 0
Backend connection time : 2021-09-28 04:16:16
Client connection time : 2021-09-28 04:16:16
Client idle duration : 0
Client disconnection time : 2021-09-28 04:16:33
Pool Counter : 1
Backend PID : 30796
Connected : 0
PID : 30750
Backend ID : 1
Status : Wait for connection
Database : test
Username : kawamoto
Start time : 2021-09-28 04:16:00
Client connection count : 0
Major : 3
Minor : 0
Backend connection time : 2021-09-28 04:16:03
Client connection time : 2021-09-28 04:16:03
Client idle duration : 158 (2:22 before client disconnected)
Client disconnection time :
Pool Counter : 1
Backend PID : 30763
Connected : 1
PID : 30751
Backend ID : 0
Status : Idle
Database : test
Username : kawamoto
Start time : 2021-09-28 04:16:00
Client connection count : 0
Major : 3
Minor : 0
Backend connection time : 2021-09-28 04:16:03
Client connection time : 2021-09-28 04:16:03
Client idle duration : 158 (2:22 before client disconnected)
Client disconnection time :
Pool Counter : 1
Backend PID : 30764
Connected : 1
PID : 30751
Backend ID : 1
Status : Idle
To be continued...
This blog is already too long. I will continue from "Allow the watchdog cluster to keep quorum even if it loses some of its watchdog nodes" in my next blog entry.