Sunday, December 27, 2020

Timeouts in Pgpool-II connections


There are several timeout parameters in Pgpool-II. In this blog I will explain the parameters used for connection management. Below is a figure describing a few of them. Pgpool-II spawns child processes (the number of processes is defined by the num_init_children parameter) at startup time. Each child process takes care of client connections and backend connections to the PostgreSQL database.

First of all, there are two kinds of connections: one between clients and Pgpool-II, and the other between Pgpool-II and PostgreSQL. So you should recognize which parameter affects which connection.

Child_life_time

This parameter controls the lifetime of a Pgpool-II child process. Since connections to/from clients and PostgreSQL live in the process, all connections disappear when the process exits. A benefit of enabling the parameter is freeing memory used by the Pgpool-II process. For example, Pgpool-II needs to allocate memory to hold message data sent from clients, which could be large if table rows are wide. Of course the memory used by the message is freed after the communication is done, but that does not actually return memory to the operating system. So occasional process exit and restart may be good for applications that handle large messages. Note that when a Pgpool-II process exits, the corresponding PostgreSQL backend processes also exit.

The timer for the parameter starts when no connections from clients exist. On a busy system the timer may never be triggered because the next client connection arrives as soon as the former client exits. In that case you may want to use the child_max_connections parameter.
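As an illustration, the relevant pgpool.conf settings might look like this (the values are hypothetical, not recommendations):

# pgpool.conf
num_init_children = 32   # number of pre-forked Pgpool-II child processes
child_life_time = 300    # terminate and restart an idle child after 300 seconds (0 disables)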

If you want to know how old a process is, you can use the pcp_proc_info command or the "show pool_processes" SQL command and examine the "start time" field.
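For example (the host and PCP port 9898 here are assumed defaults):

# From the shell, via the PCP interface
pcp_proc_info -h localhost -p 9898 -U postgres

# Or from a psql session connected to Pgpool-II
show pool_processes;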

Child_max_connections

If the number of connections from clients exceeds this parameter, the Pgpool-II child process exits and restarts. Even if your system is too busy for child_life_time to trigger, this parameter works because it simply counts the number of connections from clients.

Unfortunately there are currently no metrics showing how many connections from clients have been made to a Pgpool-II process.
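For illustration, the setting in pgpool.conf (the value is hypothetical):

child_max_connections = 1000   # restart a child after it has handled 1000 client connections (0 disables)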

Client_idle_limit

If a client does not send the next query within the period set by this parameter, the connection to the client is disconnected. Note that the timer starts after the previous query receives its result from PostgreSQL. Therefore the query below does not trigger the disconnection, even with client_idle_limit = 10 (seconds), until the client receives the result.

SELECT pg_sleep(60);
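The corresponding pgpool.conf setting would look like this (matching the example above):

client_idle_limit = 10   # disconnect a client that stays idle for more than 10 seconds (0 disables)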

Connection_life_time

Unlike the parameters above, this one controls the lifetime of cached connections to the PostgreSQL backends. Note that Pgpool-II keeps up to max_pool connections to the PostgreSQL backends per child process. The timer is checked against those backend connections when the Pgpool-II process returns to the idle state (waiting for connections from clients). So if a client occupies a connection for a long time, the cached connections may remain longer than the period specified by the parameter.

If you want to know how old the connection caches are, you can use the pcp_proc_info command or the "show pool_processes" SQL command and examine the "Creation time" field.
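For illustration, the related pgpool.conf settings (the values are hypothetical):

max_pool = 4                 # connection cache slots per child process
connection_life_time = 600   # discard a cached backend connection after 600 seconds (0 disables)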

Conclusion

You can control these connection lifetime parameters to save system resources. For systems that are not so busy, you can leave the parameters at their defaults. However, if your system is getting busy, you might want to tweak them. I hope this blog entry is useful for busy systems.

Wednesday, December 2, 2020

Using IVM in a docker container

Real-time, fast update of materialized views

In this blog entry I introduced the ongoing project Incremental View Maintenance (IVM), which allows real-time, fast updates of materialized views. Since that blog, IVM can now handle outer joins and self joins. The outcome of the project is public in the form of a set of patches against the git master branch of PostgreSQL. A GitHub repository for this project is public as well.

Docker image for IVM

So skilled PostgreSQL developers can test IVM today. However, it's not that easy for people who do not regularly compile and build PostgreSQL and IVM from source code. Fortunately Yugo Nagata, one of the IVM developers, has stepped up and started to provide a complete docker image for IVM. I am going to demonstrate how to use it step by step.

# Pull the docker image
docker pull yugonagata/postgresql-ivm

# Run a docker container using port mapping 15432:5432
docker run -d --name pgsql-ivm -e POSTGRES_PASSWORD=postgres -p 15432:5432 yugonagata/postgresql-ivm

# Run psql to connect to PostgreSQL and create a database named "test"
docker run -it --rm --link pgsql-ivm yugonagata/postgresql-ivm psql -h pgsql-ivm -U postgres
postgres=# create database test;
postgres=# \q

# Run pgbench to initialize the "test" database
docker run -it --rm --link pgsql-ivm yugonagata/postgresql-ivm pgbench -i -h pgsql-ivm -U postgres test

# Run psql to try IVM
docker run -it --rm --link pgsql-ivm yugonagata/postgresql-ivm psql -h pgsql-ivm -U postgres test

# Create a materialized view "mv1" for the query "select count(*) from pgbench_accounts".
# Note that "pgbench_accounts" was created by pgbench
test=# create incremental materialized view mv1 as select count(*) from pgbench_accounts;
SELECT 1

# select the result from the view
test=# select * from mv1;
 count  
--------
 100000
(1 row)

# Start a transaction
test=# begin;
BEGIN

# delete 1 row from pgbench_accounts table
test=*# delete from pgbench_accounts where aid = 1;
DELETE 1

# Check to see if mv1 reflects the deletion
test=*# select * from mv1;
 count
-------
 99999
(1 row)

# Abort the transaction
test=*# abort;
ROLLBACK

# The deletion was cancelled in mv1 as well
test=# select * from mv1;
 count  
--------
 100000
(1 row)
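For comparison, a plain (non-incremental) materialized view in standard PostgreSQL does not reflect changes until it is explicitly refreshed. A minimal sketch ("mv2" is a hypothetical view name):

# Create a plain materialized view for comparison
test=# create materialized view mv2 as select count(*) from pgbench_accounts;
SELECT 1

# After further changes to pgbench_accounts, mv1 is updated automatically,
# while mv2 keeps the old count until it is refreshed:
test=# refresh materialized view mv2;
REFRESH MATERIALIZED VIEW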

Summary

IVM has been continuously developed. Now it can be easily tested in a docker container. Please try it out!


Wednesday, September 30, 2020

Fixing language in Pgpool-II 4.2

 The Challenge

The upcoming Pgpool-II uses more appropriate language for some technical terms. This has already been done in PostgreSQL. The Pgpool-II developers think it's a good thing and decided to follow suit. The actual work was done by Umar Hayat. The challenge is that we had to fix not only the documentation but also configuration parameters and programs. As a result, his patch was quite large: 159 files were changed and over 2,000 lines were modified.

 Avoiding Some Words

We avoid some words, including "master", "slave", "black" and "white". Instead we use "main", "leader", "primary", "replica", "read", "write" and so on.

Configuration Parameters are Changed

Here is the list of changed configuration parameters.

black_function_list → write_function_list

white_function_list → read_function_list

black_query_pattern → primary_routing_query_pattern

black_memcache_table_list → cache_unsafe_table_list

white_memcache_table_list → cache_safe_table_list

ALWAYS_MASTER flag → ALWAYS_PRIMARY flag

follow_master_command → follow_primary_command
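For example, when migrating to 4.2, an existing pgpool.conf entry is renamed like this (the function names are only an illustration):

# pgpool.conf, Pgpool-II 4.1 and earlier
black_function_list = 'nextval,setval'

# pgpool.conf, Pgpool-II 4.2 and later
write_function_list = 'nextval,setval'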

Native replication mode

Pgpool-II has a mode which replicates by sending identical SQL statements to all PostgreSQL servers (thus no streaming replication configuration is necessary). This mode is called "native replication mode". In this mode, "master" (the first PostgreSQL node) and "slave" were used to represent the role of each PostgreSQL server. Pgpool-II 4.2 will use "main" and "replica" instead.

Watchdog

Watchdog is the feature that keeps Pgpool-II itself from being a single point of failure. When multiple Pgpool-II servers are deployed, they talk to each other so that they can elect a new leader server when the former leader goes down. We historically called the leader server the "master" and the rest of the servers "standby". Now Pgpool-II uses "leader" instead of "master".

Conclusion

The language cleanup may make the migration process from older versions of Pgpool-II a little bit harder for users. But this had to be done someday, and the Pgpool-II developers are proud of doing it in this release.

See the preview of the 4.2 release notes for more details (we expect the Pgpool-II 4.2 release in October 2020).


Sunday, August 30, 2020

New statistics data in Pgpool-II


The upcoming Pgpool-II 4.2 will provide new statistics data, such as the number of INSERT/UPDATE/DELETE commands issued. In this blog I will show you what it looks like.

Existing statistics

Before jumping into the new feature, let's take a look at the existing statistics data. PostgreSQL already gives you tuple access data, such as the number of tuples inserted into a database (see the pg_stat_database manual). Pgpool-II 4.1 and earlier give the number of SELECTs sent to PostgreSQL in the "show pool_nodes" command.
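For example, the tuple-level counters can be queried in standard PostgreSQL like this (the database name "test" is just an illustration):

test=# select datname, tup_inserted, tup_updated, tup_deleted from pg_stat_database where datname = 'test';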

What do the new statistics look like?

A new command called "show pool_backend_stats" provides statistics such as the number of INSERT, UPDATE and DELETE commands issued since Pgpool-II started up. So it differs from PostgreSQL's pg_stat_database in that PostgreSQL provides the number of tuples accessed, while Pgpool-II provides the number of SQL commands issued. If you want tuple-based statistics, you can always access PostgreSQL's statistics data.

 Here is an example of the new statistics data:

test=# show pool_backend_stats;
 node_id | hostname | port  | status |  role   | select_cnt | insert_cnt | update_cnt | delete_cnt | ddl_cnt | other_cnt | panic_cnt | fatal_cnt | error_cnt 
---------+----------+-------+--------+---------+------------+------------+------------+------------+---------+-----------+-----------+-----------+-----------
 0       | /tmp     | 11002 | up     | primary | 12         | 10         | 30         | 0          | 2       | 30        | 0         | 0         | 1
 1       | /tmp     | 11003 | up     | standby | 12         | 0          | 0          | 0          | 0       | 23        | 0         | 0         | 1
(2 rows) 

select_cnt, insert_cnt, update_cnt, delete_cnt, ddl_cnt and other_cnt are the numbers of SQL commands issued since Pgpool-II started up. What is "other_cnt"? It is a count of CHECKPOINT/DEALLOCATE/DISCARD/EXECUTE/EXPLAIN/LISTEN/LOAD/LOCK/NOTIFY/PREPARE/SET/SHOW/transaction commands/UNLISTEN. If an SQL command does not belong to SELECT/INSERT/UPDATE/DELETE/other, it is counted as DDL.

Please note that these counters are incremented even if an SQL command fails or is rolled back.

panic_cnt, fatal_cnt and error_cnt are the numbers of errors returned from PostgreSQL: they count PANIC, FATAL and ERROR messages respectively. These error statistics are handy because PostgreSQL does not provide them.

Summary

The "show pool_backend_stats" gives you at a glance view of whole cluster statistics on number SQL commands issued. Moreover, error counts, which does not exists in PostgreSQL could be useful for those who are interested in database or application failures.

Saturday, July 25, 2020

Snapshot Isolation Mode

Pgpool-II 4.2 is under development

Pgpool-II developers have been working hard on the upcoming Pgpool-II 4.2 release, which is supposed to come out around this September or October.

Snapshot isolation mode is coming

For the new release I have just added a new clustering mode called "snapshot isolation mode", which would be interesting not only for Pgpool-II users but also for PostgreSQL users, so I decided to write about it in this blog.

Existing clustering mode

Until Pgpool-II 4.1, we had the following clustering modes:
  • Streaming replication mode. This is the most widely used at this point. PostgreSQL's streaming replication handles the replication task in this mode.
  • Native replication mode. Until streaming replication became available, this was the most common setting. In this mode Pgpool-II issues the same data modifying statements (DML/DDL) to all the PostgreSQL servers. As a result, the same data is replicated on all PostgreSQL servers.
  • Slony mode. In this mode replication is done by Slony-I instead of streaming replication. This was popular until streaming replication came along.
  • Raw mode. No replication is done in this mode. Rarely used.
The new "snapshot isolation mode" is pretty close to the existing native replication mode. It does not use streaming replication at all; replication is done by sending DML/DDL statements to all the PostgreSQL servers. If so, what's the difference?

Atomic visibility

Before explaining that I would like to talk about the background a little bit.

PostgreSQL developers have been discussing technology that can scale to handle large data. One such technical direction is extending FDW (Foreign Data Wrapper): queries are sent to other PostgreSQL servers to replicate or shard data. The idea is similar to Pgpool-II's native replication mode because it also sends DML/DDL to servers to replicate data.
As far as replicating data goes, these approaches look good. However, there is a problem when multiple sessions try to access the data at the same time.

Suppose we have two sessions: session 1 continuously executes UPDATE t1 SET i = i + 1; session 2 continuously executes INSERT INTO log SELECT * FROM t1. Table t1 has only one row and its initial value is 0 (each UPDATE and INSERT is actually executed in an explicit transaction).
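In SQL terms, the two sessions look roughly like this (each loop iteration is one explicit transaction):

-- Session 1, repeated in a loop
BEGIN;
UPDATE t1 SET i = i + 1;
COMMIT;

-- Session 2, repeated in a loop
BEGIN;
INSERT INTO log SELECT * FROM t1;
COMMIT;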

So after sessions 1 and 2 end, the "log" table should contain exactly the same sequence of numbers, like "0, 1, 2, 3...", on both PostgreSQL servers. But see the example below (the figure illustrates the timeline of the two sessions on server 1 and server 2):



Session 1's UPDATE was executed at time t1 on server 1, while on server 2 it was executed at time t2. Session 2's INSERT was executed at time t3 on server 1 and read the value 1. Unfortunately on server 2 the INSERT was also executed at t3, before the UPDATE at t2, so table t1 had not been updated yet and the value read was 0.

So after the sessions end, we will see different sequences of numbers in the log table on server 1 and server 2. Without atomic visibility, this kind of data inconsistency can happen. Atomic visibility guarantees that the values seen on server 1 and server 2 are the same as long as they are read in the same session. In the example above, the data logged into the log table would be either 0 (t1 not yet updated/committed) or 1 (t1 already updated/committed) on both servers.

Snapshot isolation mode guarantees atomic visibility

The upcoming snapshot isolation mode differs from the existing native replication mode in that it guarantees atomic visibility. So we don't need to worry about the kind of data inconsistency shown in the example above. In fact, we have a new regression test for the snapshot isolation mode that uses essentially the same SQL as the example above. Please note that in order to use snapshot isolation mode, the transaction isolation level must be REPEATABLE READ (postgres_fdw also runs foreign transactions at this level).
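A minimal pgpool.conf sketch, assuming the mode is selected via the 4.2-style backend_clustering_mode parameter (check the 4.2 documentation for the exact name):

# pgpool.conf
backend_clustering_mode = 'snapshot_isolation'

# On the PostgreSQL side, REPEATABLE READ can be made the default,
# e.g. in postgresql.conf:
# default_transaction_isolation = 'repeatable read'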

How to achieve atomic visibility?

PostgreSQL developers already recognize the necessity of atomic visibility. Postgres-XC, a fork of PostgreSQL, invented the Global Transaction Manager to achieve it. Some PostgreSQL developers are proposing a CSN (Commit Sequence Number) approach. However, these approaches need major surgery in the PostgreSQL code.

Pgpool-II uses a completely different approach called "Pangea", proposed in an academic paper. With Pangea, no modification to PostgreSQL is necessary; you can even use older versions of PostgreSQL, like 8.0. So you can enjoy the snapshot isolation mode of Pgpool-II today with your favorite version of PostgreSQL.

In Pangea the idea is called Global Snapshot Isolation, which extends the snapshot isolation of a local server across servers. That is why we call the new clustering mode "snapshot isolation mode".

How does Pangea work?

With Pangea, the first command in each transaction must wait if other transactions are trying to commit. If no commit is ongoing, the command acquires the snapshot which defines the visibility of the transaction. Since local transactions on each server acquire snapshots while no transactions are committing, they can be regarded as consistent. In the example above, the INSERT command in session 2 would be forced to wait until t4, when session 1 commits. Thus each INSERT command in session 2 would see "1" in table t1. The same applies to commit: a commit must wait if there is an ongoing snapshot acquisition.

Summary

Pgpool-II 4.2 will come with a new clustering mode called "snapshot isolation mode", which guarantees atomic visibility, something current standard PostgreSQL does not provide. Unlike other techniques for implementing atomic visibility, Pgpool-II does not require any modification to PostgreSQL, and it can be used even with older versions of PostgreSQL. Stay tuned.
