PostgreSQL Data Synchronization Tool: pg_rewind



1. System Environment

IP                 HOSTNAME   PG VERSION   DIR          OS
192.168.100.161    node1      9.4          /opt/pgsql   CentOS 6.5 x86_64
192.168.100.162    node2      9.4          /opt/pgsql   CentOS 6.5 x86_64

 

# cat /etc/issue

CentOS release 6.5 (Final)

Kernel \r on an \m

 

# uname -a

Linux barman 2.6.32-431.11.2.el6.x86_64 #1 SMP Tue Mar 25 19:59:55 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

 

# cat /etc/hosts

127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4 node1
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6 node1

192.168.100.161    node1
192.168.100.162    node2

2. Installation

2.1 Overview

pg_rewind is a tool for synchronizing a PostgreSQL data directory with another PostgreSQL data directory that was forked from the first one. The result is equivalent to rsyncing the first data directory with the second one. The advantage of pg_rewind over rsync is that pg_rewind uses the WAL to determine changed data blocks, and does not require reading through all files in the cluster. That makes it a lot faster when the database is large and only a small portion of it differs between the clusters.

 

Note:

Unlike rsync, pg_rewind does not have to scan every file to work out what needs to be synchronized. With rsync the whole data directory is read; when the data volume is small that scan is negligible, but once it reaches hundreds of GB or several TB it takes a very long time. pg_rewind instead reads the WAL of the diverged timeline, identifies and marks the blocks that changed, and then copies only those blocks.

pg_rewind must be able to find all the WAL it needs. If a required segment is no longer in pg_xlog it would have to come from the archive; that is not yet supported, so at the moment any missing segments have to be copied from the archive back into pg_xlog by hand (see the sketch below).
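A minimal sketch of that manual step, assuming a hypothetical archive directory /opt/pgsql/archive on the target node (both the path and the segment name are illustrative):

[postgres@node1 ~]$ ls /opt/pgsql/data/pg_xlog/
[postgres@node1 ~]$ cp /opt/pgsql/archive/000000010000000000000003 /opt/pgsql/data/pg_xlog/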

2.2 Requirements

To use pg_rewind, one of the following two database features must be enabled (a short sketch of both options follows this list):

  1. data checksums

     Checksums PostgreSQL data pages so that corrupted blocks can be detected and flagged. It is a cluster-wide setting. Enabling it adds some overhead, since an extra checksum has to be computed for every data page, so weigh the pros and cons before turning it on. The feature is enabled at initdb time (-k) and cannot be changed afterwards (there is, however, a parameter, ignore_checksum_failure, that forces PostgreSQL to keep executing a transaction even when corrupted data is detected, provided the page header itself is intact).

  2. wal_log_hints

     A new feature in PostgreSQL 9.4.

{This test uses the first option.}
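A minimal sketch of the two options (the data directory path matches this setup):

Option 1 - data checksums, chosen once when the cluster is created:

$ initdb -k -D /opt/pgsql/data

Option 2 - hint-bit WAL logging, an ordinary postgresql.conf setting (takes effect after a restart):

wal_log_hints = on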

2.3 Installing PostgreSQL

PostgreSQL source code download:

https://github.com/postgres/postgres

 

The latest version at the time, 9.4devel, is used here.

 

yum install flex bison readline-devel zlib-devel

The detailed installation steps are omitted here.

 

Note: only node1 initializes a database cluster; node2 only gets the database software installed. Pass -k to initdb so data checksums are enabled (a build-and-init sketch follows).
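A minimal sketch of a typical source build and of the node1 initialization, using the /opt/pgsql prefix from the table above (the configure options and ownership step are illustrative):

# cd postgres-master
# ./configure --prefix=/opt/pgsql
# make && make install
# chown -R postgres:postgres /opt/pgsql

On node1 only, initialize the cluster with data checksums enabled:

# su - postgres
$ /opt/pgsql/bin/initdb -k -D /opt/pgsql/data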

2.4 Building and installing pg_rewind

# unzip pg_rewind-master.zip

# mv pg_rewind-master postgres-master/contrib/

# cd postgres-master/contrib/pg_rewind-master/

# make && make install

 

It can also be built and installed as the PostgreSQL OS user, for example:

# su - postgres

$ make USE_PGXS=1 top_srcdir=postgres-master

$ make USE_PGXS=1 top_srcdir=postgres-master install
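The USE_PGXS build assumes pg_config is on the postgres user's PATH. A minimal sketch of that user's environment (variable values mirror this setup and are otherwise illustrative):

$ cat >> ~/.bash_profile <<'EOF'
export PGHOME=/opt/pgsql
export PGDATA=$PGHOME/data
export PATH=$PGHOME/bin:$PATH
EOF
$ source ~/.bash_profile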

 

Verify that the installation is correct:

# su - postgres

$ pg_rewind --version

pg_rewind 0.1

 

Check whether data checksums are enabled:

$ pg_controldata | grep checksum

Data page checksum version:           1

 

postgres=# show data_checksums ;

 data_checksums

----------------

 on

(1 row)

 

3. Configuring hot standby

3.1 Configure the primary node

$ vi postgresql.conf

listen_addresses = '*'
port = 5432
wal_level = hot_standby
max_wal_senders = 2
wal_keep_segments = 100
hot_standby = on

 

$ vi pg_hba.conf

host   all            all            192.168.100.0/24       trust

host   replication    all            192.168.100.0/24        trust

 

Restart the database to apply the changes:

[postgres@node1 ~]$ pg_ctl restart

3.2 Set up the standby node

[postgres@node2 pgsql]$ pg_basebackup -D /opt/pgsql/data -h node1 -P

21099/21099 kB (100%), 1/1 tablespace

NOTICE: WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup
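The NOTICE is only a warning; with wal_keep_segments set as above the segments needed by the backup stay in pg_xlog. If you prefer to enable WAL archiving on the primary instead, a minimal postgresql.conf sketch looks like this (the /opt/pgsql/archive path is illustrative, must exist, and must be writable by postgres; archive_mode needs a restart):

archive_mode = on
archive_command = 'cp %p /opt/pgsql/archive/%f'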

 

Configure recovery.conf:

[postgres@node2 data]$ cat recovery.conf

standby_mode = 'on'
primary_conninfo = 'host=node1 user=postgres port=5432'
recovery_target_timeline = 'latest'

 

Start the standby:

[postgres@node2 data]$ pg_ctl start

4. Switchover Simulation

4.1 Simulate a primary failure

[postgres@node1 ~]$ pg_ctl stop -m f

4.2 Promote the standby

[postgres@node2 ~]$ pg_ctl promote

4.3 Update some data

[postgres@node2 ~]$ createdb pgbench

[postgres@node2 ~]$ pgbench -i -s 10 pgbench
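As an optional sanity check (not part of the original steps), confirm on node2 that the pgbench tables were populated; with a scale factor of 10, pgbench_accounts holds one million rows:

[postgres@node2 ~]$ psql -d pgbench -c "select count(*) from pgbench_accounts;"
  count
---------
 1000000
(1 row)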

4.4 Bring node1 back as a standby

Synchronize the data:

[postgres@node1 ~]$ pg_rewind -D /opt/pgsql/data/ --source-server='host=node2 user=postgres port=5432'

The servers diverged at WAL position 0/30000C8 on timeline 1.

No rewind required.

(No rewind is needed here because node1 was shut down cleanly and wrote nothing after the point where node2's timeline forked off, so there is nothing on node1 to wind back.)

 

[postgres@node1 ~]$ vi /opt/pgsql/data/recovery.conf

standby_mode = 'on'
primary_conninfo = 'host=node2 user=postgres port=5432'
recovery_target_timeline = 'latest'

 

Start the database on node1:

[postgres@node1 ~]$ pg_ctl start

4.5 Restore node1 as the master

Stop node2:

[postgres@node2 ~]$ pg_ctl stop -m f

 

Promote node1:

[postgres@node1 ~]$ pg_ctl promote

 

Make some updates on node1:

[postgres@node1 ~]$ pgbench -s 10 -T 60 pgbench

 

Restore node2 as a standby:

[postgres@node2 ~]$ pg_rewind -D /opt/pgsql/data/ --source-server='host=node1 user=postgres port=5432'

The servers diverged at WAL position 0/12ACCC30 on timeline 2.

No rewind required.

 

[postgres@node2 ~]$ mv /opt/pgsql/data/recovery.done /opt/pgsql/data/recovery.conf

[postgres@node2 ~]$ vi /opt/pgsql/data/recovery.conf

(make sure primary_conninfo points at node1; see the example below)
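For reference, the resulting recovery.conf on node2 would have the same form as the one created in 3.2, with node1 as the upstream:

standby_mode = 'on'
primary_conninfo = 'host=node1 user=postgres port=5432'
recovery_target_timeline = 'latest'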

 

Start the database on node2:

[postgres@node2 ~]$ pg_ctl start

server starting
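To double-check the roles after the switchback (these checks are not in the original steps but use standard functions and views), node2 should report that it is in recovery and node1 should show node2 as a streaming client:

[postgres@node2 ~]$ psql -c "select pg_is_in_recovery();"
 pg_is_in_recovery
-------------------
 t
(1 row)

[postgres@node1 ~]$ psql -c "select client_addr, state from pg_stat_replication;"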

5. Basic Principle

The basic idea is to copy everything from the new cluster to the old cluster, except for the blocks that we know to be the same.

 

1. Scan the WAL log of the old cluster, starting from the last checkpoint before the point where the new cluster's timeline history forked off from the old cluster. For each WAL record, make a note of the data blocks that were touched. This yields a list of all the data blocks that were changed in the old cluster, after the new cluster forked off.

 

2. Copy all those changed blocks from the new cluster to the old cluster.

 

3. Copy all other files like clog, conf files etc. from the new cluster to the old cluster. Everything except the relation files.

 

4. Apply the WAL from the new cluster, starting from the checkpoint created at failover. (pg_rewind doesn't actually apply the WAL, it just creates a backup label file indicating that when PostgreSQL is started, it will start replay from that checkpoint and apply all the required WAL.)
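If a rewind actually takes place (unlike the runs above, which reported "No rewind required"), this mechanism can be observed before restarting the target: pg_rewind leaves a backup_label file in the target data directory recording the checkpoint from which replay will begin, for example:

[postgres@node1 ~]$ cat /opt/pgsql/data/backup_label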

 

6. References

https://github.com/vmware/pg_rewind

http://michael.otacoo.com/postgresql-2/postgres-module-highlight-pg_rewind-to-recycle-a-postgres-master-into-a-slave/

7. License

pg_rewind can be distributed under the BSD-style PostgreSQL license.
