MySQL 8.0 MGR Cluster


Introduction

MySQL Group Replication (MGR) literally means "MySQL group replication", but it is really a high-availability cluster architecture, currently available only for MySQL 5.7 and MySQL 8.0.

It is a brand-new high-availability, high-scalability solution officially released by MySQL in December 2016, providing a highly available, scalable and reliable MySQL cluster service.

It is also a new HA cluster architecture that the MySQL team built around the group replication concept, drawing heavily on MariaDB Galera Cluster and Percona XtraDB Cluster.

MySQL Group Replication is built on top of XCom, a Paxos-based group communication engine; it is this XCom infrastructure, which keeps the database state machine transactionally consistent across nodes, that makes cross-node transactional consistency achievable both in theory and in practice.

Extending the familiar master-slave replication model, multiple nodes together form one database cluster; a transaction may only commit after more than half of the nodes have agreed to it, and every node in the cluster maintains a database state machine, which guarantees transactional consistency between nodes. For example, a three-node group needs at least two members to agree before a transaction commits and can therefore tolerate the loss of one member; in general, a group of n members tolerates floor((n-1)/2) failures.

Advantages:

   High consistency: group replication based on native replication and the Paxos protocol.
   High fault tolerance: an automatic detection mechanism evicts a crashed node while the remaining nodes keep working (much like a ZooKeeper ensemble); resource contention between nodes is resolved on a first-come, first-served basis, and an automatic split-brain protection mechanism is built in.
   High scalability: nodes can be added and removed online at any time, state is synchronized automatically until a new node has caught up with the others, and the group membership information is maintained automatically.
   High flexibility: it is installed simply as a plugin (the .so plugin ships with MySQL from 5.7.17 onward) and offers a single-primary and a multi-primary mode; in single-primary mode only the primary can write, the other members are put into super_read_only state and can only be read, and a new primary is elected automatically on failure.

Disadvantages:

   Still quite new and not entirely stable, and its performance currently lags slightly behind PXC; it is very demanding on network stability, so at the very least keep all nodes in the same data center.


Installation

1. Environment planning

IP address   MySQL version   DB port   Server-ID   MGR port   OS
10.0.2.5     mysql 8.0.11    3308      258011      33081      Ubuntu 17.04
10.0.2.6     mysql 8.0.11    3308      268011      33081      Ubuntu 17.04
10.0.2.7     mysql 8.0.11    3308      278011      33081      Ubuntu 17.04

In multi-primary mode it is best to have at least three nodes; in single-primary mode it depends on the actual situation, but a single group supports at most 9 nodes. The server hardware and configuration should be kept as uniform as possible because, just like PXC, MGR suffers from the "shortest plank in the barrel" effect.

Note in particular that the MySQL database service port and the MGR service port are not the same thing and must be kept apart.
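If a firewall such as ufw is enabled on these Ubuntu hosts (an assumption; the 10.0.2.0/24 subnet is inferred from the addresses above, so adjust both to your setup), both ports must be reachable between the nodes, for example:

sudo ufw allow proto tcp from 10.0.2.0/24 to any port 3308
sudo ufw allow proto tcp from 10.0.2.0/24 to any port 33081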

Distinct server-id values are mandatory in any case; even plain master-slave replication requires that.


2. Installation and deployment

How MySQL 8.0 itself is installed is not repeated here; it was covered in the first article of this series, so the server is assumed to be installed already.

Let's go straight to installing MGR. As mentioned above, the MGR plugin ships with MySQL from 5.7.17 onward; it simply is not installed by default, much like the semi-synchronous replication plugin, which is why none of its options exist yet.

Every server in the cluster must have the MGR plugin installed for the feature to work.

We can see that, initially, it is not installed:

mysql> show plugins;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
| binlog                     | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| mysql_native_password      | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |
| sha256_password            | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |
| caching_sha2_password      | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |
| sha2_cache_cleaner         | ACTIVE   | AUDIT              | NULL                 | GPL     |
| PERFORMANCE_SCHEMA         | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| MRG_MYISAM                 | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| MEMORY                     | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| TempTable                  | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| InnoDB                     | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| INNODB_TRX                 | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_CMP                 | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_CMP_RESET           | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_CMPMEM              | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_CMPMEM_RESET        | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_CMP_PER_INDEX       | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_CMP_PER_INDEX_RESET | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_BUFFER_PAGE         | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_BUFFER_PAGE_LRU     | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_BUFFER_POOL_STATS   | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_TEMP_TABLE_INFO     | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_METRICS             | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_FT_DEFAULT_STOPWORD | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_FT_DELETED          | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_FT_BEING_DELETED    | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_FT_CONFIG           | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_FT_INDEX_CACHE      | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_FT_INDEX_TABLE      | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_TABLES              | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_TABLESTATS          | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_INDEXES             | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_TABLESPACES         | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_COLUMNS             | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_VIRTUAL             | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| INNODB_CACHED_INDEXES      | ACTIVE   | INFORMATION SCHEMA | NULL                 | GPL     |
| CSV                        | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| MyISAM                     | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| ARCHIVE                    | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| BLACKHOLE                  | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| FEDERATED                  | DISABLED | STORAGE ENGINE     | NULL                 | GPL     |
| ngram                      | ACTIVE   | FTPARSER           | NULL                 | GPL     |
| mysqlx                     | ACTIVE   | DAEMON             | NULL                 | GPL     |
| mysqlx_cache_cleaner       | ACTIVE   | AUDIT              | NULL                 | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+

The MGR-related variables are not loaded either; only one unrelated variable matches:

mysql> show variables like 'group%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| group_concat_max_len | 1024  |
+----------------------+-------+
1 row in set

Next, check the current plugin directory:

mysql> show variables like 'plugin_dir';
+---------------+--------------------------------+
| Variable_name | Value                          |
+---------------+--------------------------------+
| plugin_dir    | /usr/local/mysql80/lib/plugin/ |
+---------------+--------------------------------+
1 row in set (0.00 sec)

Then check whether the MGR plugin we need is actually there:

ll /usr/local/mysql80/lib/plugin/ | grep group_replication
-rwxr-xr-x 1 7161 31415 21947376 Apr  8 16:16 group_replication.so*

Finally, go back into the MySQL server and install it:

mysql> install PLUGIN group_replication SONAME 'group_replication.so';

Now it is there:

mysql> show plugins;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+

   .
   .
   .

| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+

Looking at the MGR-related variables again, there are now plenty of them:

mysql> show variables like 'group%';
+-----------------------------------------------------+---------------------------------------------------------------------+
| Variable_name                                       | Value                                                               |
+-----------------------------------------------------+---------------------------------------------------------------------+
| group_concat_max_len                                | 1024                                                                |
| group_replication_allow_local_lower_version_join   | OFF                                                                 |
| group_replication_auto_increment_increment         | 7                                                                   |
| group_replication_bootstrap_group                  | OFF                                                                 |
| group_replication_communication_debug_options      | GCS_DEBUG_NONE                                                      |
| group_replication_components_stop_timeout          | 31536000                                                            |
| group_replication_compression_threshold            | 1000000                                                             |
| group_replication_enforce_update_everywhere_checks | ON                                                                  |
| group_replication_flow_control_applier_threshold   | 25000                                                               |
| group_replication_flow_control_certifier_threshold | 25000                                                               |
| group_replication_flow_control_hold_percent        | 10                                                                  |
| group_replication_flow_control_max_quota           | 0                                                                   |
| group_replication_flow_control_member_quota_percent| 0                                                                   |
| group_replication_flow_control_min_quota           | 0                                                                   |
| group_replication_flow_control_min_recovery_quota  | 0                                                                   |
| group_replication_flow_control_mode                | QUOTA                                                               |
| group_replication_flow_control_period              | 1                                                                   |
| group_replication_flow_control_release_percent     | 50                                                                  |
| group_replication_force_members                    |                                                                     |
| group_replication_group_name                       | cc5e2627-2285-451f-86e6-0be21581539f                                |
| group_replication_group_seeds                      | 10.0.2.5:33081,10.0.2.6:33081,10.0.2.7:33081                        |
| group_replication_gtid_assignment_block_size       | 1000000                                                             |
| group_replication_ip_whitelist                     | 127.0.0.1/32,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,192.168.1.0/24 |
| group_replication_local_address                    | 10.0.2.6:33081                                                      |
| group_replication_member_weight                    | 50                                                                  |
| group_replication_poll_spin_loops                  | 0                                                                   |
| group_replication_recovery_complete_at             | TRANSACTIONS_APPLIED                                                |
| group_replication_recovery_get_public_key          | OFF                                                                 |
| group_replication_recovery_public_key_path         |                                                                     |
| group_replication_recovery_reconnect_interval      | 60                                                                  |
| group_replication_recovery_retry_count             | 10                                                                  |
| group_replication_recovery_ssl_ca                  |                                                                     |
| group_replication_recovery_ssl_capath              |                                                                     |
| group_replication_recovery_ssl_cert                |                                                                     |
| group_replication_recovery_ssl_cipher              |                                                                     |
| group_replication_recovery_ssl_crl                 |                                                                     |
| group_replication_recovery_ssl_crlpath             |                                                                     |
| group_replication_recovery_ssl_key                 |                                                                     |
| group_replication_recovery_ssl_verify_server_cert  | OFF                                                                 |
| group_replication_recovery_use_ssl                 | OFF                                                                 |
| group_replication_single_primary_mode              | OFF                                                                 |
| group_replication_ssl_mode                         | DISABLED                                                            |
| group_replication_start_on_boot                    | OFF                                                                 |
| group_replication_transaction_size_limit           | 150000000                                                           |
| group_replication_unreachable_majority_timeout     | 0                                                                   |
+-----------------------------------------------------+---------------------------------------------------------------------+
45 rows in set (0.00 sec)

Some of the values above were configured by me in advance; they are explained in detail below.
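As a side note, instead of running INSTALL PLUGIN interactively, the plugin can also be loaded at server startup from my.cnf; a minimal sketch (the file name is the group_replication.so found in plugin_dir above):

[mysqld]
# load the Group Replication plugin when mysqld starts
plugin-load-add = group_replication.so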


3. Configuring the MGR environment

Anyone familiar with MySQL knows that it supports online global configuration with SET GLOBAL, so you are not limited to the configuration file; below, each parameter is explained and the corresponding commands are given.

Assume we first write everything into the configuration file my.cnf.

First of all, MGR absolutely requires GTID, so GTID must be enabled. Recent MySQL versions can switch it on online, but simply restarting after editing the configuration is quicker and easier, so keep that in mind.
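For reference, the online switch without a restart roughly follows the sequence documented in the manual; this is only a sketch, and each step should be verified before moving on (for instance, wait until the Ongoing_anonymous_transaction_count status variable reaches 0 before the final step):

mysql> SET GLOBAL enforce_gtid_consistency = WARN;
mysql> SET GLOBAL enforce_gtid_consistency = ON;
mysql> SET GLOBAL gtid_mode = OFF_PERMISSIVE;
mysql> SET GLOBAL gtid_mode = ON_PERMISSIVE;
mysql> SET GLOBAL gtid_mode = ON;

Remember to persist the settings in my.cnf as well, as below.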

# Enable GTID; mandatory for MGR
gtid_mode=on

# Enforce GTID consistency
enforce-gtid-consistency=on

Next, a few general parameters that need adjusting:

# Binlog format: MGR requires ROW, which is the sensible choice even without MGR
binlog_format=row

# server-id must be unique on every node
server-id = 258011

# MGR uses optimistic locking, so the official documentation suggests the READ-COMMITTED isolation level to reduce locking conflicts
transaction_isolation = READ-COMMITTED

# During failure recovery the members check each other's binlogs, so every server must also log
# the already-executed transactions it receives from the other members, using GTIDs to tell
# which transactions have already been executed
log-slave-updates=1

# Binlog checksum: CRC32 by default since 5.6 (NONE on older versions), but MGR requires NONE
binlog_checksum=NONE

# For safety, MGR requires the replication metadata to be stored in tables instead of files, otherwise it reports an error
master_info_repository=TABLE

# Companion setting to the one above
relay_log_info_repository=TABLE

Finally, the parameters specific to MGR itself:

# Algorithm used to hash the transaction write sets; the official documentation recommends XXHASH64
transaction_write_set_extraction = XXHASH64

# Effectively the name of this group: a UUID that must not clash with the UUID of any GTIDs already
# used in the cluster (uuidgen can generate a fresh one). It distinguishes the different groups on the
# same network, and it is also the UUID used for the GTIDs generated within this group
loose-group_replication_group_name = 'cc5e2627-2285-451f-86e6-0be21581539f'

# IP whitelist; by default only 127.0.0.1 is allowed, so connections from other hosts are rejected; set it according to your security needs
loose-group_replication_ip_whitelist = '127.0.0.1/8,192.168.1.0/24,10.0.0.0/8,10.18.89.49/22'

# Whether group replication starts automatically with the server; not recommended, to avoid
# corner cases during failure recovery that could compromise data accuracy
loose-group_replication_start_on_boot = OFF

# The local MGR address, host:port; this is the MGR port, not the database port
loose-group_replication_local_address = '10.0.2.5:33081'

# The seed members of this group, as host:port pairs; these are MGR ports, not database ports
loose-group_replication_group_seeds = '10.0.2.5:33081,10.0.2.6:33081,10.0.2.7:33081'

# Bootstrap mode, used to create the group when building the MGR cluster for the first time or
# rebuilding it; it only ever needs to be enabled on one member of the cluster
loose-group_replication_bootstrap_group = OFF

# Whether to run in single-primary mode; if ON, this instance is the primary and serves reads and
# writes while the other instances are read-only; if OFF, the group runs in multi-primary mode
loose-group_replication_single_primary_mode = off

# In multi-primary mode, force every instance to check whether an operation is allowed; can be disabled when not running multi-primary
loose-group_replication_enforce_update_everywhere_checks = on

A few of these parameters deserve a closer look:

group_replication_group_name: this must be a UUID of its own, different from the UUIDs of any GTIDs already used by the databases in the cluster; on Linux a new UUID can be generated with uuidgen.

group_replication_ip_whitelist: the whitelist is really a security setting, and covering the whole internal network is not particularly appropriate; it is set that broadly here only for convenience. This parameter can be changed dynamically with SET GLOBAL, which is quite handy.

group_replication_start_on_boot: there are two reasons not to start group replication together with the server: first, extreme failure-recovery situations could compromise data accuracy; second, operations such as adding or removing nodes can be disturbed by it.

group_replication_local_address: note in particular that this port is not the database service port but the MGR service port, used by the members to communicate with each other, and it must not already be in use.

group_replication_group_seeds: the host:port pairs of the members belonging to this group; this is also the MGR service port. It can be changed dynamically with SET GLOBAL in order to add and remove nodes.

group_replication_bootstrap_group: note that only one server bootstraps the group, so none of the other servers should enable this parameter; leaving it at the default OFF and turning it on with SET GLOBAL only when needed is sufficient.

group_replication_single_primary_mode: this depends on whether you want single-primary or multi-primary mode. Single-primary mode resembles semi-synchronous replication but with stricter requirements, since the primary only reports a write as successful once more than half of the members in the cluster have accepted it, giving stronger data consistency; this is the mode usually recommended for financial services. Multi-primary mode looks like it offers more performance, but the probability of transaction conflicts is also higher; even though MGR resolves them on a first-come, first-served basis, this cannot be ignored and can be fatal under high concurrency. For that reason multi-primary deployments are usually split up logically, with each connection address dedicated to its own schema, to reduce the chance of conflicts.

group_replication_enforce_update_everywhere_checks: in single-primary mode there is no possibility of concurrent writes on multiple primaries, so this strict check can be disabled; in multi-primary mode it must be enabled, otherwise the data can end up inconsistent.
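Putting all of the above together, the my.cnf fragment for node 10.0.2.5 would look roughly like the sketch below (only the MGR-related part; the basic settings from the earlier installation, such as port=3308 and datadir, are assumed to be present already, and server-id plus group_replication_local_address must be adapted on each node). The loose- prefix simply tells mysqld not to fail on these options while the group_replication plugin is not installed yet:

[mysqld]
gtid_mode = on
enforce-gtid-consistency = on
binlog_format = row
server-id = 258011
transaction_isolation = READ-COMMITTED
log-slave-updates = 1
binlog_checksum = NONE
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = 'cc5e2627-2285-451f-86e6-0be21581539f'
loose-group_replication_ip_whitelist = '127.0.0.1/8,192.168.1.0/24,10.0.0.0/8,10.18.89.49/22'
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = '10.0.2.5:33081'
loose-group_replication_group_seeds = '10.0.2.5:33081,10.0.2.6:33081,10.0.2.7:33081'
loose-group_replication_bootstrap_group = OFF
loose-group_replication_single_primary_mode = off
loose-group_replication_enforce_update_everywhere_checks = on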

If instead you want to apply these settings dynamically with SET GLOBAL, it looks like this:

set global transaction_write_set_extraction='XXHASH64';
set global group_replication_start_on_boot=OFF;
set global group_replication_bootstrap_group=OFF;
set global group_replication_group_name='cc5e2627-2285-451f-86e6-0be21581539f';
set global group_replication_local_address='10.0.2.5:33081';
set global group_replication_group_seeds='10.0.2.5:33081,10.0.2.6:33081,10.0.2.7:33081';
set global group_replication_ip_whitelist='127.0.0.1/8,192.168.1.0/24,10.0.0.1/8,10.18.89.49/22';
set global group_replication_single_primary_mode=off;
set global group_replication_enforce_update_everywhere_checks=on;

Be aware that the configuration of all database servers within the same group must be kept consistent, otherwise you will get errors or other strange behaviour; server-id and the local address/port are, of course, the values that must differ per machine.

With the configuration in place, the cluster is ready to be started, but the startup order matters, so pay close attention.


4. Starting the MGR cluster

As said above, MGR must be started in a specific order, because one of the servers has to bootstrap the group before the others can join it.

In single-primary mode, the intended primary must be started and bootstrapped first, otherwise it will not become the primary.

Whenever something goes wrong, check the MySQL error log file mysql.err; it normally contains a corresponding error message.

Back to the task at hand: assume 10.0.2.6 is used to bootstrap the group, and log into its local MySQL server first.

Enable bootstrapping. Note that only this one server bootstraps the group; skip this step on the other two:

mysql> SET GLOBAL group_replication_bootstrap_group=ON;

Create a user for replication and grant it the necessary privileges; this has to be done on every server in the cluster:

mysql> create user 'sroot'@'%' identified by '123123';
mysql> grant REPLICATION SLAVE on *.* to 'sroot'@'%' with grant option;

Clear out any old GTID information to avoid conflicts:

mysql> reset master;

Set up the replication credentials for the recovery channel, using the user just granted; note that the syntax differs a little from an ordinary master-slave setup:

mysql> CHANGE MASTER TO MASTER_USER='sroot', MASTER_PASSWORD='123123' FOR CHANNEL 'group_replication_recovery';
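One MySQL 8.0-specific pitfall worth mentioning here (whether it applies depends on how the account was created, so treat this as an assumption about your setup): the default authentication plugin is caching_sha2_password, and the recovery channel cannot authenticate with it over an unencrypted connection unless it can fetch the server's public key. If distributed recovery later fails with an authentication error in mysql.err, one option is:

mysql> SET GLOBAL group_replication_recovery_get_public_key = ON;

Alternatively, create the replication user with mysql_native_password, or enable SSL for recovery via group_replication_recovery_use_ssl.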

Start MGR:

mysql> start group_replication;

Check whether it started successfully; ONLINE means success:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a29a1b91-4908-11e8-848b-08002778eea7 | ubuntu      | 3308        | ONLINE       | PRIMARY     | 8.0.11         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.02 sec)

At this point bootstrapping can be switched off again:

mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

Next come the other two servers, 10.0.2.5 and 10.0.2.7; again log into the local MySQL server on each.

No bootstrapping is needed here, and the remaining steps are much the same. The user and grant still have to be created:

mysql> create user 'sroot'@'%' identified by '123123';
mysql> grant REPLICATION SLAVE on *.* to 'sroot'@'%' with grant option;

Clear out any old GTID information to avoid conflicts:

mysql> reset master;

Set up the replication credentials for the recovery channel, again using the user granted above:

mysql> CHANGE MASTER TO MASTER_USER='sroot', MASTER_PASSWORD='123123' FOR CHANNEL 'group_replication_recovery';

Start MGR:

mysql> start group_replication;

Check whether it started successfully; ONLINE means success:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a29a1b91-4908-11e8-848b-08002778eea7 | ubuntu      | 3308        | ONLINE       | PRIMARY     | 8.0.11         |
| group_replication_applier | d058176a-51cf-11e8-8c95-080027e7b723 | ubuntu      | 3308        | ONLINE       | PRIMARY     | 8.0.11         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

By the same token, on 10.0.2.7 it should look like this:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a29a1b91-4908-11e8-848b-08002778eea7 | ubuntu      | 3308        | ONLINE       | PRIMARY     | 8.0.11         |
| group_replication_applier | af892b6e-49ca-11e8-9c9e-080027b04376 | ubuntu      | 3308        | ONLINE       | PRIMARY     | 8.0.11         |
| group_replication_applier | d058176a-51cf-11e8-8c95-080027e7b723 | ubuntu      | 3308        | ONLINE       | PRIMARY     | 8.0.11         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

As long as MEMBER_STATE shows ONLINE for every member, they are all connected successfully. If a member runs into a problem it is evicted from the cluster and its state shows ERROR locally; in that case check that machine's MySQL error log, mysql.err.

Note that this is multi-primary mode, so MEMBER_ROLE shows PRIMARY for every member; in single-primary mode only one member would show PRIMARY and the others SECONDARY.
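Since members can join and leave online, a node can also be taken out of the group at any time for maintenance and rejoined later; a quick sketch (while it is stopped, the member should switch itself to super_read_only and no longer receives group transactions):

mysql> stop group_replication;

(do the maintenance work)

mysql> start group_replication;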



Usage

In multi-primary mode, any of the following connections can read and write directly:

mysql -usroot -p123123 -h10.0.2.5 -P3308
mysql -usroot -p123123 -h10.0.2.6 -P3308
mysql -usroot -p123123 -h10.0.2.7 -P3308

The day-to-day operations need no special explanation: create, insert and delete exactly as on an ordinary MySQL server, and the data then shows up on the other servers as well.
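For instance, a quick round trip between two of the nodes might look like the sketch below (mgrtest and t1 are hypothetical names; note that MGR requires every table to use InnoDB and to have a primary key, otherwise the write is rejected).

On 10.0.2.5:

mysql> create database mgrtest;
mysql> create table mgrtest.t1 (id int primary key auto_increment, name varchar(20)) engine=innodb;
mysql> insert into mgrtest.t1 (name) values ('hello mgr');

On 10.0.2.6, the row is already visible:

mysql> select * from mgrtest.t1;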

In single-primary mode, only the PRIMARY member accepts writes; SECONDARY members can only be read, for example:

mysql> select * from ttt;
+----+--------+
| id | name   |
+----+--------+
|  1 | ggg    |
|  2 | ffff   |
|  3 | hhhhh  |
|  4 | tyyyyy |
|  5 | aaaaaa |
+----+--------+
5 rows in set (0.00 sec)

mysql> delete from ttt where id = 5;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

These operations are not covered in more detail here; once the cluster is up you can experiment at your leisure.


Management and maintenance

To verify what was said above, first look at the current GTID set and the slave status.

Check the GTIDs: they use the group UUID configured earlier:

mysql> show master status;
+------------------+----------+--------------+------------------+---------------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                                 |
+------------------+----------+--------------+------------------+---------------------------------------------------+
| mysql-bin.000003 | 4801     |              |                  | cc5e2627-2285-451f-86e6-0be21581539f:1-23:1000003 |
+------------------+----------+--------------+------------------+---------------------------------------------------+
1 row in set (0.00 sec)

(The entry numbered 1000003 appears because each member is handed its own block of GTID numbers, one million wide by default, per group_replication_gtid_assignment_block_size above.)

Then look at the slave status: there is nothing in it, because this is not a master-slave topology at all.

mysql> show slave status;
Empty set (0.00 sec)

One of the commands above already queried the current node information; below, the commonly used commands are listed one by one.

List the node information of every member in the group:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | a29a1b91-4908-11e8-848b-08002778eea7 | ubuntu      | 3308        | ONLINE       | PRIMARY     | 8.0.11         |
| group_replication_applier | af892b6e-49ca-11e8-9c9e-080027b04376 | ubuntu      | 3308        | ONLINE       | SECONDARY   | 8.0.11         |
| group_replication_applier | d058176a-51cf-11e8-8c95-080027e7b723 | ubuntu      | 3308        | ONLINE       | SECONDARY   | 8.0.11         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Check the synchronization situation within the group, i.e. the current replication state:

mysql> select * from performance_schema.replication_group_member_stats\G

*************************** 1. row ***************************
                             CHANNEL_NAME: group_replication_applier
                                  VIEW_ID: 15258529121778212:5
                                MEMBER_ID: a29a1b91-4908-11e8-848b-08002778eea7
              COUNT_TRANSACTIONS_IN_QUEUE: 0
               COUNT_TRANSACTIONS_CHECKED: 9
                 COUNT_CONFLICTS_DETECTED: 0
       COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
       TRANSACTIONS_COMMITTED_ALL_MEMBERS: cc5e2627-2285-451f-86e6-0be21581539f:1-23:1000003
            LAST_CONFLICT_FREE_TRANSACTION: cc5e2627-2285-451f-86e6-0be21581539f:23
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
        COUNT_TRANSACTIONS_REMOTE_APPLIED: 3
        COUNT_TRANSACTIONS_LOCAL_PROPOSED: 9
        COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
*************************** 2. row ***************************
                             CHANNEL_NAME: group_replication_applier
                                  VIEW_ID: 15258529121778212:5
                                MEMBER_ID: af892b6e-49ca-11e8-9c9e-080027b04376
              COUNT_TRANSACTIONS_IN_QUEUE: 0
               COUNT_TRANSACTIONS_CHECKED: 9
                 COUNT_CONFLICTS_DETECTED: 0
       COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
       TRANSACTIONS_COMMITTED_ALL_MEMBERS: cc5e2627-2285-451f-86e6-0be21581539f:1-23:1000003
            LAST_CONFLICT_FREE_TRANSACTION: cc5e2627-2285-451f-86e6-0be21581539f:23
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
        COUNT_TRANSACTIONS_REMOTE_APPLIED: 10
        COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
        COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
*************************** 3. row ***************************
                             CHANNEL_NAME: group_replication_applier
                                  VIEW_ID: 15258529121778212:5
                                MEMBER_ID: d058176a-51cf-11e8-8c95-080027e7b723
              COUNT_TRANSACTIONS_IN_QUEUE: 0
               COUNT_TRANSACTIONS_CHECKED: 9
                 COUNT_CONFLICTS_DETECTED: 0
       COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
       TRANSACTIONS_COMMITTED_ALL_MEMBERS: cc5e2627-2285-451f-86e6-0be21581539f:1-23:1000003
            LAST_CONFLICT_FREE_TRANSACTION: cc5e2627-2285-451f-86e6-0be21581539f:23
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
        COUNT_TRANSACTIONS_REMOTE_APPLIED: 9
        COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
        COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0

3 rows in set (0.00 sec)

Usage of the individual replication channels on the current server:

mysql> select * from performance_schema.replication_connection_status\G

*************************** 1. row ***************************
                                     CHANNEL_NAME: group_replication_applier
                                       GROUP_NAME: cc5e2627-2285-451f-86e6-0be21581539f
                                      SOURCE_UUID: cc5e2627-2285-451f-86e6-0be21581539f
                                        THREAD_ID: NULL
                                    SERVICE_STATE: ON
                        COUNT_RECEIVED_HEARTBEATS: 0
                         LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
                         RECEIVED_TRANSACTION_SET: cc5e2627-2285-451f-86e6-0be21581539f:1-23:1000003
                                LAST_ERROR_NUMBER: 0
                               LAST_ERROR_MESSAGE: 
                             LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
                          LAST_QUEUED_TRANSACTION: cc5e2627-2285-451f-86e6-0be21581539f:23
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-05-09 16:38:08.035692
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2018-05-09 16:38:08.031639
      LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2018-05-09 16:38:08.031753
                             QUEUEING_TRANSACTION: 
   QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
  QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
       QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
*************************** 2. row ***************************
                                     CHANNEL_NAME: group_replication_recovery
                                       GROUP_NAME: 
                                      SOURCE_UUID: 
                                        THREAD_ID: NULL
                                    SERVICE_STATE: OFF
                        COUNT_RECEIVED_HEARTBEATS: 0
                         LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
                         RECEIVED_TRANSACTION_SET: 
                                LAST_ERROR_NUMBER: 0
                               LAST_ERROR_MESSAGE: 
                             LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
                          LAST_QUEUED_TRANSACTION: 
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
      LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
                             QUEUEING_TRANSACTION: 
   QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
  QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
       QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000

2 rows in set (0.00 sec)

Whether each channel on the current server is enabled; ON means enabled:

mysql> select * from performance_schema.replication_applier_status;
+----------------------------+---------------+-----------------+----------------------------+
| CHANNEL_NAME               | SERVICE_STATE | REMAINING_DELAY | COUNT_TRANSACTIONS_RETRIES |
+----------------------------+---------------+-----------------+----------------------------+
| group_replication_applier  | ON            | NULL            | 0                          |
| group_replication_recovery | OFF           | NULL            | 0                          |
+----------------------------+---------------+-----------------+----------------------------+
2 rows in set (0.00 sec)

In single-primary mode, check which member is the primary; only its UUID value is shown:

mysql> select * from performance_schema.global_status where VARIABLE_NAME='group_replication_primary_member';
+----------------------------------+--------------------------------------+
| VARIABLE_NAME                    | VARIABLE_VALUE                       |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | a29a1b91-4908-11e8-848b-08002778eea7 |
+----------------------------------+--------------------------------------+
1 row in set (0.00 sec)

For example, on one of the other members:

mysql> show global variables like 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | af892b6e-49ca-11e8-9c9e-080027b04376 |
+---------------+--------------------------------------+
1 row in set (0.00 sec)

mysql> show global variables like 'super%';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| super_read_only | ON    |
+-----------------+-------+
1 row in set (0.00 sec)

Clearly this server is not the primary: super_read_only is already enabled.
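To map that primary UUID back to an actual member, the two performance_schema tables shown above can be combined; a small sketch that only makes sense in single-primary mode:

mysql> SELECT member_host, member_port, member_state
    ->   FROM performance_schema.replication_group_members
    ->  WHERE member_id = (SELECT variable_value
    ->                       FROM performance_schema.global_status
    ->                      WHERE variable_name = 'group_replication_primary_member');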