
MySQL InnoDB Cluster & Group Replication in a Nutshell: Hands-On Tutorial

Percona Live Europe 2017 - Dublin

Frédéric Descamps - MySQL Community Manager - Oracle
Kenny Gryp - MySQL Practice Manager - Percona


Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.


Who are we ?


Frédéric Descamps
@lefred
MySQL Evangelist
Managing MySQL since 3.23
devops believer
http://about.me/lefred

 


Kenny Gryp
@gryp
MySQL Practice Manager

 


get more at the conference

MySQL Group Replication



Agenda
- Prepare your workstation
- What are MySQL InnoDB Cluster & Group Replication ?
- Migration from Master-Slave to GR
- How to monitor ?
- Application interaction


VirtualBox

Setup your workstation


Setup your workstation
- Install VirtualBox 5
- From the USB key, copy PLeu17_GR.ova to your laptop and double-click on it
- Ensure you have a vboxnet2 network interface (VirtualBox Preferences -> Network -> Host-Only Networks -> +)
- Start all virtual machines (mysql1, mysql2, mysql3 & mysql4)
- Install putty if you are using Windows
- Try to connect to all VMs from your terminal or putty (root password is X):

ssh -p 8821 root@127.0.0.1   # mysql1
ssh -p 8822 root@127.0.0.1   # mysql2
ssh -p 8823 root@127.0.0.1   # mysql3
ssh -p 8824 root@127.0.0.1   # mysql4
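The four port-forwards above follow one pattern (port 882N reaches mysqlN on 127.0.0.1). As a small illustration, a hypothetical Python helper can keep that mapping in one place instead of hard-coding each command; the `VMS` dict and `ssh_command` function are ours, only the port numbers come from the tutorial setup.

```python
# Map each lab VM to its forwarded SSH port (ports from the tutorial setup).
VMS = {"mysql1": 8821, "mysql2": 8822, "mysql3": 8823, "mysql4": 8824}

def ssh_command(vm: str, user: str = "root", host: str = "127.0.0.1") -> str:
    """Build the ssh command line used to reach a lab VM."""
    return f"ssh -p {VMS[vm]} {user}@{host}"

for vm in VMS:
    print(vm, "->", ssh_command(vm))
```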


LAB1: Current situation

Launch run_app.sh on mysql1 in a screen session, and verify that mysql2 is a running slave.


Summary 

+--------+--------+----------+---------------+
|        | ROLE   | SSH PORT | INTERNAL IP   |
+--------+--------+----------+---------------+
| mysql1 | master | 8821     | 192.168.56.11 |
| mysql2 | slave  | 8822     | 192.168.56.12 |
| mysql3 | n/a    | 8823     | 192.168.56.13 |
| mysql4 | n/a    | 8824     | 192.168.56.14 |
+--------+--------+----------+---------------+


Easy High Availability

MySQL InnoDB Cluster


InnoDB cluster
- Ease-of-Use
- Extreme Scale-Out
- Out-of-Box Solution
- Built-in HA
- High Performance
- Everything Integrated


InnoDB Cluster's Architecture

[Diagram: Applications connect through a MySQL Connector to MySQL Router, which routes them to the InnoDB cluster; MySQL Shell manages the cluster.]


Group Replication: heart of MySQL InnoDB Cluster



MySQL Group Replication

but what is it ?!?

- GR is a plugin for MySQL, made by MySQL and packaged with MySQL
- GR is an implementation of the Replicated Database State Machine theory
- Paxos based protocol
- GR allows writes on all Group Members (cluster nodes) simultaneously while retaining consistency
- GR implements conflict detection and resolution
- GR allows automatic distributed recovery
- Supported on all MySQL platforms !! (Linux, Windows, Solaris, OSX, FreeBSD)


And for users ?
- no longer necessary to handle server fail-over manually or with a complicated script
- GR provides fault tolerance
- GR enables update-everywhere setups
- GR handles crashes, failures, and re-connects automatically
- allows an easy setup of a highly available MySQL service!


ready ?

Migration from Master-Slave to GR


The plan


1) We install and set up MySQL InnoDB Cluster on one of the new servers.
2) We restore a backup.
3) We set up asynchronous replication on the new server.
4) We add a new instance to our group.
5) We point the application to one of our new nodes.
6) We wait and check that asynchronous replication has caught up.
7) We stop those asynchronous slaves.
8) We attach the mysql2 slave to the group.
9) We use MySQL Router for directing traffic.


LAB2: Prepare mysql3
Asynchronous slave

The latest MySQL 8.0.3-RC is already installed on mysql3.

Let's take a backup on mysql1:

[mysql1 ~]# xtrabackup --backup \
    --target-dir=/tmp/backup \
    --user=root \
    --password=X --host=127.0.0.1

[mysql1 ~]# xtrabackup --prepare \
    --target-dir=/tmp/backup

LAB2: Prepare mysql3 (2)
Asynchronous slave

Copy the backup from mysql1 to mysql3:

[mysql1 ~]# scp -r /tmp/backup mysql3:/tmp

And restore it:

[mysql3 ~]# systemctl stop mysqld
[mysql3 ~]# rm -rf /var/lib/mysql/*
[mysql3 ~]# xtrabackup --copy-back --target-dir=/tmp/backup
[mysql3 ~]# chown -R mysql. /var/lib/mysql

LAB3: mysql3 as asynchronous slave (2)
Asynchronous slave

Configure /etc/my.cnf with the minimal requirements:

[mysqld]
...
server_id = 3
enforce_gtid_consistency = on
gtid_mode = on
#log_bin             # new default
#log_slave_updates   # new default


LAB2: Prepare mysql3 (3)
Asynchronous slave

Let's start MySQL on mysql3:

[mysql3 ~]# systemctl start mysqld

[mysql3 ~]# mysql_upgrade


LAB3: mysql3 as asynchronous slave (1)

- find the GTIDs purged
- change MASTER
- set the purged GTIDs
- start replication


LAB3: mysql3 as asynchronous slave (2)

Find the latest purged GTIDs:

[mysql3 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002        167646328       b346474c-8601-11e6-9b39-08002718d305:1-771

Connect to mysql3 and setup replication:

mysql> CHANGE MASTER TO MASTER_HOST="mysql1",
           MASTER_USER="repl_async",
           MASTER_PASSWORD='Xslave',
           MASTER_AUTO_POSITION=1;

mysql> RESET MASTER;
mysql> SET global gtid_purged="VALUE FOUND PREVIOUSLY";

mysql> START SLAVE;

Check that you receive the application's traffic.
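The value fed to SET global gtid_purged comes straight from the third column of xtrabackup_binlog_info. As an offline sketch of that extraction step (the parsing helper is ours, not part of xtrabackup):

```python
# Sketch: pull the GTID set out of an xtrabackup_binlog_info line.
# The file holds: binlog file name, position, GTID set (whitespace-separated).

def gtid_from_binlog_info(text: str) -> str:
    """Return the GTID set column of an xtrabackup_binlog_info line."""
    fields = text.strip().split()  # file, position, gtid_set
    if len(fields) < 3:
        raise ValueError("no GTID set recorded (was the source GTID-enabled?)")
    return fields[2]

sample = "mysql-bin.000002\t167646328\tb346474c-8601-11e6-9b39-08002718d305:1-771"
print(gtid_from_binlog_info(sample))
```

The printed value is exactly what the lab pastes into SET global gtid_purged on the new slave.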


Administration made easy and more...

MySQL Shell


MySQL Shell
The MySQL Shell is an interactive JavaScript, Python, or SQL interface supporting development and administration for MySQL. MySQL Shell includes the AdminAPI (available in JavaScript and Python) which enables you to set up and manage InnoDB clusters. It provides a modern and fluent API which hides the complexity associated with configuring, provisioning, and managing an InnoDB cluster, without sacrificing power, flexibility, or security.


MySQL Shell (2)

As an example, the same operations as before, but using the Shell. [The original slides show this as screenshots.]



LAB4: MySQL InnoDB Cluster
Create a single instance cluster

Time to use the new MySQL Shell !

[mysql3 ~]# mysqlsh

Let's verify if our server is ready to become a member of a new cluster:

mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')

Change the configuration !

mysql-js> dba.configureLocalInstance()


LAB4: MySQL InnoDB Cluster (2)

Restart mysqld to use the new configuration:

[mysql3 ~]# systemctl restart mysqld

Create a single instance cluster:

[mysql3 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')

mysql-js> \c root@mysql3:3306

mysql-js> cluster = dba.createCluster('perconalive')


LAB4: Cluster Status

mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "ssl": "DISABLED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
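OK_NO_TOLERANCE follows from Group Replication's majority rule: a group of n members keeps quorum only while more than n/2 of them are reachable, so it tolerates floor((n-1)/2) failures. A quick sketch of that arithmetic (the function name is ours, not an AdminAPI call):

```python
# Sketch: how many member failures a Group Replication group of a given size
# can tolerate while keeping a majority (quorum).

def tolerated_failures(members: int) -> int:
    """floor((n - 1) / 2): failures survivable with a majority still alive."""
    return (members - 1) // 2

for n in (1, 2, 3, 5):
    print(n, "members ->", tolerated_failures(n), "tolerated failure(s)")
```

With one member the answer is 0, which is exactly why the single-instance cluster above reports "NOT tolerant to any failures"; three members is the smallest group that survives one crash.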


LAB5: add mysql4 to the cluster (1)

Add mysql4 to the Group:
- restore the backup
- set the purged GTIDs
- use MySQL Shell


LAB5: add mysql4 to the cluster (2)

Copy the backup from mysql1 to mysql4:

[mysql1 ~]# scp -r /tmp/backup mysql4:/tmp

And restore it:

[mysql4 ~]# systemctl stop mysqld
[mysql4 ~]# rm -rf /var/lib/mysql/*
[mysql4 ~]# xtrabackup --copy-back --target-dir=/tmp/backup
[mysql4 ~]# chown -R mysql. /var/lib/mysql

Start MySQL on mysql4:

[mysql4 ~]# systemctl start mysqld

[mysql4 ~]# mysql_upgrade



LAB5: MySQL Shell to add an instance (3)

[mysql4 ~]# mysqlsh

Let's verify the config:

mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')

And change the configuration:

mysql-js> dba.configureLocalInstance()

Restart the service to enable the changes:

[mysql4 ~]# systemctl restart mysqld


LAB5: MySQL InnoDB Cluster (4)
Group of 2 instances

Find the latest purged GTIDs:

Find the latest purged GTIDs:

[mysql4 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002        167646328       b346474c-8601-11e6-9b39-08002718d305:1-77177

Connect to mysql4 and set GTID_PURGED

[mysql4 ~]# mysqlsh

mysql-js> \c root@mysql4:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET global gtid_purged="VALUE FOUND PREVIOUSLY";


LAB5: MySQL InnoDB Cluster (5)

mysql-sql> \js

mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')

mysql-js> \c root@mysql3:3306

mysql-js> cluster = dba.getCluster()

mysql-js> cluster.checkInstanceState('root@mysql4:3306')

mysql-js> cluster.addInstance("root@mysql4:3306")

mysql-js> cluster.status()


Cluster Status

mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "ssl": "DISABLED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active",
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "RECOVERING"
            }
        }
    }
}

Recovering progress

On standard MySQL, monitor the group_replication_recovery channel to see the progress:

mysql4> show slave status for channel 'group_replication_recovery'\G
*************************** 1. row ***************************
         Slave_IO_State: Waiting for master to send event
            Master_Host: mysql3
            Master_User: mysql_innodb_cluster_rpl_user
            ...
       Slave_IO_Running: Yes
      Slave_SQL_Running: Yes
            ...
     Retrieved_Gtid_Set: 6e7d7848-860f-11e6-92e4-08002718d305:1-6,
7c1f0c2d-860d-11e6-9df7-08002718d305:1-15,
b346474c-8601-11e6-9b39-08002718d305:1964-77177,
e8c524df-860d-11e6-9df7-08002718d305:1-2
      Executed_Gtid_Set: 7c1f0c2d-860d-11e6-9df7-08002718d305:1-7,
b346474c-8601-11e6-9b39-08002718d305:1-45408,
e8c524df-860d-11e6-9df7-08002718d305:1-2
            ...
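One way to read this output: for a given source UUID, the gap between Retrieved_Gtid_Set and Executed_Gtid_Set is the recovery backlog still to be applied. A rough sketch of that arithmetic, assuming simple "uuid:first-last" intervals as shown above (the helper is illustrative, not a MySQL API):

```python
# Sketch: estimate the recovery backlog for one source UUID by comparing the
# highest retrieved transaction number against the highest executed one.
# Assumes each UUID appears as a single "uuid:first-last" interval.

def interval_end(gtid_set: str, uuid: str) -> int:
    """Highest transaction number for `uuid` in a set like 'uuid:1-45408'."""
    for part in gtid_set.split(","):
        part = part.strip()
        if part.startswith(uuid):
            rng = part.split(":")[-1]          # "1964-77177"
            return int(rng.split("-")[-1])     # 77177
    return 0

uuid = "b346474c-8601-11e6-9b39-08002718d305"
retrieved = f"{uuid}:1964-77177"   # from Retrieved_Gtid_Set above
executed = f"{uuid}:1-45408"       # from Executed_Gtid_Set above
print("transactions left:", interval_end(retrieved, uuid) - interval_end(executed, uuid))
```

When the difference reaches zero for every UUID, the member leaves RECOVERING and turns ONLINE.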


point the application to the cluster

Migrate the application


LAB6: Migrate the application

Make sure the gtid_executed range on mysql2 is lower than or equal to the one on mysql3:

mysql[2-3]> show global variables like 'gtid_executed'\G

When they are OK, stop asynchronous replication on mysql2 and mysql3:

mysql2> stop slave;
mysql3> stop slave;
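The "lower than or equal" check above is really a GTID-set containment test; on a live server, SELECT GTID_SUBSET(set2, set3) performs it properly. As an offline sketch for the simple single-interval sets used in this lab (the helper functions are ours, and multi-interval sets like "uuid:1-5:7-9" are out of scope):

```python
# Sketch: check that mysql2's gtid_executed is contained in mysql3's.
# For "uuid:1-N" intervals, containment reduces to comparing upper bounds.

def parse(gtid_set: str) -> dict:
    """Map each source UUID to its highest transaction number."""
    out = {}
    for part in gtid_set.replace("\n", "").split(","):
        uuid, rng = part.strip().rsplit(":", 1)
        out[uuid] = int(rng.split("-")[-1])
    return out

def is_subset(smaller: str, larger: str) -> bool:
    """True when every interval of `smaller` fits inside `larger`."""
    big = parse(larger)
    return all(uuid in big and hi <= big[uuid] for uuid, hi in parse(smaller).items())

m2 = "b346474c-8601-11e6-9b39-08002718d305:1-77177"
m3 = ("b346474c-8601-11e6-9b39-08002718d305:1-77177,"
      "7c1f0c2d-860d-11e6-9df7-08002718d305:1-7")
print(is_subset(m2, m3))   # safe to stop the slaves when this is True
```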


LAB6: Migrate the application

Now we need to point the application to mysql3; this is the only downtime!

...
[ 21257s] threads: 4, tps: 12.00, reads: 167.94, writes: 47.98, response time: 18
[ 21258s] threads: 4, tps: 6.00, reads: 83.96, writes: 23.99, response time: 14
[ 21259s] threads: 4, tps: 7.00, reads: 98.05, writes: 28.01, response time: 16
[ 31250s] threads: 4, tps: 8.00, reads: 111.95, writes: 31.99, response time: 30
[ 31251s] threads: 4, tps: 11.00, reads: 154.01, writes: 44.00, response time: 13
[ 31252s] threads: 4, tps: 11.00, reads: 153.94, writes: 43.98, response time: 12
[ 31253s] threads: 4, tps: 10.01, reads: 140.07, writes: 40.02, response time: 17
^C
[mysql1 ~]# run_app.sh mysql3

Now they can forget about mysql1:

mysql[2-3]> reset slave all;


the previous slave (mysql2) can now be part of the cluster

Add a third instance


LAB7: Add mysql2 to the group

We first upgrade to MySQL 8.0.3:

[mysql2 ~]# systemctl stop mysqld
[mysql2 ~]# rpm -Uvh /root/rpms/mysql*rpm
[mysql2 ~]# systemctl start mysqld
[mysql2 ~]# mysql_upgrade

Then we validate the instance using MySQL Shell and configure it:

[mysql2 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')

mysql-js> dba.configureLocalInstance()

[mysql2 ~]# systemctl restart mysqld


LAB7: Add mysql2 to the group (2)

Back in MySQL Shell we add the new instance:

[mysql2 ~]# mysqlsh

mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')

mysql-js> \c root@mysql3:3306

mysql-js> cluster = dba.getCluster()

mysql-js> cluster.addInstance("root@mysql2:3306")

mysql-js> cluster.status()


LAB7: Add mysql2 to the group (3)

{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/O",
                "readReplicas": {},
                ...


writing to a single server

Single Primary Mode


Default = Single Primary Mode

By default, MySQL InnoDB Cluster enables Single Primary Mode.

mysql> show global variables like 'group_replication_single_primary_mode';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | ON    |
+---------------------------------------+-------+

In Single Primary Mode, a single member acts as the writable master (PRIMARY) and the rest of the members act as hot-standbys (SECONDARY).

The group itself coordinates and configures itself automatically to determine which member will act as the PRIMARY, through a leader election mechanism.


Who's the Primary Master ? old fashion style

As the Primary Master is elected, all nodes that are part of the group know which one was elected. This value is exposed in status variables:

mysql> show status like 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 28a4e51f-860e-11e6-bdc4-08002718d305 |
+----------------------------------+--------------------------------------+

mysql> select member_host as "primary master"
         from performance_schema.global_status
         join performance_schema.replication_group_members
        where variable_name = 'group_replication_primary_member'
          and member_id = variable_value;
+----------------+
| primary master |
+----------------+
| mysql3         |
+----------------+


Who's the Primary Master ? new fashion style

mysql> select member_host
         from performance_schema.replication_group_members
        where member_role = 'PRIMARY';
+-------------+
| member_host |
+-------------+
| mysql3      |
+-------------+



Create a Multi-Primary Cluster

It's also possible to create a Multi-Primary Cluster using the Shell:

mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true})

A new InnoDB cluster will be created on instance 'root@mysql3:3306'.

The MySQL InnoDB cluster is going to be set up in advanced Multi-Master Mode. Before continuing you have to confirm that you understand the requirements and limitations of Multi-Master Mode. Please read the manual before proceeding.

I have read the MySQL InnoDB cluster manual and I understand the requirements and limitations of advanced Multi-Master Mode. Confirm [y|N]:

Or you can force it to avoid interaction (for automation):

mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true, force: true})


get more info

Monitoring


Performance Schema

Group Replication uses Performance_Schema to expose status:

mysql3> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: ade14d5c-9e1e-11e7-b034-08002718d305
   MEMBER_HOST: mysql4
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.3
*************************** 2. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: b9d01593-9dfb-11e7-8ca6-08002718d305
   MEMBER_HOST: mysql3
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.3


103 / 152

mysql3> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
                                      CHANNEL_NAME: group_replication_applier
                                        GROUP_NAME: 8fc848d7-9e1c-11e7-9407...
                                       SOURCE_UUID: 8fc848d7-9e1c-11e7-9407...
                                         THREAD_ID: NULL
                                     SERVICE_STATE: ON
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
                          RECEIVED_TRANSACTION_SET: 8fc848d7-9e1c-11e7-9407...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
                                 LAST_ERROR_NUMBER: 0
                                LAST_ERROR_MESSAGE:
                              LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
                           LAST_QUEUED_TRANSACTION: 8fc848d7-9e1c-11e7-9407...
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.486...
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.486...
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
        QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00


104 / 152

Member State
These are the different possible states for a group member:

ONLINE

OFFLINE

RECOVERING

ERROR: when a node is leaving but the plugin was not instructed to stop

UNREACHABLE
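As a rough sketch of how routing logic typically consumes these states (the helper below and its policy are illustrative assumptions, not part of the tutorial scripts), only ONLINE members should receive traffic:

```python
# Illustrative sketch (assumption): treat only ONLINE members as routable.
# The state names come from performance_schema.replication_group_members.

ROUTABLE_STATES = {"ONLINE"}

def is_routable(member_state: str) -> bool:
    """Return True only for members that can safely receive traffic."""
    return member_state.upper() in ROUTABLE_STATES

# Example: of all possible states, only ONLINE qualifies.
states = ["ONLINE", "OFFLINE", "RECOVERING", "ERROR", "UNREACHABLE"]
print([s for s in states if is_routable(s)])  # ['ONLINE']
```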


105 / 152

Status information & metrics

Members
mysql> SELECT member_host, member_state, member_role
       FROM performance_schema.replication_group_members;


106 / 152

Status information & metrics

Members
mysql> SELECT member_host, member_state, member_role
       FROM performance_schema.replication_group_members;

+-------------+--------------+-------------+
| member_host | member_state | member_role |
+-------------+--------------+-------------+
| mysql4      | ONLINE       | SECONDARY   |
| mysql3      | ONLINE       | PRIMARY     |
+-------------+--------------+-------------+
2 rows in set (0.00 sec)


107 / 152

Status information & metrics - connections
mysql> SELECT * FROM performance_schema.replication_connection_status\G


108 / 152

Status information & metrics - connections
mysql> SELECT * FROM performance_schema.replication_connection_status\G

*************************** 1. row ***************************
                                      CHANNEL_NAME: group_replication_applier
                                        GROUP_NAME: 8fc848d7-9e1c-11e7-9407-...
                                       SOURCE_UUID: 8fc848d7-9e1c-11e7-9407-...
                                         THREAD_ID: NULL
                                     SERVICE_STATE: ON
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
                          RECEIVED_TRANSACTION_SET: 8fc848d7-9e1c-11e7-9407-...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
                                 LAST_ERROR_NUMBER: 0
                                LAST_ERROR_MESSAGE:
                              LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
                           LAST_QUEUED_TRANSACTION: 8fc848d7-9e1c-11e7-9407-...
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.4864...
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.4865...
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
        QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00


109 / 152

Status information & metrics

Previously there were only local node statistics; now they are exposed all over the Group:

mysql> select * from performance_schema.replication_group_member_stats\G


110 / 152

Status information & metrics

Previously there were only local node statistics; now they are exposed all over the Group:

mysql> select * from performance_schema.replication_group_member_stats\G

*************************** 1. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 15059231192196925:2
                                 MEMBER_ID: ade14d5c-9e1e-11e7-b034-08002...
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 27992
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8fc848d7-9e1c-11e7-9407-08002...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
            LAST_CONFLICT_FREE_TRANSACTION: 8fc848d7-9e1c-11e7-9407-08002...
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 27992
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0


111 / 152

*************************** 2. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 15059231192196925:2
                                 MEMBER_ID: b9d01593-9dfb-11e7-8ca6-08002...
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 28000
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8fc848d7-9e1c-11e7-9407-08002...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
            LAST_CONFLICT_FREE_TRANSACTION: 8fc848d7-9e1c-11e7-9407-08002...
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 1
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 28000
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0


112 / 152

Performance_Schema
You can find GR information in the following Performance_Schema tables:

replication_applier_configuration

replication_applier_status

replication_applier_status_by_worker

replication_connection_configuration

replication_connection_status

replication_group_member_stats

replication_group_members


113 / 152

Status during recovery
mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G


114 / 152

Status during recovery
mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G

*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: <NULL>
                  Master_User: gr_repl
                  Master_Port: 0
                          ...
               Relay_Log_File: mysql4-relay-bin-group_replication_recovery.000001
                          ...
             Slave_IO_Running: No
            Slave_SQL_Running: No
                          ...
            Executed_Gtid_Set: 5de4400b-3dd7-11e6-8a71-08002774c31b:1-814089,
                               afb80f36-2bff-11e6-84e0-0800277dd3bf:1-5718
                          ...
                 Channel_Name: group_replication_recovery


115 / 152

Sys Schema
The easiest way to detect whether a node is a member of the primary component (when your nodes are partitioned due to network issues, for example), and is therefore a valid candidate for routing queries, is to use the sys schema.

Additional information for sys can be found at https://goo.gl/XFp3bt

On the primary node:

[mysql3 ~]# mysql < /root/addition_to_sys_mysql8.sql


116 / 152

Sys Schema
Is this node part of the PRIMARY partition:

mysql3> SELECT sys.gr_member_in_primary_partition();
+------------------------------------+
| sys.gr_node_in_primary_partition() |
+------------------------------------+
| YES                                |
+------------------------------------+


117 / 152

Sys Schema
Is this node part of the PRIMARY partition:

mysql3> SELECT sys.gr_member_in_primary_partition();
+------------------------------------+
| sys.gr_node_in_primary_partition() |
+------------------------------------+
| YES                                |
+------------------------------------+

To use as a healthcheck:

mysql3> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       | 0                   | 0                    |
+------------------+-----------+---------------------+----------------------+
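To show how the healthcheck columns can be combined into a routing decision, here is a minimal sketch; the function name and the lag threshold are assumptions for illustration, not part of the sys additions.

```python
# Illustrative sketch: interpret a row of sys.gr_member_routing_candidate_status.
# The max_behind threshold is an assumed policy knob, not a MySQL setting.

def routing_candidate(viable_candidate: str, read_only: str,
                      transactions_behind: int, max_behind: int = 100):
    """Return 'rw', 'ro', or None (do not route) for this member."""
    if viable_candidate != "YES":
        return None  # not part of the primary partition
    if transactions_behind > max_behind:
        return None  # applier queue too long; reads would be stale
    return "ro" if read_only == "YES" else "rw"

print(routing_candidate("YES", "NO", 0))   # 'rw'  (writable primary)
print(routing_candidate("YES", "YES", 0))  # 'ro'  (healthy secondary)
```

A lagging member (e.g. transactions_behind = 950, as seen in LAB8) would be excluded by such a policy.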


118 / 152

LAB8: Sys Schema - Health Check
On one of the non-Primary nodes, run the following command:

mysql-sql> flush tables with read lock;


119 / 152

LAB8: Sys Schema - Health Check
On one of the non-Primary nodes, run the following command:

mysql-sql> flush tables with read lock;

Now you can verify what the healthcheck exposes to you:

mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       | 950                 | 0                    |
+------------------+-----------+---------------------+----------------------+


120 / 152

LAB8: Sys Schema - Health Check
On one of the non-Primary nodes, run the following command:

mysql-sql> flush tables with read lock;

Now you can verify what the healthcheck exposes to you:

mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       | 950                 | 0                    |
+------------------+-----------+---------------------+----------------------+

mysql-sql> UNLOCK TABLES;


121 / 152

application interaction

MySQL Router


122 / 152

MySQL Router
MySQL Router is lightweight middleware that provides transparent routing between your application and backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers.


123 / 152

MySQL Router
MySQL Router is lightweight middleware that provides transparent routing between your application and backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers.

MySQL Router doesn't require any specific configuration. It configures itself automatically (bootstrap) using MySQL InnoDB Cluster's metadata.


124 / 152

LAB9: MySQL RouterWe will now use mysqlrouter between our application and the cluster.


125 / 152

LAB9: MySQL Router (2)
Configure MySQL Router that will run on the app server (mysql1). We bootstrap it using the Primary-Master:

[root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter
Please enter MySQL password for root:
WARNING: The MySQL server does not have SSL ...

Bootstrapping system MySQL Router instance...
MySQL Router has now been configured for the InnoDB cluster 'perconalive'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

[root@mysql1 ~]# chown -R mysqlrouter. /var/lib/mysqlrouter


126 / 152

LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:


127 / 152

LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:

in /etc/mysqlrouter/mysqlrouter.conf:

[routing:perconalive_default_rw]
-bind_port=6446
+bind_port=3306


128 / 152

LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:

in /etc/mysqlrouter/mysqlrouter.conf:

[routing:perconalive_default_rw]
-bind_port=6446
+bind_port=3306

We can stop mysqld on mysql1 and start mysqlrouter (as a systemd service):

[mysql1 ~]# systemctl stop mysqld
[mysql1 ~]# systemctl start mysqlrouter


129 / 152

LAB9: MySQL Router (4)
Before killing a member, we will change systemd's default behavior, which restarts mysqld immediately:


130 / 152

LAB9: MySQL Router (4)
Before killing a member, we will change systemd's default behavior, which restarts mysqld immediately:

In /usr/lib/systemd/system/mysqld.service, add the following under [Service]:

RestartSec=30

[mysql3 ~]# systemctl daemon-reload


131 / 152

LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):

[mysql1 ~]# run_app.sh


132 / 152

LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):

[mysql1 ~]# run_app.sh

Check the app and kill mysqld on mysql3 (the Primary-Master R/W node)!

[mysql3 ~]# kill -9 $(pidof mysqld)


133 / 152

LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):

[mysql1 ~]# run_app.sh

Check the app and kill mysqld on mysql3 (the Primary-Master R/W node)!

[mysql3 ~]# kill -9 $(pidof mysqld)

mysql2> select member_host from performance_schema.replication_group_members
        where member_role='PRIMARY';
+-------------+
| member_host |
+-------------+
| mysql4      |
+-------------+
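The failover check above can also be expressed as a small helper over rows of performance_schema.replication_group_members; the row format (a list of dicts) is an assumption made for this sketch, which is not part of the tutorial.

```python
# Illustrative sketch: find the current primary from group-member rows.
# Column names match performance_schema.replication_group_members.

def current_primary(members):
    """Return the host of the ONLINE PRIMARY member, or None if absent."""
    for m in members:
        if m["member_role"] == "PRIMARY" and m["member_state"] == "ONLINE":
            return m["member_host"]
    return None

# After mysql3 is killed, the group elects mysql4 as the new primary:
members = [
    {"member_host": "mysql2", "member_state": "ONLINE", "member_role": "SECONDARY"},
    {"member_host": "mysql4", "member_state": "ONLINE", "member_role": "PRIMARY"},
]
print(current_primary(members))  # mysql4
```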


134 / 152

ProxySQL / HAProxy / F5 / ...

3rd party router/proxy


135 / 152

3rd party router/proxy
MySQL InnoDB Cluster can also work with a third-party router/proxy.


136 / 152

3rd party router/proxy
MySQL InnoDB Cluster can also work with a third-party router/proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.


137 / 152

3rd party router/proxy
MySQL InnoDB Cluster can also work with a third-party router/proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.

The important part of such an implementation is to use a good health check to verify that the MySQL server you plan to route traffic to is in a valid state.


138 / 152

3rd party router/proxy
MySQL InnoDB Cluster can also work with a third-party router/proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.

The important part of such an implementation is to use a good health check to verify that the MySQL server you plan to route traffic to is in a valid state.

MySQL Router implements that natively, and it's very easy to deploy.


139 / 152

ProxySQL also has native support for Group Replication, which makes it perhaps the best choice for advanced users.

3rd party router/proxy
MySQL InnoDB Cluster can also work with a third-party router/proxy.

If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice.

The important part of such an implementation is to use a good health check to verify that the MySQL server you plan to route traffic to is in a valid state.

MySQL Router implements that natively, and it's very easy to deploy.


140 / 152

operational tasks

Recovering Node


141 / 152

Recovering Nodes/Members
The old master (mysql3) got killed.


142 / 152

Recovering Nodes/Members
The old master (mysql3) got killed.

MySQL got restarted automatically by systemd


143 / 152

Recovering Nodes/Members
The old master (mysql3) got killed.

MySQL got restarted automatically by systemd

Let´s add mysql3 back to the cluster


144 / 152

LAB10: Recovering Nodes/Members
[mysql3 ~]# mysqlsh

mysql-js> \c root@mysql4:3306 # The current master

mysql-js> cluster = dba.getCluster()

mysql-js> cluster.status()

mysql-js> cluster.rejoinInstance("root@mysql3:3306")

Rejoining the instance to the InnoDB cluster. Depending on the original problem that made the instance unavailable, the rejoin operation might not be successful and further manual steps will be needed to fix the underlying problem.

Please monitor the output of the rejoin operation and take necessary action if the instance cannot rejoin.

Please provide the password for 'root@mysql3:3306':
Rejoining instance to the cluster ...

The instance 'root@mysql3:3306' was successfully rejoined on the cluster.

The instance 'mysql3:3306' was successfully added to the MySQL Cluster.


145 / 152

mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql4:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}


146 / 152

Recovering Nodes/Members (automatically)
This time, before killing a member of the group, we will persist the configuration on disk in my.cnf.


147 / 152

Recovering Nodes/Members (automatically)
This time, before killing a member of the group, we will persist the configuration on disk in my.cnf.

We will again use the same MySQL Shell command as previously, dba.configureLocalInstance(), but this time when all nodes are already part of the Group.


148 / 152

LAB10: Recovering Nodes/Members (2)
Verify that all nodes are ONLINE.

...
mysql-js> cluster.status()


149 / 152

LAB10: Recovering Nodes/Members (2)
Verify that all nodes are ONLINE.

...
mysql-js> cluster.status()

Then on all nodes run:

mysql-js> dba.configureLocalInstance()


150 / 152

LAB10: Recovering Nodes/Members (3)
Kill one node again:

[mysql3 ~]# kill -9 $(pidof mysqld)

systemd will restart mysqld; verify that the node rejoined the group.


151 / 152

Thank you !

Any Questions ?


152 / 152