
Percona XtraDB Cluster Webinar

Tutorial Setup

Single master node; two slave nodes
node1 will also run sysbench

Sample application, simulating activity


Run Sample Application

Start the application and verify that it is running

Use another window for this, or use screen:

node1# run_sysbench_oltp.sh

Is replication flowing to all the nodes?
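One quick way to check (a hedged sketch; it assumes node2 and node3 are the slaves and that the mysql client can run locally on each) is to confirm both replication threads are running and the lag is near zero:

node2# mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
node3# mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
# Expect Yes/Yes for the two threads and a small Seconds_Behind_Master on both slaves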


XtraDB Cluster Tutorial

MIGRATING FROM MASTER/SLAVE


What is Our Plan?

We have an application working on node1.
We want to migrate our master/slave setup to a PXC cluster with minimal downtime.
We will build a cluster from one slave and then add the other servers to it one at a time.


Upgrade node3

node3# systemctl stop mysql

node3# yum swap -- remove Percona-Server-shared-57 Percona-Server-server-57 \
    -- install Percona-XtraDB-Cluster-shared-57 Percona-XtraDB-Cluster-server-57

node3# systemctl start mysql

node3# mysql_upgrade

node3# mysql -e "show slave status\G"

node3# systemctl stop mysql   # for the next steps

The Percona XtraDB Cluster Server and Client packages are drop-in replacements for Percona Server and even community MySQL.


Configure Node3 for PXC

[mysqld]
# Leave existing settings and add these

binlog_format = ROW
log-slave-updates

# galera settings
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = mycluster
wsrep_cluster_address = gcomm://node1,node2,node3
wsrep_node_name = node3
wsrep_node_address = 192.168.70.4
wsrep_sst_auth = sst:secret

innodb_autoinc_lock_mode = 2

Start node3 with:

systemctl start mysql@bootstrap


Checking Cluster State

Use 'myq_status' to check the state of Galera on node3:

node3# /usr/local/bin/myq_status wsrep

Replication:

node3# mysql -e "show slave status\G"
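If myq_status isn't available, the same state can be read from the standard Galera status counters; a minimal sketch:

node3# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"
node3# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"
# At this point expect wsrep_cluster_size = 1 and wsrep_cluster_status = Primary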


Upgrade node2

We only need a single node replicating from our production master.

node2> STOP SLAVE; RESET SLAVE ALL;

Upgrade/Swap packages as before

node2# systemctl stop mysql

node2# yum swap -- remove Percona-Server-shared-57 Percona-Server-server-57 \
    -- install Percona-XtraDB-Cluster-shared-57 Percona-XtraDB-Cluster-server-57

Edit node2's my.cnf as you did for node3.
What needs to be different from node3's config?


node2's Config

[mysqld]
# Leave existing settings and add these

binlog_format = ROW
log-slave-updates

# galera settings
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = mycluster
wsrep_cluster_address = gcomm://node1,node2,node3
wsrep_node_name = node2
wsrep_node_address = 192.168.70.3
wsrep_sst_auth = sst:secret

innodb_autoinc_lock_mode = 2


What's going to happen?

When node2 is started, what's going to happen?
How does node2 get a copy of the dataset?

Can it use the existing data it already has?
Do we have to bootstrap node2?


State Snapshot Transfer (SST)

Transfer a full backup of an existing cluster member (donor) to a new node entering the cluster (joiner).
We configured our SST method to use 'xtrabackup-v2'.
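The configs shown earlier don't set the SST method explicitly; if you need to, it would typically go alongside the other wsrep settings (xtrabackup-v2 is also the default in PXC 5.7), for example:

[mysqld]
wsrep_sst_method = xtrabackup-v2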


Additional Xtrabackup SST Setup

What about that sst user in your my.cnf?

[mysqld]
...
wsrep_sst_auth = sst:secret
...

User/Grants need to be added for Xtrabackup

node3> CREATE USER 'sst'@'localhost' IDENTIFIED BY 'secret';
node3> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost';

The good news is that once you get things figured out the first time, it's typically very easy to get an SST working on subsequent nodes.


Up and Running node2

Now you can start node2:

node2# systemctl start mysql

Watch node2's error log for progress

node2# tail -f /var/lib/mysql/error.log
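If you prefer a quick status check over tailing the log, the standard wsrep state variable shows when the joiner has finished its SST and caught up; a minimal sketch:

node2# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
# Should report 'Synced' once node2 has joined and caught up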


Taking Stock of our Achievements

We have a two-node cluster:

Our production master replicates, asynchronously, into the cluster via node3.
What might we do now in a real migration before proceeding?


Finishing the Migration

How should we finish the migration?
How do we minimize downtime?
What do we need to do to ensure there are no data inconsistencies?
How might we roll back?


Our Migration Steps

Shut down the application pointing to node1
Shut down (and RESET) replication on node3 from node1 (see the catch-up check after these steps)

node3> STOP SLAVE; RESET SLAVE ALL;

Start the application pointing to node3
Rebuild node1 as we did for node2

node1# systemctl stop mysql
node1# yum swap -- remove Percona-Server-shared-57 Percona-Server-server-57 \
    -- install Percona-XtraDB-Cluster-shared-57 Percona-XtraDB-Cluster-server-57
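The catch-up check referenced above: before issuing STOP SLAVE on node3 (while the application is stopped), it's worth confirming node3 has applied everything from node1. A hedged sketch:

node3# mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Seconds_Behind_Master|Read_Master_Log_Pos|Exec_Master_Log_Pos'
# Expect Seconds_Behind_Master: 0 and matching read/exec positions before switching the application over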


node1's Config

[mysqld]
# Leave existing settings and add these

binlog_format = ROW
log-slave-updates

# galera settings
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = mycluster
wsrep_cluster_address = gcomm://node1,node2,node3
wsrep_node_name = node1
wsrep_node_address = 192.168.70.2
wsrep_sst_auth = sst:secret

innodb_autoinc_lock_mode = 2

node1# systemctl start mysql
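Once node1 has started and joined, a quick way to confirm all three nodes are in the cluster (using the standard Galera status variable):

node1# mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"
# Expect a value of 3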


Where are We Now?


XtraDB Cluster Tutorial

APPLICATION HIGH AVAILABILITY


ProxySQL Architecture


Setting up ProxySQL

Install and start ProxySQL - just on node1:

node1# yum install proxysql

node1# systemctl start proxysql

Create monitor user

node1 mysql> CREATE USER 'monitor'@'%' IDENTIFIED BY 'monit0r';


Configuring ProxySQL

Configuration is done via SQL
ProxySQL "speaks" the MySQL protocol
Backed by SQLite
Changes are made to a "staging" area

Must be loaded to "runtime"
Must be saved to disk to persist across a restart

node1# mysql -P 6032 -uadmin -padmin -h 127.0.0.1

proxysql> SHOW TABLES;

proxysql> UPDATE .. SET .. ;

proxysql> LOAD MYSQL [SERVERS|QUERY RULES|USERS|VARIABLES] TO RUNTIME;
proxysql> SAVE MYSQL [SERVERS|QUERY RULES|USERS|VARIABLES] TO DISK;


New Ports and Monitoring

Listen on different ports; default is 6033

proxysql> UPDATE global_variables
          SET variable_value = '0.0.0.0:4306;0.0.0.0:5306;/tmp/proxysql.sock'
          WHERE variable_name = 'mysql-interfaces';

Set the password for the monitor user we created earlier

proxysql> UPDATE global_variables SET variable_value = 'monit0r'
          WHERE variable_name = 'mysql-monitor_password';

Save and Restart ProxySQL

proxysql> SAVE MYSQL VARIABLES TO DISK;

proxysql> PROXYSQL RESTART;


Add Some Servers

Define our "master" / "slave" pool
Add hosts to create hostgroups with weights

-- Hostgroup 10 - WRITE
proxysql> INSERT INTO mysql_servers (hostgroup_id, hostname, weight, comment)
          VALUES (10, '192.168.70.2', 1000000, 'WRITE');

-- Hostgroup 20 - READS
proxysql> INSERT INTO mysql_servers (hostgroup_id, hostname, weight, comment)
          VALUES (20, '192.168.70.3', 1000, 'READ'),
                 (20, '192.168.70.4', 1000, 'READ');
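These rows live only in ProxySQL's staging area until they are loaded to runtime (covered a few slides ahead); you can review what is staged from the admin interface, for example:

proxysql> SELECT hostgroup_id, hostname, weight, comment FROM mysql_servers;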


Add a User

Add the user with which to connect to the backend MySQL hosts.
This user is used both to authenticate against ProxySQL and as the account for connections to the backend MySQL servers.
You can configure separate users for frontend and backend authentication.

proxysql> INSERT INTO mysql_users (username, password, default_hostgroup)
          VALUES ('sbuser', 'sbpass', 10);

MySQL user:

node1> CREATE USER 'sbuser'@'%' IDENTIFIED BY 'sbpass';
node1> GRANT ALL ON *.* TO 'sbuser'@'%';


Some Rules to Follow

Add the rules for read/write splitting.

proxysql> INSERT INTO mysql_query_rules (rule_id, destination_hostgroup, active, retries, match_digest, apply)
          VALUES (1, 10, 1, 3, '^SELECT.*FOR UPDATE', 1);

proxysql> INSERT INTO mysql_query_rules (rule_id, destination_hostgroup, active, retries, match_digest, apply)
          VALUES (2, 20, 1, 3, '^SELECT.*$', 1);
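Rules are evaluated in rule_id order, and because apply=1 matching stops at the first hit, so SELECT ... FOR UPDATE is routed to the write hostgroup before the generic SELECT rule is considered. To review what is staged:

proxysql> SELECT rule_id, active, match_digest, destination_hostgroup, apply FROM mysql_query_rules ORDER BY rule_id;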


Load Everything to Runtime

So far, everything has been written to the staging area within ProxySQL.
It needs to be loaded to the "runtime"
It needs to be saved to disk to persist across a ProxySQL restart

proxysql> LOAD MYSQL SERVERS TO RUNTIME;
proxysql> LOAD MYSQL QUERY RULES TO RUNTIME;
proxysql> LOAD MYSQL USERS TO RUNTIME;

proxysql> SAVE MYSQL SERVERS TO DISK;
proxysql> SAVE MYSQL QUERY RULES TO DISK;
proxysql> SAVE MYSQL USERS TO DISK;


Test Splitting Connection

node3# i=0; while [ $i -lt 500 ]; do mysql -BNe "SELECT @@hostname" -usbuser -psbpass -h node1 -P 4306 2>/dev/null; ((i++)); done | sort | uniq -c
    245 node2
    255 node3

Check ProxySQL Stats

node1# mysql -P6032 -ustats -pstats -h 127.0.0.1

proxysql> SELECT * FROM stats_mysql_query_rules;
+---------+------+
| rule_id | hits |
+---------+------+
| 1       | 0    |
| 2       | 500  |
+---------+------+
2 rows in set (0.00 sec)


Using the Scheduler

ProxySQL has an event scheduler that can run scripts

Uses the 'comment' field in the failover logic

proxysql> INSERT INTO scheduler
          VALUES (1, 1, 3000, '/usr/bin/proxysql_galera_checker',
                  '10', '20', '1', '1',
                  '/var/lib/proxysql/proxysql_galera_check.log', 'Checker');

proxysql> LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK;
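To confirm the checker is actually active, inspect the runtime copy of the scheduler table; for example:

proxysql> SELECT id, active, interval_ms, filename FROM runtime_scheduler;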


Updating Application

sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --mysql-user=sbuser --mysql-password=sbpass --mysql-db=test \
  --mysql-host=127.0.0.1 --mysql-port=4306 --mysql-ignore-errors=all \
  --oltp-tables-count=1 --oltp-table-size=250000 --oltp-auto-inc=off \
  --num-threads=1 --report-interval=1 --max-requests=0 --tx-rate=10 \
  run | grep tps

ProxySQL:

proxysql> select * from stats_mysql_connection_pool;


Testing Out the Scheduler

Shut down node3 and observe the changes to runtime_mysql_servers

proxysql> SELECT * FROM stats_mysql_connection_pool;
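The checker works by flipping server status in runtime_mysql_servers, so that table is worth watching as well; for example:

proxysql> SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers;
-- The stopped node should typically show a non-ONLINE status (e.g. OFFLINE_SOFT or SHUNNED)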


XtraDB Cluster Tutorial

MONITORING GALERA


Percona Monitoring and Management

(*) https://pmmdemo.percona.com/

Start PMM container

Start Docker
Create PMM container

node1# systemctl restart docker
node1# docker pull percona/pmm-server:latest
node1# docker create \
         -v /opt/prometheus/data \
         -v /opt/consul-data \
         -v /var/lib/mysql \
         -v /var/lib/grafana \
         --name pmm-data \
         percona/pmm-server:latest /bin/true


Start the container

node1# docker run -d \
         -p 80:80 \
         --volumes-from pmm-data \
         --name pmm-server \
         --restart always \
         percona/pmm-server:latest
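To make sure the container actually came up before opening the dashboard, a quick check:

node1# docker ps --filter name=pmm-server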

Check the Dashboard

open http://192.168.70.2/graph/


Configure PMM Clients on All Nodes

node1-3# pmm-admin config --server 192.168.70.2
node1-3# pmm-admin add mysql
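To verify the clients registered correctly, pmm-admin can list what each node is monitoring and test connectivity to the server; for example:

node1-3# pmm-admin list
node1-3# pmm-admin check-network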

Check the Dashboard again

open http://192.168.70.2/graph/


Questions?

© 2011 - 2017 Percona, Inc.