2014 OSDC Talk: Introduction to Percona XtraDB Cluster and HAProxy



Introduction to Percona XtraDB Cluster and HAProxy
2014.04.12
Bo-Yi Wu (appleboy)

About me
Github: @appleboy
Twitter: @appleboy
Blog: http://blog.wu-boy.com

Agenda
– About Percona XtraDB Cluster
– Install the first node of the cluster
– Install subsequent nodes to the cluster
– Install HAProxy on the application server
– Testing with a real-world application

Why use Percona XtraDB Cluster?

MySQL Replication vs. Percona XtraDB Cluster

Async vs. Sync

MySQL Replication: Async
Slave lag can range from 1 to 10+ seconds


Sync
[diagram: Event is sent to the other node and an Event confirm is returned]

Percona XtraDB Cluster
Free and Open Source

Percona XtraDB Cluster
[diagram: nodes connected through Group Communication]

Percona XtraDB Cluster
– Synchronous replication
– Multi-master replication
– Parallel applying on slaves
– Data consistency
– Automatic node provisioning

Synchronous replication

Virtually synchronous: write-sets are replicated and certified synchronously at commit, but applied asynchronously on each node

Multi-master replication

Multi-master: MySQL
[diagram: MySQL Replication – a write to a non-master node fails]

Multi-master: XtraDB Cluster
[diagram: XtraDB Cluster – writes succeed on any node]

Parallel applying on slaves

Parallel apply: MySQL
[diagram: write with N threads, apply with 1 thread]

Parallel apply: XtraDB Cluster
[diagram: write with N threads, apply with N threads]

Data consistency

XtraDB Cluster data consistency
[diagram: every node holds identical data]

Automatic node provisioning

[diagram: a new node joins the cluster through Group Communication and the data is copied over]

How many nodes should I have?

3 nodes is the minimum recommended configuration
You need >= 3 nodes for quorum purposes

Network failure: split brain
50% is not a quorum – with only 2 nodes, a partition leaves each side with exactly half the cluster, and neither half can form a majority

Network failure
XtraDB Cluster preserves data consistency: a partition without quorum stops accepting writes

garbd – Galera Arbitrator Daemon
A lightweight cluster member that votes in quorum decisions but stores no data
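As a sketch of how the arbitrator is typically started on a third machine (the addresses are this talk's example nodes; --address, --group, and --daemon are standard garbd flags):

$ garbd --address gcomm://192.168.1.100:4567,192.168.1.101:4567 \
        --group Percona --daemon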

Percona XtraDB Cluster Limitations

Only InnoDB tables are fully supported
MyISAM support is limited

Write performance? Limited by the weakest node

Joining Process

[diagram: a new node joins the cluster through Group Communication and the data is copied over]
SST of a 1 TB dataset takes a long time

State Transfer
Full data: SST
– New node
– Node disconnected for a long time
Incremental: IST
– Node disconnected for a short time
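Whether a rejoining node can use IST depends on the donor's gcache still holding the write-sets it missed; as a sketch (the 4 GB value mirrors the configs shown later in this talk), the window is sized with:

# my.cnf – a larger gcache keeps more write-sets and widens the IST window
wsrep_provider_options="gcache.size=4G"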

Snapshot State Transfer (SST)
mysqldump
– Small databases
rsync
– Donor unavailable for the whole copy time
– Faster
XtraBackup
– Donor unavailable only for a short time
– Slower

Incremental State Transfer (IST)
Node was previously in the cluster
– Disconnected for maintenance
– Node crashed

Install via Percona's yum repository
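As a sketch, the repository is enabled first (the RPM URL follows Percona's documentation of that era and may have moved since):

$ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm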

$ yum -y install \
    Percona-XtraDB-Cluster-server \
    Percona-XtraDB-Cluster-client \
    Percona-Server-shared-compat \
    percona-xtrabackup

Configuring the nodes

wsrep_cluster_address=gcomm://
– Initializes a new cluster (first node only)
wsrep_cluster_address=gcomm://<IP addr>,<IP addr>,<IP addr>
– Default port: 4567
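In practice the first node is brought up alone and the rest join it; a minimal sketch (bootstrap-pxc is available in recent PXC init scripts):

# first node only – bootstrap a new cluster
$ /etc/init.d/mysql bootstrap-pxc
# every other node – normal start; it joins via wsrep_cluster_address
$ /etc/init.d/mysql start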

Don't use wsrep_urls
wsrep_urls has been deprecated since version 5.5.28

Configuring the first node
[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address="gcomm://"
wsrep_sst_auth=username:password
wsrep_provider_options="gcache.size=4G"
wsrep_cluster_name=Percona
wsrep_sst_method=xtrabackup
wsrep_node_name=db_01
wsrep_slave_threads=4
log_slave_updates
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

Configuring subsequent nodes
[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address="gcomm://xxxx,xxxx"
wsrep_sst_auth=username:password
wsrep_provider_options="gcache.size=4G"
wsrep_cluster_name=Percona
wsrep_sst_method=xtrabackup
wsrep_node_name=db_02   # must be unique on every node
wsrep_slave_threads=4
log_slave_updates
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

Monitoring MySQL status
show global status like 'wsrep%';

Cluster integrity
wsrep_cluster_conf_id
– Configuration version
wsrep_cluster_size
– Number of active nodes
wsrep_cluster_status
– Should be "Primary"
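For illustration, a healthy 3-node cluster looks like this (values invented to match the myq_status capture later in this talk; output abridged):

mysql> show global status like 'wsrep_cluster%';
+-----------------------+---------+
| Variable_name         | Value   |
+-----------------------+---------+
| wsrep_cluster_conf_id | 73      |
| wsrep_cluster_size    | 3       |
| wsrep_cluster_status  | Primary |
+-----------------------+---------+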

Node status
wsrep_ready
– Should be "ON"
wsrep_local_state_comment
– Status message
wsrep_local_send_queue_avg
– Possible network bottleneck
wsrep_flow_control_paused
– Replication lag

Realtime Wsrep Status
https://github.com/jayjanssen/myq_gadgets

Realtime Wsrep Status
Percona / db_03 / Galera 2.8(r165)
Wsrep    Cluster  Node      Queue  Ops    Bytes     Flow  Conflct      PApply  Commit
time     P cnf #  cmt sta   Up Dn  Up Dn   Up   Dn  p_ms  snt lcf bfa  dst oooe oool wind
11:47:39 P  73 3  Sync T/T   0  0   5 356  30K 149K  0.0    0   0   0  125    0    0    0
11:47:40 P  73 3  Sync T/T   0  0   0   0    0    0  0.0    0   0   0  125    0    0    0
11:47:41 P  73 3  Sync T/T   0  0   0   0    0    0  0.0    0   0   0  125    0    0    0
11:47:42 P  73 3  Sync T/T   0  0   0   0    0    0  0.0    0   0   0  125    0    0    0
11:47:43 P  73 3  Sync T/T   0  0   0   0    0    0  0.0    0   0   0  125    0    0    0
11:47:44 P  73 3  Sync T/T   0  0   0   0    0    0  0.0    0   0   0  125    0    0    0
11:47:45 P  73 3  Sync T/T   0  0   0   3    0 1.1K  0.0    0   0   0  126   67    0    1
11:47:46 P  73 3  Sync T/T   0  0   0   2    0  994  0.0    0   0   0  126    0    0    0

$ ./myq_status -t 1 -h 127.0.0.1 wsrep

Application / Cluster

How synchronous writes work

Source node: pessimistic locking

InnoDB transaction locking

Cluster replication
Before the source returns from commit:
– Certify the trx on all other nodes
– Nodes reject it on locking conflicts
– The commit succeeds only if there is no conflict on any node

[diagram: Client 1 and Client 2 both run "update t set col = '12' where id = '1'" against different nodes – Node 1 is the tx source, Node 2 accepts the write-set, Node 3 fails certification]

Does the application need to care?

Writing to all nodes increases deadlock errors
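A certification failure surfaces to the client as a deadlock error (MySQL error 1213), so an application that writes to several nodes needs retry logic. A minimal sketch, reusing the update from the previous slide and the HAProxy endpoint defined later in this talk:

# retry the write a few times before giving up
for attempt in 1 2 3; do
  mysql -h 127.0.0.1 -P 3307 -e "update t set col = '12' where id = '1'" && break
  sleep 1   # short back-off before retrying
done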

How to avoid deadlocks on all nodes?

How to avoid deadlocks
Write to only one node
– All pessimistic locking happens on one node
Different nodes can handle writes for different datasets
– Different databases, tables, rows, etc.

Connecting the application to the cluster

Application to cluster
For writes
– Best practice: a single node
For reads
– All nodes, load balanced
Load balancers: glbd (Galera Load Balancer), HAProxy

[diagram: HAProxy load balancer in front of 192.168.1.100 (read/write), 192.168.1.101 (read), 192.168.1.102 (read)]

HAProxy load balancing

Read and write on the same port

frontend pxc-front
    bind *:3307
    mode tcp
    default_backend pxc-back

backend pxc-back
    mode tcp
    balance leastconn
    option httpchk
    server db1 192.168.1.100:3306 check port 9200 inter 12000 rise 3 fall 3
    server db2 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
    server db3 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3
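The check port 9200 health check assumes each DB node answers HTTP on that port with its cluster state, usually via the clustercheck script that ships with PXC, wrapped in xinetd. A sketch of the conventional service definition:

# /etc/xinetd.d/mysqlchk – answers HAProxy's httpchk on port 9200
service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    only_from      = 0.0.0.0/0
}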

Read and write on different ports

frontend pxc-onenode-front
    bind *:3308
    mode tcp
    default_backend pxc-onenode-back

backend pxc-onenode-back
    mode tcp
    balance leastconn
    option httpchk
    server db1 192.168.1.100:3306 check port 9200 inter 12000 rise 3 fall 3
    server db2 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3 backup
    server db3 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3 backup
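With both frontends defined, the application sends writes through port 3308 (one active node, the others as backups) and reads through port 3307 (balanced across all nodes); for example (the HAProxy host name is illustrative):

$ mysql -h haproxy.example.com -P 3308   # writes: single active node
$ mysql -h haproxy.example.com -P 3307   # reads: load balanced across all nodes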

Application server
– CentOS 6 base installation
– EPEL repo added
– HAProxy installed from the EPEL repo
– sysbench 0.5 package

Live Demo

Thank you