
Santa Clara, California | April 23rd – 25th, 2018

Migrating to Amazon RDS via Xtrabackup

2

Who are we?

Agustín Gallego, Support Engineer - Percona

Alexander Rubin, Principal Consultant - Percona

3

Agenda

- XtraBackup
- Amazon Relational Database Service (RDS)
- Pros and Cons of Migrating to RDS
- Migrating to RDS via XtraBackup
- Limitations

XtraBackup

5

Introduction to XtraBackup

XtraBackup is a free and open source hot backup tool for MySQL. Percona Server and MariaDB are also supported. It implements the functionality found in MySQL Enterprise Backup (InnoDB Hot Backup), and more.

6

Introduction to XtraBackup

- Supports hot (lockless) backups for InnoDB and XtraDB; locking backups for MyISAM
- Packaged for Linux operating systems only: DEB and RPM packages available, plus a generic Linux tarball and source code
- Main features: incremental and compressed backups, backup locks (as an alternative to FTWRL), encrypted backups, streaming, and the ability to export individual tables and partitions

7

How does it work?

There are three main phases:
- Backup: copies all the files needed
- Apply logs: performs a crash recovery on the copy, to leave it in a consistent state
- Copy back: moves the files to their final destination

Enough permissions at the OS and MySQL levels are required; we'll use root accounts here, but there is more in-depth documentation on this.
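A minimal sketch of the three phases with the xtrabackup binary (directories are illustrative; the apply-logs phase is invoked as --prepare):

shell> xtrabackup --backup --target-dir=/backups/full/     # 1) copy files while MySQL is running
shell> xtrabackup --prepare --target-dir=/backups/full/    # 2) crash recovery on the copied files
# 3) with mysqld stopped and an empty datadir:
shell> xtrabackup --copy-back --target-dir=/backups/full/
shell> chown -R mysql:mysql /var/lib/mysql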

8

Full backup - example

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    ./performance_schema
18M     ./mysql
31G     ./test
32G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --target-dir=/backups/full/

9

Compressed backup - example 1

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    ./performance_schema
18M     ./mysql
31G     ./test
32G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --compress --target-dir=/backups/compressed/

10

Compressed backup - example 2

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    ./performance_schema
18M     ./mysql
31G     ./test
32G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --compress --stream=xbstream > /backups/compressed/backup.xbstream
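To restore from such a stream later, it first has to be unpacked and decompressed. A minimal sketch, assuming qpress is installed (paths are illustrative; xtrabackup --decompress is available in 2.4+):

shell> mkdir -p /restore/full
shell> xbstream -x < /backups/compressed/backup.xbstream -C /restore/full   # unpack the stream
shell> xtrabackup --decompress --target-dir=/restore/full                   # decompress the .qp files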

11

Incremental backup - example

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    ./performance_schema
18M     ./mysql
37G     ./test
38G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --incremental-basedir=/backups/full/ --target-dir=/backups/incremental/
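For reference, restoring locally from a full plus an incremental backup requires preparing them together. A minimal sketch (directories as above):

shell> xtrabackup --prepare --apply-log-only --target-dir=/backups/full/                          # replay redo, but don't roll back yet
shell> xtrabackup --prepare --target-dir=/backups/full/ --incremental-dir=/backups/incremental/   # apply the incremental deltas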

12

Parallel compressed backup - example

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream > /backups/compressed_parallel_backup.xbstream

13

Differences between them

- Full: took 9 min 20 sec; resulting size 32 GB
- Compressed: took 7 min 40 sec; resulting size 8.4 GB (original 32 GB)
- Incremental: took 5 min 40 sec; resulting size 6.2 GB
- Parallel + compressed: took 3 min 40 sec; resulting size 8.4 GB (original 32 GB)

14

Yet another compressed backup example

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz

`split` will come in handy afterwards, when we discuss limitations. `gzip` is also very slow, since it uses one processor to compress: use `pigz` instead of `gzip`.
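A sketch of the same pipeline with `pigz` (the thread count is illustrative):

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | pigz -p 8 | split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz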

15

Yet another compressed backup example

(screenshot: running with gzip)

16

Yet another compressed backup example

(screenshot: running with pigz)

17

Logical vs Binary Backups

- Logical backup: mysqldump; a text file with commands to restore
- Physical backup: copied files
- Main difference: restore time, which is much faster with a binary backup (i.e. xtrabackup)

Amazon Relational Database Service (RDS)

19

What is RDS / Aurora?

A web service targeted at making it easy to set up, operate, and scale a relational database.

Features: rapid provisioning, scalable resources, high availability, automatic admin tasks.

20

AWS Aurora Features

- Storage Auto-Scaling (up to 64 TB)
- Replication: Amazon Aurora replicas share the same underlying volume as the primary instance (up to 15 replicas); MySQL-based replicas are also available
- Scalability for reads: can autoscale and add more read replicas
- High Availability: Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs), and will automatically attempt to recover

21

Read Scaling with RDS Aurora

- RDS MySQL: same as MySQL, adding MySQL replication slaves
- Aurora MySQL: read replicas (Aurora-specific, not based on MySQL replication), plus MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Pros of Migrating to RDS

- Easy to manage
- Minor upgrades automatically handled
- Backups automatically handled
- Less DBA work
- Fewer things to worry about (OS config, replication setup, etc.)

24

RDS Aurora for MySQL

Aurora MySQL additional features: low-latency read replicas, a built-in load balancer for reads, instant add column, faster GIS, etc.

Aurora MySQL preview features:
- Aurora Multi-Master: adds the ability to scale out write performance across multiple Availability Zones
- Aurora Serverless: automatically scales database capacity up and down to match your application's needs
- Amazon Aurora Parallel Query: improves the performance of large analytic queries by pushing processing down to the Aurora storage layer, spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

- Instance type limits: instances offer up to 32 vCPUs and 244 GiB of memory
- Less control over the server
- More expensive than using EC2 (can be 3x more expensive)
- Aurora MySQL: single-threaded workloads can be much slower

Migrating to RDS via XtraBackup

27

Announcement

https://aws.amazon.com/about-aws/whats-new/2017/11/easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup/

28

General Steps to Migrate

1. Take an XtraBackup backup from the instance
2. Upload it to an S3 bucket
3. Create the new RDS instance using the backup

The IAM account used should have access to S3.

29

General Steps to Migrate

1. Take an XtraBackup backup from the instance
2. Upload it to an S3 bucket
3. Create the new RDS instance using the backup

It is also possible to do steps 1 and 2 in one go, but if it fails you will have to restart all of it:

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | aws s3 cp - s3://rdsmigrationperconalive18/backup.tar

30

Taking the Backup

- Use XtraBackup 2.3, latest patch version if possible; the innobackupex script is deprecated
- Choose timing wisely: even if it is a hot backup tool, it will lock for some time
- Use the options from the documentation: --parallel, --compress, --compress-threads

31

Taking the Backup

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --target-dir=/backups/full/2018_04_23
<output trimmed>
180423 16:17:25 [00]        ...done
xtrabackup: Transaction log of lsn (181862656397) to (181862656397) was copied.
180423 16:17:25 completed OK!

32

Uploading to S3

shell> time aws s3 cp /backups/full/2018_04_23 s3://rdsmigrationperconalive18/full/2018_04_23 --recursive
<output trimmed>
upload: backups/full/2018_04_23/xtrabackup_checkpoints to s3://rdsmigrationperconalive18/full/2018_04_23/xtrabackup_checkpoints
upload: backups/full/2018_04_23/xtrabackup_info to s3://rdsmigrationperconalive18/full/2018_04_23/xtrabackup_info
upload: backups/full/2018_04_23/xtrabackup_logfile to s3://rdsmigrationperconalive18/full/2018_04_23/xtrabackup_logfile

real    6m15.868s

33

Uploading to S3

It can also be done via the web GUI.

34

Uploading backups to S3

- Full: 6 min 38 sec (32 GB)
- Incremental: 1 min 32 sec (6.2 GB)
- Compressed: 1 min 50 sec (8.4 GB)

35

Creating the new RDS instance

(slides 35-55: AWS console screenshots walking through creating the new RDS instance from the backup in S3)

56

How much time will it take to restore?

(slides 56-57: screenshots showing the restore progress and timing)

58

Using an incremental backup

(slides 58-60: console screenshots; the S3 folder path is left empty because we will use all of the bucket's contents)

61

Using a compressed backup

62

Using a split backup

- Use a command like the following (seen in slide 13)
- Upload all the generated files to one folder
- Use that folder as the S3 folder path prefix

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz

63

Using the aws CLI command

shell> aws rds restore-db-instance-from-s3 \
    --db-instance-identifier rdsmigrationpl18cli \
    --db-instance-class db.t2.large \
    --engine mysql \
    --source-engine mysql \
    --source-engine-version 5.6.39 \
    --s3-bucket-name rdsmigrationperconalive18 \
    --s3-ingestion-role-arn arn:aws:iam::123456789012:user/username \
    --allocated-storage 100 \
    --master-username rdspl18usercli \
    --master-user-password rdspl18usercli \
    --s3-prefix compressed_split_backup
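A possible follow-up, to block until the new instance is ready (same identifier as above):

shell> aws rds wait db-instance-available --db-instance-identifier rdsmigrationpl18cli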

Limitations

65

Limitations

- Only Percona's XtraBackup is supported (it may work with forks, but...)
- Source databases should all be contained within the datadir
- Only MySQL 5.6 versions are allowed
- There is a 6 TB size limit
- Encryption is only partially supported: only restoring to an encrypted RDS instance is allowed; the source backup can't be encrypted, nor can the S3 bucket
- The S3 bucket has to be in the same region
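Since only MySQL 5.6 sources are supported, a quick sanity check before taking the backup can save a failed import (hostname is illustrative):

shell> mysql -h source-db -e 'SELECT @@version;'   # should report a 5.6.x version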

66

Limitations

- Importing to a db.t2.micro instance class is not supported (it can be changed later)
- S3 limits file size to 5 TB; a backup can be split into smaller files (alphabetical and natural number orderings are used)
- RDS limits the number of files on the S3 bucket to 1M; they can be merged with tar.gz
- The following are not imported automatically: users, functions, stored procedures, time zone information
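Since users and stored routines are not imported, they must be recreated on the new instance. A minimal sketch, assuming Percona Toolkit is available (hostnames are illustrative):

# replay the source's grants on the new RDS endpoint
shell> pt-show-grants --host source-db | mysql -h myinstance.xxxx.us-east-1.rds.amazonaws.com -u admin -p

# dump only the routines/functions from a schema, then load them on RDS
shell> mysqldump -h source-db --no-data --no-create-info --skip-triggers --routines mydb | mysql -h myinstance.xxxx.us-east-1.rds.amazonaws.com -u admin -p mydb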

67

Limitations

- Migrating to previous versions is not supported
- Partial restores are not supported
- Import is only available for new DB instances
- No partial backups supported: --databases, --tables, --databases-file, --tables-file
- Corruption on the source server, if any, is not detected, due to it being a physical copy

Questions

69

Rate Our Session

Thank You

Page 2: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

2

Who are we

Agustiacuten Gallego Support Engineer - Percona

Alexander Rubin Principal Consultant - Percona

3

Agenda

XtraBackup Amazon Relational Database Service (RDS) Pros and Cons of Migrating to RDS Migrating to RDS via XtraBackup Limitations

XtraBackup

5

Introduction to XtraBackup

XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also

supported Implements functionality in MySQL

Enterprise Backup (InnoDB Hot Backup) and more

6

Introduction to XtraBackup

Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only

DEB and RPM packages available Generic Linux tarball and source code

Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions

7

There are three main phases Backup

Copies all files needed Apply logs

Performs a crash recovery to leave in a consistent state Copy back

Moves the files to their final destination

Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this

How does it work

8

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull

Full backup - example

9

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed

Compressed backup - example 1

10

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream

Compressed backup - example 2

11

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G

shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental

Incremental backup - example

12

shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream

Parallel compressed backup - example

13

Full took 9 min 20 sec resulting size 32 Gb

Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)

Incremental took 5 min 40 sec resulting size 62 Gb

Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)

Differences between them

14

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress

use pigz instead of gzip

Yet another compressed backup example

15

Yet another compressed backup example

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 3: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

3

Agenda

XtraBackup Amazon Relational Database Service (RDS) Pros and Cons of Migrating to RDS Migrating to RDS via XtraBackup Limitations

XtraBackup

5

Introduction to XtraBackup

XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also

supported Implements functionality in MySQL

Enterprise Backup (InnoDB Hot Backup) and more

6

Introduction to XtraBackup

Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only

DEB and RPM packages available Generic Linux tarball and source code

Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions

7

There are three main phases Backup

Copies all files needed Apply logs

Performs a crash recovery to leave in a consistent state Copy back

Moves the files to their final destination

Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this

How does it work

8

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull

Full backup - example

9

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed

Compressed backup - example 1

10

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream

Compressed backup - example 2

11

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G

shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental

Incremental backup - example

12

shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream

Parallel compressed backup - example

13

Full took 9 min 20 sec resulting size 32 Gb

Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)

Incremental took 5 min 40 sec resulting size 62 Gb

Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)

Differences between them

14

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress

use pigz instead of gzip

Yet another compressed backup example

15

Yet another compressed backup example

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 4: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

XtraBackup

5

Introduction to XtraBackup

XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also

supported Implements functionality in MySQL

Enterprise Backup (InnoDB Hot Backup) and more

6

Introduction to XtraBackup

Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only

DEB and RPM packages available Generic Linux tarball and source code

Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions

7

There are three main phases Backup

Copies all files needed Apply logs

Performs a crash recovery to leave in a consistent state Copy back

Moves the files to their final destination

Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this

How does it work

8

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull

Full backup - example

9

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed

Compressed backup - example 1

10

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream

Compressed backup - example 2

11

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G

shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental

Incremental backup - example

12

shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream

Parallel compressed backup - example

13

Full took 9 min 20 sec resulting size 32 Gb

Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)

Incremental took 5 min 40 sec resulting size 62 Gb

Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)

Differences between them

14

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress

use pigz instead of gzip

Yet another compressed backup example

15

Yet another compressed backup example

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 5: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

5

Introduction to XtraBackup

XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also

supported Implements functionality in MySQL

Enterprise Backup (InnoDB Hot Backup) and more

6

Introduction to XtraBackup

Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only

DEB and RPM packages available Generic Linux tarball and source code

Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions

7

There are three main phases Backup

Copies all files needed Apply logs

Performs a crash recovery to leave in a consistent state Copy back

Moves the files to their final destination

Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this

How does it work

8

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull

Full backup - example

9

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed

Compressed backup - example 1

10

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream

Compressed backup - example 2

11

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G

shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental

Incremental backup - example

12

shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream

Parallel compressed backup - example

13

Full took 9 min 20 sec resulting size 32 Gb

Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)

Incremental took 5 min 40 sec resulting size 62 Gb

Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)

Differences between them

14

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress

use pigz instead of gzip

Yet another compressed backup example

15

Yet another compressed backup example

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 6: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

6

Introduction to XtraBackup

Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only

DEB and RPM packages available Generic Linux tarball and source code

Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions

7

There are three main phases Backup

Copies all files needed Apply logs

Performs a crash recovery to leave in a consistent state Copy back

Moves the files to their final destination

Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this

How does it work

8

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull

Full backup - example

9

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed

Compressed backup - example 1

10

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G

shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream

Compressed backup - example 2

11

shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G

shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental

Incremental backup - example

12

shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream

Parallel compressed backup - example

13

Full took 9 min 20 sec resulting size 32 Gb

Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)

Incremental took 5 min 40 sec resulting size 62 Gb

Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)

Differences between them

14

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress

use pigz instead of gzip

Yet another compressed backup example

15

Yet another compressed backup example

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3:

Full: 6 min 38 sec, 32 GB

Incremental: 1 min 32 sec, 6.2 GB

Compressed: 1 min 50 sec, 8.4 GB

Creating the new RDS instance

[Slides 35-55: AWS console screenshots stepping through the restore-from-S3 wizard]

How much time will it take to restore?

[Slides 56-57: console screenshots showing the restore in progress]

Using an incremental backup

[Slides 58-60: console screenshots; the S3 folder path is left empty because all bucket contents will be used]
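One way to keep the full and incremental directories side by side in the bucket, assuming the local layout from the earlier examples, is aws s3 sync, which mirrors a local tree into a prefix:

shell> aws s3 sync /backups/full s3://rdsmigrationperconalive18/full
shell> aws s3 sync /backups/incremental s3://rdsmigrationperconalive18/incremental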

Using a compressed backup

62

Use a command like the following (seen in slide 13)
Upload all generated files to one folder
Use that folder as the S3 folder path prefix

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz

Using a split backup
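The split pieces can be sanity-checked locally before uploading: split's numeric suffixes sort in order, so concatenating the glob must yield a valid tar.gz. A quick sketch:

shell> ls /backups/compressed/compressed_backup.tar.gz*
shell> cat /backups/compressed/compressed_backup.tar.gz* | tar -tzf - | head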

63

Using the aws CLI command

shell> aws rds restore-db-instance-from-s3 \
    --db-instance-identifier rdsmigrationpl18cli \
    --db-instance-class db.t2.large \
    --engine mysql \
    --source-engine mysql \
    --source-engine-version 5.6.39 \
    --s3-bucket-name rdsmigrationperconalive18 \
    --s3-ingestion-role-arn arn:aws:iam::123456789012:user/username \
    --allocated-storage 100 \
    --master-username rdspl18usercli \
    --master-user-password rdspl18usercli \
    --s3-prefix compressed_split_backup
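The restore itself can take a while; its progress can be polled from the CLI (same instance identifier as above):

shell> aws rds describe-db-instances \
    --db-instance-identifier rdsmigrationpl18cli \
    --query 'DBInstances[0].DBInstanceStatus'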

Limitations

65

Only Percona's XtraBackup is supported (it may work with forks, but...)

Source databases should all be contained within the datadir

Only MySQL 5.6 versions are allowed

There is a 6 TB size limit

Encryption is only partially supported: only restoring to an encrypted RDS instance is allowed; the source backup can't be encrypted, nor can the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a db.t2.micro instance class is not supported (it can be changed later)

S3 limits file size to 5 TB; the backup can be split into smaller files (alphabetical and natural-number orders are used)

RDS limits the number of files on the S3 bucket to 1 million; they can be merged with tar.gz

The following are not imported automatically: users, functions, stored procedures, time zone information (a sketch for carrying them over follows)

Limitations
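A sketch of carrying those over by hand, assuming the Percona Toolkit is installed on the source (host and schema names are hypothetical): pt-show-grants dumps the user grants, and mysqldump can export just the stored routines of a schema; both files can then be replayed on the new RDS instance with the mysql client:

shell> pt-show-grants --host source-host --user root --ask-pass > grants.sql
shell> mysqldump --host source-host --user root -p --no-data --no-create-info \
    --routines --skip-triggers mydb > routines.sql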

67

Limitations

Migrating to previous versions is not supported

Partial restores are not supported

Import is only available for new DB instances

No partial backups supported: --databases, --tables, --databases-file, --tables-file

Corruption on the source server, if any, is not detected, since this is a physical copy

Questions?

69

Rate Our Session

Thank You


shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 14: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

14

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress

use pigz instead of gzip

Yet another compressed backup example

15

Yet another compressed backup example

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 15: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

15

Yet another compressed backup example

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 16: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

16

Yet another compressed backup example

gzip

pigz

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 17: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

17

Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time

Much faster with binary backup ie xtrabackup

Logical vs Binary Backups

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 18: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

Amazon Relational Database Service (RDS)

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 19: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

19

What is RDS Aurora

Web Service targeted to easily setup operate scale

Features rapid provisioning scalable resources high availability automatic admin tasks

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 20: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

20

AWS Aurora Features

Storage Auto-Scaling (up to 64Tb) Replication

Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas

MySQL based replicas Scalability for reads

can autoscale and add more read replicas High Availability

Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)

Automatically attempt to recover

21

Read Scaling with RDS Aurora

RDS MySQL same as MySQL adding MySQL replication slaves

Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)

Pros and Cons of Migrating to RDS

23

Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)

Pros of Migrating to RDS

24

RDS Aurora for MySQL

Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc

Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple

Availability Zones Aurora Serverless automatically scales database capacity up and down to match

your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries

by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes

25

Cons of Migrating to RDS

Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory

Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower

Migrating to RDS via XtraBackup

27

Announcement

httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup

28

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

IAM account used should have access to S3

General Steps to Migrate

29

Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup

It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar

General Steps to Migrate

30

Use XtraBackup 23 latest patch version if possible

The innobackupex script is deprecated Choose timing wisely

even if it is a hot backup tool it will lock for some time Use the options from the documentation

parallel compressed compress-threads

Taking the Backup

31

shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK

Taking the Backup

32

shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile

real 6m15868s

Uploading to S3

33

Can also be done via web GUI

Uploading to S3

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You


only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 34: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

34

Uploading backups to S3

Full 6 min 38 sec 32 Gb

Incremental 1 min 32 sec 62 Gb

Compressed 1 min 50 sec 84 Gb

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 35: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

35

Creating the new RDS instance

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 36: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

36

Creating the new RDS instance

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 37: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

37

Creating the new RDS instance

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 38: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

38

Creating the new RDS instance

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 39: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

39

Creating the new RDS instance

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 40: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

40

Creating the new RDS instance

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 41: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

41

Creating the new RDS instance

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 42: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

42

Creating the new RDS instance

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 43: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

43

Creating the new RDS instance

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 44: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

44

Creating the new RDS instance

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 45: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

45

Creating the new RDS instance

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 46: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

46

Creating the new RDS instance

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 47: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

47

Creating the new RDS instance

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 48: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

48

Creating the new RDS instance

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 49: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

49

Creating the new RDS instance

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 50: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

50

Creating the new RDS instance

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You

Page 51: Migrating to Amazon RDS via Xtrabackup - Percona t… · AWS Aurora Features ... Migrating to previous versions is not supported Partial restores are not supported Import is only

51

Creating the new RDS instance

52

Creating the new RDS instance

53

Creating the new RDS instance

54

Creating the new RDS instance

55

Creating the new RDS instance

56

How much time will it take to restore

57

How much time will it take to restore

58

Using an incremental backup

59

Using an incremental backup

60

Using an incremental backup

S3 folder path is empty because we will use all contents

61

Using a compressed backup

62

Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix

shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz

Using a split backup

63

Using the aws CLI command

shellgt aws rds restore-db-instance-from-s3

--db-instance-identifier rdsmigrationpl18cli

--db-instance-class dbt2large

--engine mysql

--source-engine mysql

--source-engine-version 5639

--s3-bucket-name rdsmigrationperconalive18

--s3-ingestion-role-arn arnawsiam123456789012userusername

--allocated-storage 100

--master-username rdspl18usercli

--master-user-password rdspl18usercli

--s3-prefix compressed_split_backup

Limitations

65

Only Perconas XtraBackup is supported it may work with forks but

Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported

only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket

The S3 bucket has to be in the same region

Limitations

66

Importing to a dbt2micro instance class is not supported it can be changed later

S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used

RDS limits the number of files on the S3 bucket to 1M they can be merged with targz

The following are not imported automatically Users Functions Stored Procedures Time zone information

Limitations

67

Limitations

Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported

--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical

copy

Questions

69

Rate Our Session

Thank You
