MySQL backup, recovery, and XtraBackup


🎉 New Year greetings 🎉

2017 is about to pass, and the bell of the new year is about to ring. At this beautiful moment of farewell to the old and ushering in the new year, I would like to extend new year's greetings to the people of all ethnic groups across the country, compatriots in the Hong Kong Special Administrative Region and the Macao Special Administrative Region, compatriots in Taiwan and overseas Chinese, operation and maintenance engineers working in the front line, friends who have made contributions to the cause of open source, and friends from all over the world!

Today is the last day of 2017. On such a special day, I hope everyone is happy and well, and that in 2018 we can achieve something and create new value.

Let's welcome the bell of the new year with confidence and expectation!

Thank you.

1.1 reasons for backup

Backup is the last line of defense for data security. For any scenario of data loss, although backup may not recover 100% of the data (depending on the backup cycle), it can at least minimize the loss. There are two important indicators to measure backup and recovery: recovery point objective (RPO) and recovery time objective (RTO). The former focuses on how far it can be recovered, while the latter focuses on how long it takes to recover.
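A back-of-the-envelope example of the two indicators (every figure below is an assumption for illustration, not taken from a real system): with a nightly full backup and binlogs shipped off-host every few minutes, the worst-case RPO shrinks from the backup interval to the shipping interval, while the RTO is roughly restore time plus binlog replay time.

```shell
# Rough RPO/RTO arithmetic (all figures are illustrative assumptions)
full_backup_interval_h=24      # one full backup per night
binlog_ship_interval_s=300     # binlogs shipped off-host every 5 minutes

# Without shipped binlogs, worst-case RPO equals the full-backup interval;
# with shipped binlogs it shrinks to the shipping interval.
rpo_without_binlog_h=$full_backup_interval_h
rpo_with_binlog_s=$binlog_ship_interval_s

restore_full_min=40            # assumed time to load the full dump
replay_binlog_min=15           # assumed time to replay the day's binlog
rto_min=$((restore_full_min + replay_binlog_min))

echo "worst-case RPO: ${rpo_with_binlog_s}s, estimated RTO: ${rto_min}min"
```

The point of the sketch is only that RPO is governed by how often data leaves the host, while RTO is governed by how fast it can come back.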

1.1.1 purpose of backup

Disaster recovery: recover and restore damaged data

Requirement change: data needs to be restored to its state before a change, because the requirements changed

Test: test whether the new function is available

1.1.2 issues to be considered in backup

How long data loss can be tolerated;

How long does it take to recover data;

Whether services need to be provided continuously during recovery;

Whether the object to be recovered is the whole instance, a single database, multiple tables, or a single table.

1.1.3 types of backup

Hot backup:

These dynamic backups are carried out in the process of reading or modifying data, with little or no interruption of the function of transmitting or processing data. When hot backup is used, the system can still be accessed for reading and modifying data.

Cold backup:

These backups take place when the user cannot access the data, so the data cannot be read or modified. These offline backups prevent any activity that uses data. These types of backups do not interfere with the performance of a functioning system. However, for some applications, it is unacceptable that users must be locked or completely blocked from accessing data for a long time.

Warm backup:

These backups are performed when the data is read, but in most cases, the data itself cannot be modified during the backup. The advantage of this midway backup type is that it is not necessary to completely lock down the end user. However, the disadvantage is that the dataset cannot be modified during backup, which may make this type of backup unsuitable for some applications. Failure to modify data during backup may cause performance problems.

1.2 backup method

1.2.1 cold backup

The simplest backup method is to shut down the MySQL server and then copy all files under the data directory. To restore, copy the directory onto the machine that needs restoring. This method is certainly convenient, but it is of little use in production: every machine has to keep providing service, and even a Slave often serves read-only traffic, so shutting MySQL down to take a backup is unrealistic. The concept opposite to cold backup is hot backup; a hot backup is one taken without interrupting MySQL's service.

Cold backup means stopping the service in order to back up.
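A minimal cold-backup sketch, demonstrated against a scratch directory so the steps are visible; in real use DATADIR would be MySQL's actual data directory (e.g. /application/mysql/data) and the server must be stopped first.

```shell
# Cold backup: stop the server, archive the data directory, restart.
# A scratch directory stands in for the real data directory here.
DATADIR=$(mktemp -d)                      # stand-in for /application/mysql/data
BACKUP_DIR=$(mktemp -d)
echo "dummy" > "$DATADIR/ibdata1"         # pretend data file

# /etc/init.d/mysqld stop                 # real step: stop MySQL first
tar czf "$BACKUP_DIR/cold_$(date +%F).tar.gz" -C "$DATADIR" .
# /etc/init.d/mysqld start                # real step: restart after the copy

ls "$BACKUP_DIR"
```

Restoring is the reverse: stop the server, unpack the archive into the data directory, fix ownership, start the server.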

1.2.2 snapshot backup

The first hot backup to introduce is snapshot backup, which means backing up the database through a snapshot feature supported by the file system. The idea is to place all database files in the same partition and then snapshot that partition. On Linux this is done with LVM (Logical Volume Manager), which uses copy-on-write to create snapshots: a logical copy of the entire volume at an instant, similar to MVCC in the InnoDB storage engine. The difference is that an LVM snapshot works at the file-system level, whereas MVCC works at the database level and only for the InnoDB engine.

LVM has a snapshot reservation area. If the original volume data changes, LVM ensures that the affected blocks will be copied to the snapshot reservation area before any changes are written. In short, all old data consistent with the beginning of the snapshot point is retained in the snapshot area. For databases with few updates, snapshots can also be very small.

For MySQL, in order to use snapshot backup, you need to put the data files and log files in a logical volume, and then snapshot backup the volume. Because snapshot backup can only be performed locally, if the local disk is damaged, the snapshot will be damaged. Snapshot backup is more inclined to prevent misoperation. It can quickly restore the database to the point in time generated by the snapshot, and then restore it to the specified point in time combined with binary log. The basic principle is shown in the figure below:

 
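The snapshot sequence can be sketched as below. The volume-group, logical-volume, and mount-point names are assumptions, and the commands are only printed for review (actually running them requires root and a real LVM setup; the brief FTWRL window is what makes the snapshot point consistent).

```shell
# Print the LVM snapshot backup sequence (hypothetical names: vg0/mysql).
VG=vg0; LV=mysql; SNAP=mysql_snap; MNT=/mnt/snap

plan=$(cat <<EOF
mysql -e "FLUSH TABLES WITH READ LOCK;"
lvcreate --snapshot --size 1G --name $SNAP /dev/$VG/$LV
mysql -e "UNLOCK TABLES;"
mkdir -p $MNT && mount -o ro /dev/$VG/$SNAP $MNT
tar czf /backup/snap_backup.tar.gz -C $MNT .
umount $MNT && lvremove -f /dev/$VG/$SNAP
EOF
)
echo "$plan"
```

The lock is held only for the instant the snapshot is created; the slow copy happens afterwards against the frozen snapshot volume.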

1.2.3 logical backup (text representation: SQL statement)

Because of their disadvantages, cold backup and snapshot backup are rarely used in production; the more common choices are MySQL's own logical backup and physical backup tools. This section focuses on logical backup. MySQL officially provides the mysqldump logical backup tool. Although it is good enough, it has the problem of slow, single-threaded backup. The community provides a better logical backup tool, mydumper, whose advantages are multi-threaded backup and faster backup speed.

1.2.4 other common backup methods

Physical backup (binary copy of data file)

Full backup concept

Full data is all the data in the database (or all the data in a database);

Full backup is to back up all the data in the database.

mysqldump obtains consistent data as of a single point in time

Incremental backup (refresh binary log)

Incremental data refers to the data updated by the database from the last full backup to the next full backup

For mysqldump, the binlog is the incremental data

1.2.5 introduction to backup tools

1. mysqldump: MySQL's own, very useful logical backup tool

2. mysqlbinlog: the native command for binlog backup

3. Xtrabackup: a high-performance physical backup tool developed by Percona

1.3 introduction to mysqldump backup

The basic process of backup is as follows:

1.Issue FTWRL (FLUSH TABLES WITH READ LOCK), making the whole instance read-only
2.Start a consistent snapshot read to fix the snapshot point (effective only for InnoDB tables)
3.Back up the non-InnoDB table data (*.frm, *.MYI, *.MYD, etc.)
4.After the non-InnoDB tables are backed up, release the FTWRL lock
5.Back up the InnoDB tables one by one
6.Backup complete.

For the whole process, you can refer to a colleague's diagram, though it only considers the backup of InnoDB tables. In fact, before UNLOCK TABLES executes, the non-InnoDB tables have already been backed up; the t1, t2, and t3 below are all InnoDB tables. Moreover, mysqldump in 5.6 uses a savepoint mechanism to release the MDL lock on each table as soon as that table has been backed up, to avoid holding a lock on any table longer than necessary.

1.3.1 mysqldump backup process

 

1.3.2 common backup parameters

 

parameter                 parameter description

-A                        Back up all databases (the full instance)

-B                        Back up the specified database(s); adds CREATE DATABASE and USE statements, and accepts multiple database names

-R, --routines            Back up stored procedures and functions

--triggers                Back up triggers

--master-data={1|2}       Record the binlog file and position as of the backup; with 1 it is written as an executable CHANGE MASTER statement, with 2 the statement is commented out

--single-transaction      Consistent hot backup for the InnoDB engine

-F, --flush-logs          Rotate (flush) the binlog before dumping

-x, --lock-all-tables     Lock all tables in all databases, using a global read lock for the whole dump

-l, --lock-tables         Lock all tables of each dumped database for reading

-d                        Dump table structure only

-t                        Dump data only

--compact                 Produce compact output with less commentary (for debugging)

A complete backup statement:

The backup command of innodb engine is as follows:

mysqldump -A -R --triggers --master-data=2 --single-transaction |gzip >/opt/all_$(date +%F).sql.gz 

The backup command suitable for mixed storage engines (for example, MyISAM mixed with InnoDB) is as follows:

 

mysqldump -A -R --triggers --master-data=2 |gzip   >/opt/all_$(date +%F).sql.gz  

1.3.3 -A parameter

Back up all databases. Backup statement:

  mysqldump -uroot -p123 -A  > /backup/full.sql

1.3.4 -B parameter

Back up all tables in the specified database(s)

Adds CREATE DATABASE and USE statements to the dump; multiple database names can be listed to back up several databases at once: -B db1 db2

mysqldump -uroot -p123 -B world  > /backup/worldb.sql

Backup statement:

    CREATE DATABASE IF NOT EXISTS
    USE `db1`
    DROP TABLE
    CREATE TABLE
    INSERT INTO

When backing up a database without -B, only the tables in that database are dumped; the dump will not create the database itself

Without -B only a single database can be named (generally used when backing up a single table)

mysqldump -uroot -p123 world  > /backup/world.sql

Back up a single table

mysqldump -uroot -p123 world  city  > /backup/world_city.sql

Single-table backups offer the finest granularity and the fastest data restore.

Backing up multiple tables

mysqldump db1 table1 table2 table3 >db1.sql
mysqldump db2 table1 table2 table3 >db2.sql

Per-database backup: for loop

mysqldump -uroot -p'mysql123' -B mysql ...
mysqldump -uroot -p'mysql123' -B mysql_utf8 ...
mysqldump -uroot -p'mysql123' -B mysql ...
......

Sub database backup

for name in `mysql -e "show databases;"|sed 1d|egrep -v "information_schema|performance_schema"`
do
 mysqldump -uroot -p'mysql123' -B $name > /backup/${name}_$(date +%F).sql
done

1.3.5 --master-data={1|2} parameter

Records the binlog position as of the time of the backup

With 2 the CHANGE MASTER statement is written as a comment; with 1 it is uncommented and will be executed on import (used for master-slave replication)

[root@db02 logs]# mysqldump -B --master-data=2 clsn >/opt/t.sql
[root@db02 logs]# sed -n '22p' /opt/t.sql
-- CHANGE MASTER TO MASTER_LOG_FILE='clsn-bin.000005', MASTER_LOG_POS=344;
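The coordinates can also be pulled out of the dump programmatically instead of relying on a hard-coded line number (the `sed -n '22p'` trick is fragile, since the line number shifts between mysqldump versions). A sketch against a sample header line:

```shell
# Extract the binlog file and position from a mysqldump --master-data=2 header.
# A sample dump header is written here so the parsing is reproducible.
cat > /tmp/sample_dump.sql <<'EOF'
-- MySQL dump 10.13
-- CHANGE MASTER TO MASTER_LOG_FILE='clsn-bin.000005', MASTER_LOG_POS=344;
EOF

line=$(grep -m1 'CHANGE MASTER TO' /tmp/sample_dump.sql)
file=$(echo "$line" | sed "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/")
pos=$(echo "$line" | sed 's/.*MASTER_LOG_POS=\([0-9]*\).*/\1/')
echo "binlog file: $file, position: $pos"
```

Against a real dump, point the grep at your actual .sql file; the parsing is the same.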

1.3.6 --single-transaction parameter

Hot backup for the InnoDB engine

Only the InnoDB engine is supported

This parameter starts a dedicated transaction for the backup, implemented with transaction snapshot (consistent read) technology.

Tables that do not use a transactional engine cannot get a consistent view this way

It relies on the isolation property among the four ACID characteristics. 99% of production systems use the InnoDB transactional engine

Although hot backup is supported, that does not mean a backup can run at any moment; avoid scheduling backups during busy periods. Generally, back up at night.

The backup command of innodb engine is as follows:

mysqldump -A -B -R --triggers --master-data=2 --single-transaction |gzip >/opt/all.sql.gz

1.3.7 --flush-logs / -F parameter

Rotate (flush) the binlog

Back up the database at 0:00 every day

mysqldump -A -B -F >/opt/$(date +%F).sql
[root@db02 ~]# ll /application/mysql/logs/
-rw-rw---- 1 mysql mysql 168 Jun 21 12:06 clsn-bin.000001
-rw-rw---- 1 mysql mysql 168 Jun 21 12:06 clsn-bin.000002
-rw-rw---- 1 mysql mysql 210 Jun 21 12:07 clsn-bin.index

Tip: the binlog is flushed once for each database backed up         

1.3.8 compressed backup

Compressed backup command:

mysqldump -B --master-data=2 clsn|gzip >/opt/t.sql.gz

Decompression:

zcat t.sql.gz >t1.sql
gzip -d t.sql.gz    # decompresses in place and removes the .gz file
gunzip all_2017-12-22.sql.gz 
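A quick round-trip check that a compressed dump decompresses intact (file names here are illustrative):

```shell
# Compress a dump, then verify the decompressed copy matches the original.
printf 'CREATE DATABASE clsn;\n' > /tmp/t.sql
cp /tmp/t.sql /tmp/t_orig.sql
gzip -f /tmp/t.sql                   # produces /tmp/t.sql.gz, removes /tmp/t.sql

zcat /tmp/t.sql.gz > /tmp/t_restored.sql
cmp -s /tmp/t_orig.sql /tmp/t_restored.sql && echo "round-trip OK"
```

The same cmp check can be run against a real backup before deleting any uncompressed copy.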

A complete backup statement

The backup command of innodb engine is as follows:

mysqldump -A -R --triggers --master-data=2 --single-transaction |gzip >/opt/all.sql.gz

The backup command suitable for mixed storage engines (for example, MyISAM mixed with InnoDB) is as follows:

mysqldump -A -R --triggers --master-data=2 |gzip   >/opt/all_$(date +%F).sql.gz

1.3.9 mysqldump backup and recovery practice

Back up the innodb engine database clsn and compress:

mysqldump -B -R --triggers --master-data=2 clsn|gzip >/opt/all_$(date +%F).sql.gz

Delete clsn database artificially:

[root@db02 opt]# mysql -e "drop database clsn;"
[root@db02 opt]# mysql -e "show databases;"

Restore database:

# decompress first with gzip -d xxx.gz
shell> mysql </opt/all_2017-12-22.sql
 or
mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)
mysql> source /backup/all_2017-12-22.sql

Validate the data:

[root@db02 opt]#  mysql -e "use clsn;select * from test;"

1.4 [simulation] incremental recovery enterprise case

1.4.1 preconditions:

  1. Full backup (mysqldump) is available.

  2. In addition to full backup, there are all binlog incremental logs generated after full backup.

1.4.2 environmental preparation

(1) Preparation environment:

drop database clsn;
CREATE DATABASE clsn;
USE `clsn`;
CREATE TABLE `test` (
  `id` int(4) NOT NULL AUTO_INCREMENT,
  `name` char(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8;
INSERT INTO `test` VALUES (1,'clsn'),(2,'znix'),(3,'inca'),(4,'zuma'),(5,'kaka');

View the created data

mysql> select * from test;
+----+------+
| id | name |
+----+------+
|  1 | clsn |
|  2 | znix |
|  3 | inca |
|  4 | zuma |
|  5 | kaka |
+----+------+
5 rows in set (0.00 sec)

(2) Simulated environment:

mkdir /data/backup -p
date -s "2017/12/22"

Full backup:

mysqldump -B --master-data=2 --single-transaction clsn|gzip>/data/backup/clsn_$(date +%F).sql.gz

Simulation increment:

mysql -e "use clsn;insert into test values(6,'haha');"
mysql -e "use clsn;insert into test values(7,'hehe');"
mysql -e "select * from clsn.test;"

(3) Simulated deleted data:

date -s "2017/12/22 11:40"
mysql  -e "drop database clsn;show databases;"

The problem was discovered ten minutes after the database was dropped

1.4.3 recovery data preparation

(1) iptables firewall is used to shield the writing of all applications.

[root@clsn ~]# iptables -I INPUT -p tcp --dport 3306 ! -s 172.16.1.51 -j DROP 
#<== Hosts other than 172.16.1.51 are denied access to port 3306 of the database.

Or use mysql configuration parameters, but you need to restart the database

--skip-networking

Copy binary log files

cp -a /application/mysql/logs/clsn-bin.* /data/backup/

Extract the binlog increment

zcat clsn_2017-12-22.sql.gz >clsn_2017-12-22.sql
sed -n '22p' clsn_2017-12-22.sql
mysqlbinlog -d clsn --start-position=339 clsn-bin.000008 -r bin.sql

Files needed for the recovery:

1.clsn_2017-12-22.sql (full backup)
2.bin.sql (binlog increment)
grep -i drop bin.sql 
sed -i '/^drop.*/d' bin.sql
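The DROP-stripping step can be rehearsed on a small sample before touching the real bin.sql (the sample content below is made up):

```shell
# Remove the accidental DROP from a binlog extract and confirm
# nothing else is lost. A sample file stands in for bin.sql.
cat > /tmp/bin_sample.sql <<'EOF'
insert into test values(6,'haha');
drop database clsn;
insert into test values(7,'hehe');
EOF

sed -i '/^[Dd][Rr][Oo][Pp]/d' /tmp/bin_sample.sql   # case-insensitive delete
grep -ci '^drop' /tmp/bin_sample.sql || echo "no DROP statements remain"
```

Always grep the result afterwards: if the DROP survives, replaying the extract simply repeats the disaster.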

1.4.4 data recovery

Recover data

[root@db02 backup]# mysql <clsn_2017-12-22.sql
[root@db02 backup]# mysql -e "show databases;"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| clsn               |
| znix               |
| performance_schema |
+--------------------+

view the database

mysql> select * from test;
+----+------+
| id | name |
+----+------+
|  1 | clsn |
|  2 | znix |
|  3 | inca |
|  4 | zuma |
|  5 | kaka |
+----+------+
5 rows in set (0.00 sec)

Restore incremental data:

[root@db02 backup]# mysql clsn <bin.sql
mysql> select * from test;
+----+------+
| id | name |
+----+------+
|  1 | clsn |
|  2 | znix |
|  3 | inca |
|  4 | zuma |
|  5 | kaka |
|  6 | haha |
|  7 | hehe |
+----+------+
7 rows in set (0.00 sec)

Recovery complete.

Adjust iptables to allow user access

1.4.5 multiple binlog problems

mysqlbinlog -d clsn --start-position=339 clsn-bin.000009 clsn-bin.000010 -r bin1.sql

mysql clsn <bin1.sql

1.5 actual production tragedy of MySQL database

1.5.1 occurrence background

1. The MySQL server takes a full backup at 0:00 every night

2. One sunny morning, a developer sipping tea gracefully dropped the clsn_OSS (core) database by mistake.

3. The company's business stopped abnormally and service could not be provided.

1.5.2 how to solve it

1. Evaluate the current system.

What is damaged? Is there a backup?

Data recovery time (depends on the misoperated data and the backup/recovery strategy)

Business recovery time

2. Recovery plan

(1) restore the 0:00 full backup into a test instance

(2) replay the binlog from 0:00 up to the failure point against the test instance

(3) export the misoperated data and restore it into the production database.

(4) check whether the data is complete (verify the recovered database in the development/test environment)

(5) resume production business after the check

1.5.3 project summary

1. Following this recovery process, the business as a whole was back online in 30 minutes (slow...)

2. Formulate strict development and operations specifications for future work.

3. Design a better architecture scheme going forward.

1.6 selection of backup tools

Data volume range: 30 GB up to the TB level

1.6.1 large data volume, small change volume

(1) full backups are expensive. mysqldump+binlog implements full plus incremental backup; the drawback is that recovery costs more time than the backup does

(2) xtrabackup: take a full backup at long intervals and incrementals the rest of the time. The space cost of full backups is high. For data volumes from 30 GB up to the TB level, the xtrabackup tool is the more recommended choice.

1.6.2 small amount of data and large amount of change

Only the time cost needs to be considered.

Full backups alone are enough, and either tool can be chosen. xtrabackup is cheaper to restore

1.6.3 large data volume, large change volume

Time cost and space cost should be considered.

When the data volume reaches PB scale or higher (e.g., Facebook), mysqldump may again become the first choice: it takes less space, but the engineering cost is high, usually requiring secondary development on top of mysqldump (the approach preferred by large data companies).

1.7 xtrabackup backup software

Percona official website https://www.percona.com/

1.7.1 introduction to xtrabackup

Xtrabackup is a free database hot backup tool open-sourced by Percona. It can back up InnoDB and XtraDB storage engine databases without blocking (backing up MyISAM still requires a table lock). mysqldump, by contrast, is a logical backup whose biggest defect is slow backup and recovery; for databases larger than about 50 GB, mysqldump is no longer suitable.

After installation, xtrabackup ships four executable files, of which the two important backup tools are innobackupex and xtrabackup

1) xtrabackup backs up only InnoDB tables and does not interact with the MySQL server;
2) innobackupex is a Perl script wrapping xtrabackup; it can back up InnoDB and MyISAM tables together, but takes a global read lock while backing up MyISAM.
3) xbcrypt: encryption and decryption tool for backups
4) xbstream: packaging/streaming tool, similar to tar
5) At the same data volume, physical backup tools perform far better than logical backup, especially when the data volume is large.

1.7.2 Xtrabackup advantages

1) fast backup speed and reliable physical backup

2) the backup process will not interrupt the ongoing transaction (no need to lock the table)

3) it can save disk space and traffic based on compression and other functions

4) automatic backup verification

5) fast restore speed

6) the backup can be transferred to another machine

7) backup data without increasing server load

8) at the same data volume level, a physical backup tool performs much better than logical backup, for data from tens of GB up to about a TB; it also holds a clear advantage when restoring data.

1.7.3 backup principle

It copies data files, i.e. copies data pages.

For InnoDB tables, hot backup is possible.

    (1) Data files are copied while the database may still be taking writes, so at that moment the copied data is inconsistent with the live MySQL instance.
    (2) The redo and undo generated during the backup are captured along with it.
    (3) To restore, it is enough that the LSN of the backed-up data pages can be matched against the redo LSN; applying redo and then undo later yields consistent data.

For MyISAM tables, it automatically locks the tables and copies the files.

When the backup starts, a background thread begins watching the MySQL redo log in real time; as soon as new log records are written, they are copied into xtrabackup_log. The InnoDB data files (the system tablespace file ibdataX, etc.) are then copied. After that, FLUSH TABLES WITH READ LOCK is executed and the .frm, .MYI, .MYD and other files are copied; finally UNLOCK TABLES is executed and the xtrabackup_log copy thread is stopped.

1.7.4 xtrabackup installation

Install dependencies

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
yum -y install perl perl-devel libaio libaio-devel perl-Time-HiRes perl-DBD-MySQL

Download the software package and install the software

wget https://www.percona.com/downloads/XtraBackup/Percona-XtraBackup-2.4.4/binary/redhat/6/x86_64/percona-xtrabackup-24-2.4.4-1.el6.x86_64.rpm
yum -y install percona-xtrabackup-24-2.4.4-1.el6.x86_64.rpm

1.8 xtrabackup practice

1.8.1 full backup and recovery

During the prepare phase, the InnoDB instance embedded in xtrabackup is started and xtrabackup_log is replayed: committed transaction changes are applied to the InnoDB data files/tablespaces, and uncommitted transactions are rolled back (this process is similar to InnoDB instance crash recovery). The recovery process is as follows:

 

Backup

Create backup directory

mkdir  /backup -p

 

Perform the first full backup

innobackupex --defaults-file=/etc/my.cnf --user=root --password=123 --socket=/application/mysql/tmp/mysql.sock --no-timestamp /backup/xfull

 

Preparation before recovery

Preparation before data recovery (merging xtrabackup_log_file with the backed-up physical files)

innobackupex --apply-log --use-memory=32M /backup/xfull/

 

View the merged checkpoints. If the type is changed to full prepared, it is recoverable.

[root@db02 full]# cat xtrabackup_checkpoints 
backup_type = full-prepared
from_lsn = 0
to_lsn = 4114824
last_lsn = 4114824
compact = 0
recover_binlog_info = 0

 

Destroy database data file

[root@db02 full]# cd /application/mysql/data/
[root@db02 data]# ls
auto.cnf  db02.pid  ibdata2      mysql             mysql-bin.index     world
clsn      haha      ib_logfile0  mysql-bin.000001  oldboy
db02.err  ibdata1   ib_logfile1  mysql-bin.000002  performance_schema
[root@db02 data]# \rm -rf ./* 
[root@db02 data]# ls
[root@db02 data]# killall mysql

 

Recovery method

Method 1: copy the backup files back directly

cp -a /backup/xfull/* /application/mysql/data
chown -R mysql.mysql /application/mysql/data

Method 2: use innobackupex command to recover (recommended)

[root@db02 mysql]# innobackupex --copy-back /backup/xfull
[root@db02 mysql]# chown -R mysql.mysql /application/mysql/

Note: whichever recovery method is used, the owner and group must be changed back to mysql after recovery so the files match the server process.

[root@db02 data]# cd /application/mysql/data/
[root@db02 data]# ls
clsn     ibdata2      ibtmp1  performance_schema            xtrabackup_info
haha     ib_logfile0  mysql   world
ibdata1  ib_logfile1  oldboy  xtrabackup_binlog_pos_innodb

Start the database

[root@db02 data]#  /etc/init.d/mysqld start

1.8.2 incremental backup and recovery

In fact, the "incremental" processing in innobackupex incremental backup is mainly relative to innodb. For myisam and other storage engines, it is still a full copy (full backup)

The process of "incremental" backup is mainly by copying the changed "pages" in innodb (these changed data pages mean that the LSN of the "page" is greater than the LSN given in xtrabackup_checkpoints). Incremental backup is based on full backup. The data added for the first time must be based on the last full backup. Each subsequent backup is based on the last backup, and finally achieve consistent backup. The process of incremental backup is as follows, which is very similar to the process of full backup. The difference is only in step 2.

 
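The LSN bookkeeping lives in the xtrabackup_checkpoints file. A sketch of reading it to decide where the next incremental should start (the sample file mimics the format shown elsewhere in this article; field values are illustrative):

```shell
# Read to_lsn from a (sample) xtrabackup_checkpoints file; the next
# incremental backup should start from this LSN.
cat > /tmp/xtrabackup_checkpoints <<'EOF'
backup_type = full-backuped
from_lsn = 0
to_lsn = 4114824
last_lsn = 4114824
EOF

to_lsn=$(awk -F' = ' '$1 == "to_lsn" {print $2}' /tmp/xtrabackup_checkpoints)
echo "next incremental starts at LSN $to_lsn"
```

innobackupex does this comparison itself when given --incremental-basedir; reading the file by hand is mainly useful for sanity-checking a backup chain.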

Where does an incremental backup start from?

From the last backup.

By default, redo is a fixed-size set of two files used in rotation; it is not kept permanently, and a file may be overwritten at any time.

Note: never take a backup while business is busy.

What to back up

1. binlog can be used as an increment

2. The built-in incremental backup is based on the changed data pages after the last backup, and also backs up the undo and redo changes during the backup process

How to backup

1. Take the first full backup

innobackupex  --user=root --password=123 --no-timestamp /backup/xfull

Modify the source database (e.g., update a row) and then commit.

2. Perform incremental backup

innobackupex --user=root --password=123  --incremental --no-timestamp --incremental-basedir=/backup/xfull/  /backup/xinc1

How to recover

 

1. First prepare the full backup (--apply-log with --redo-only, so uncommitted transactions are not rolled back yet)

innobackupex --apply-log --redo-only /backup/xfull/     

2. Merge increment into full backup (consistent consolidation)

innobackupex --apply-log --incremental-dir=/backup/xinc1 /backup/xfull/
innobackupex --apply-log /backup/xfull

 

3. Restore after consolidation

Method 1: copy the backup files back directly

cp -a /backup/xfull/* /application/mysql/data
chown -R mysql.mysql /application/mysql/data
chown -R mysql.mysql /application/mysql/data

 

Method 2: use innobackupex command to recover (recommended)

[root@db02 mysql]# innobackupex --copy-back /backup/xfull
[root@db02 mysql]# chown -R mysql.mysql /application/mysql/

 

Note: whichever recovery method is used, the owner and group must be changed back to mysql after recovery so the files match the server process.

1.8.3 database backup strategy

A full backup is taken once a week, on Sunday; Monday through Saturday a daily incremental is taken, rotating weekly.

xfull       --apply-log --redo-only; ensure last_lsn = start LSN of Monday's incremental
xinc1       merge Monday's incremental into the full backup: --apply-log --redo-only; ensure last_lsn = start LSN of Tuesday's incremental
xinc2       merge Tuesday's incremental into the full backup: --apply-log --redo-only; ensure last_lsn = start LSN of Wednesday's incremental
xinc3       merge Wednesday's incremental into the full backup: --apply-log --redo-only; ensure last_lsn = start LSN of Thursday's incremental
xinc4       merge Thursday's incremental into the full backup: --apply-log --redo-only; ensure last_lsn = start LSN of Friday's incremental
xinc5       merge Friday's incremental into the full backup: --apply-log --redo-only; ensure last_lsn = start LSN of Saturday's incremental
xinc6       merge Saturday's incremental into the full backup: --apply-log; ready to restore
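The rotation above can be driven by a small script. A sketch that only builds and prints the innobackupex commands for review — directory names follow this article's layout and are assumptions:

```shell
# Build the prepare/merge command sequence for a Sunday full backup
# plus Monday-Saturday incrementals (xinc1..xinc6).
FULL=/backup/xfull
plan="innobackupex --apply-log --redo-only $FULL"
for day in 1 2 3 4 5; do
    plan="$plan
innobackupex --apply-log --redo-only --incremental-dir=/backup/xinc$day $FULL"
done
# last incremental without --redo-only, then a final prepare
plan="$plan
innobackupex --apply-log --incremental-dir=/backup/xinc6 $FULL
innobackupex --apply-log $FULL"
echo "$plan"
```

Reviewing the printed plan before running it catches the classic mistake of applying the last incremental with --redo-only, which leaves the backup unrestorable until re-prepared.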

 

1.8.4 case analysis of real production

Background: the website core system of a logistics company has 220G data and 100M-200M daily updates

Backup scheme: xtrabackup full backup + incremental backup

Backup policy (crontab):

1. Full backup at 0:00 on Saturday

       0 0 * * 6 zjs_full.sh

2. Monday through Friday and Sunday: daily incrementals, each based on the previous day's backup

       0 1 * * 0-5 zjs_inc.sh

Fault scenario:

At 2 p.m. on Wednesday, developers mistakenly deleted a table zjs_base, about 10G.

Project responsibilities:

1) specify the recovery scheme and make use of the existing backup;

2) recover the data deleted by mistake;

3) formulate operation and maintenance and development process specifications.

Recovery process:

    a)    Prepare last Saturday's full backup.
    b)    Merge the Sunday, Monday, and Tuesday incrementals into it.
    c)    Restore this data into a test instance. The data state is now Wednesday 1:00 a.m.
    d)    The state to recover to is around 2 p.m. on Wednesday.
    e)    Replay the binlog from 1:00 a.m. up to just before the deletion.
    f)    Export the dropped table zjs_base and restore it into the production database; verify data availability and integrity.
    g)    Start the application and connect it to the database.

Summary: after 30 minutes, the deleted table was restored. The service stopped for a total of 40 minutes.

1.8.5 summary of fault recovery

Recovery ideas:

1. First, ensure that all applications are disconnected to ensure the safety of data.

2. Check whether the backup used for recovery exists.

3. Design a quick and simple scheme for safe recovery and formulate solutions to unexpected problems.

Specific recovery process:

        1. Prepare last Saturday's full backup with --apply-log --redo-only
        2. Merge the incrementals: Sunday, Monday, and Tuesday with --apply-log --redo-only, then Wednesday's with --apply-log
        3. Restore this data into the test instance. The data state is now Wednesday 1:00 a.m.
        4. The state to recover is just before zjs_base was dropped around 2 p.m.:
              replay the binlog from 1:00 a.m. up to the position of the event just before the DROP.
        5. Export the dropped table zjs_base and restore it into the production database; verify data availability and integrity.
        6. Start the application and connect it to the database.

 

Determine the time required for recovery

How long does the recovery window take?----> Estimated 3 hours
        Recover with you+verification+Related to the accident.
How long does the business stop?----> 6 Hours? Or more? Less?    

 

1.8.6 [simulation] production accident recovery

Data creation phase

1. Create the directory required for backup

mkdir -p /backup/xbackup/{full,inc1,inc2}

 

2. Full backup on Sunday

innobackupex --user=root --password=123 --no-timestamp /backup/xbackup/full/

 

3. Simulation data change

use oldboy
create table test(id int,name char(20),age int);
insert into test values(8,'outman',99);
insert into test values(9,'outgirl',100);
commit;

 

4. Monday incremental backup

innobackupex --user=root --password=123 --incremental --no-timestamp --incremental-basedir=/backup/xbackup/full/ /backup/xbackup/inc1

 

5. Simulation data change

use oldboy
insert into test values(8,'outman1',119);
insert into test values(9,'outgirl1',120);
commit;

 

6. Incremental backup on Tuesday

innobackupex --user=root --password=123 --incremental --no-timestamp --incremental-basedir=/backup/xbackup/inc1 /backup/xbackup/inc2

 

  7. Insert new row operation

use oldboy
insert into test values(10,'outman2',19);
insert into test values(11,'outgirl2',10);
commit;

 

Simulated misoperation accident

In the simulation scenario, the test table was deleted by mistake at 2 p.m. on Tuesday

    use oldboy;
    drop table test;

 

Preparing to recover data

  1. Prepare xtrabackup backup and merge backup

innobackupex --apply-log --redo-only /backup/xbackup/full
innobackupex --apply-log --redo-only --incremental-dir=/backup/xbackup/inc1 /backup/xbackup/full
innobackupex --apply-log  --incremental-dir=/backup/xbackup/inc2 /backup/xbackup/full
innobackupex --apply-log /backup/xbackup/full

2. Confirm the starting point of binlog and prepare to intercept binlog.

 cd /backup/xbackup/inc2/
 cat xtrabackup_binlog_info 
 mysql-bin.000001    1121

   3. Intercept the binlog before the DROP operation

    mysqlbinlog  --start-position=1121 /tmp/mysql-bin.000001 
    # find the event and position just before the DROP; suppose it is 1437
    mysqlbinlog  --start-position=1121 --stop-position=1437    /tmp/mysql-bin.000001 >/tmp/incbinlog.sql
    
    

 

4. Close the database and back up the binary log

/etc/init.d/mysqld stop
cd /application/mysql/data/
cp mysql-bin.000001 /tmp

 

5. Delete all MySQL data

cd /application/mysql/data/
rm -rf *

 

Recover data

1. Restore the full backup data to the data directory

innobackupex --copy-back /backup/xbackup/full/
chown -R mysql.mysql /application/mysql/data/
/etc/init.d/mysqld start

 

  2. Replay the intercepted binlog (in the mysql client, with session binary logging disabled)

set sql_log_bin=0;
source /tmp/incbinlog.sql

1.8.7 xtrabackup export

(1) "Export" a table. Exporting happens in the prepare phase of a backup, so once the full backup is complete, a table can be exported with the --export option during prepare:

innobackupex --apply-log --export /path/to/backup

This command creates an .exp file for each InnoDB table's tablespace; these .exp files can be used to import the tables on other servers.

(2) "Import" a table. To import an InnoDB table that was exported on another server, first create a table with the same structure as the original on the current server:

mysql> CREATE TABLE mytable (...)  ENGINE=InnoDB; 

Then discard this table's tablespace:

mysql> ALTER TABLE mydatabase.mytable  DISCARD TABLESPACE; 

Next, copy mytable.ibd and mytable.exp from the server where the table was exported into the data directory of the current server, then "import" it with the following command (remember to fix the file ownership):

mysql> ALTER TABLE mydatabase.mytable  IMPORT TABLESPACE;
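Putting these steps in order matters, because DISCARD TABLESPACE removes the table's current .ibd file: it must run before the exported files are copied in. A sketch that prints the sequence (hypothetical mydatabase.mytable names from above; the SQL is printed rather than executed, since the file copy happens between the two ALTER statements):

```shell
#!/bin/sh
# Sketch: correct ordering of the import sequence. DISCARD removes the table's
# existing .ibd file, so it must run BEFORE the exported files are copied in.
STEPS='ALTER TABLE mydatabase.mytable DISCARD TABLESPACE;
-- copy mytable.ibd and mytable.exp into the datadir, chown mysql:mysql
ALTER TABLE mydatabase.mytable IMPORT TABLESPACE;'
echo "$STEPS"
```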

 

Example:

innobackupex --user=root --password=123 --no-timestamp /backup/xbackup/full/

 

Enter the database directory inside the full backup

[root@db02 haha]# ls
db.opt  PENALTIES.frm  PENALTIES.ibd  PLAYERS.frm  PLAYERS.ibd
[root@db02 haha]# pwd
/backup/xbackup/full/haha

 

Export table

[root@db02 haha]# innobackupex --apply-log --export /backup/xbackup/full/  
[root@db02 haha]# ls
db.opt         PENALTIES.exp  PENALTIES.ibd  PLAYERS.exp  PLAYERS.ibd
PENALTIES.cfg  PENALTIES.frm  PLAYERS.cfg    PLAYERS.frm

 

Create a table with the same structure

CREATE TABLE `PLAYERS` (
  `PLAYERNO` int(11) NOT NULL,
  `NAME` char(15) NOT NULL,
  `INITIALS` char(3) NOT NULL,
  `BIRTH_DATE` date DEFAULT NULL,
  `SEX` char(1) NOT NULL,
  `JOINED` smallint(6) NOT NULL,
  `STREET` varchar(30) NOT NULL,
  `HOUSENO` char(4) DEFAULT NULL,
  `POSTCODE` char(6) DEFAULT NULL,
  `TOWN` varchar(30) NOT NULL,
  `PHONENO` char(13) DEFAULT NULL,
  `LEAGUENO` char(4) DEFAULT NULL,
  PRIMARY KEY (`PLAYERNO`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

 

Copy the exported files into the target database directory

[root@db02 haha]# cp  PLAYERS.ibd  PLAYERS.exp  /application/mysql/data/backup/
cp: overwrite `/application/mysql/data/backup/PLAYERS.ibd'? y

 

Recover data

Note that DISCARD TABLESPACE must be executed before the exported .ibd file is copied in (it removes the table's existing tablespace file); after copying, IMPORT TABLESPACE loads it:

mysql> ALTER TABLE backup.PLAYERS  DISCARD TABLESPACE;
Query OK, 0 rows affected (0.00 sec)

mysql> ALTER TABLE backup.PLAYERS  IMPORT TABLESPACE;

 

1.8.8 innobackupex parameter description


--compress

This option compresses the InnoDB data files in the backup.

--compress-threads   

This option sets the number of worker threads used for parallel compression.

--compress-chunk-size

This option sets the size of each compression thread's working buffer, in bytes. The default is 64K.

--encrypt            

This option encrypts the backup of the InnoDB data files with the given ENCRYPTION_ALGORITHM. The supported algorithms are aes128, aes192 and aes256.

--encrypt-threads    

This option sets the number of worker threads used for parallel encryption.

--encrypt-chunk-size 

This option sets the size of each encryption thread's working buffer, in bytes. The default is 64K.

--encrypt-key        

This option passes an encryption key of the appropriate length on the command line. It is not recommended, because the key is recorded in the command line history and process list.

--encryption-key-file

This option points to a key file, which must be a simple binary or text file. An encryption key can be generated with the following command: openssl rand -base64 24.
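For example, a key file can be prepared like this (a sketch; /tmp/xb_keyfile is an arbitrary path, and head -c 24 /dev/urandom | base64 produces the same kind of 24-byte base64 key as the openssl command above):

```shell
#!/bin/sh
# Sketch: create a 24-byte random key, base64-encoded, for --encryption-key-file.
KEYFILE=/tmp/xb_keyfile
head -c 24 /dev/urandom | base64 > "$KEYFILE"
chmod 600 "$KEYFILE"                 # keep the key unreadable to others
# 24 raw bytes -> 32 base64 characters (plus a trailing newline)
wc -c < "$KEYFILE"
```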

--include            

This option matches table names [db.tb] with a regular expression; the full name of the table to back up must be specified, i.e. databasename.tablename.

--user               

This option indicates the backup account.

--password           

This option indicates the password for the backup.

--port               

This option indicates the port of the backup database.

--host               

This option indicates the address of the backup database.

--databases

This option accepts a list of database names; multiple databases are separated by spaces, e.g. "xtra_test dba_test". A single table of a database may also be specified, e.g. "mydatabase.mytable". Note that this filter does not take effect for InnoDB engine tables: all InnoDB tables are still backed up. The option can also accept a file as its argument, with one backup object per line.

--tables-file     

This option specifies a file containing a list of tables, one database.table per line; the list is passed directly to xtrabackup's --tables-file option.

--socket          

This option specifies the location of mysql.sock so that the backup process can connect to MySQL.

--no-timestamp    

This option tells innobackupex not to create a timestamp-named subdirectory; the backup is written directly into the directory you specify.

--ibbackup        

This option specifies which xtrabackup binary to use; IBBACKUP-BINARY is the command that runs Percona XtraBackup. It is useful when the xtrabackup binary is not in your search path or working directory. If the option is not specified, innobackupex determines the binary automatically.

--slave-info      

This option is used when backing up a slave. It prints the master's name and binlog position, and also writes this information as a CHANGE MASTER command into the xtrabackup_slave_info file. A new slave can be set up from such a backup.

--safe-slave-backup

To guarantee a consistent replication state, this option stops the slave SQL thread and starts the backup only when Slave_open_temp_tables in SHOW STATUS is 0. If no temporary tables are open, the backup starts immediately; otherwise it waits for the SQL thread to close them. If Slave_open_temp_tables is still not 0 after --safe-slave-backup-timeout (300 seconds by default), the backup fails. The slave SQL thread is restarted when the backup completes.

--kill-long-queries-timeout

This option sets the number of seconds to wait between the start of FLUSH TABLES WITH READ LOCK and killing the queries that block it. The default is 0, which does not kill any queries. Using this option requires the PROCESS and SUPER privileges.

--kill-long-query-type    

This option sets which type of queries to kill. The default is all; select is also accepted.

--ftwrl-wait-threshold    

This option sets the threshold, in seconds, above which a query is considered long-running.

--ftwrl-wait-query-type   

This option sets which query types are allowed to complete before the global lock is taken. The default is all; update is also accepted.

--galera-info             

This option generates the xtrabackup_galera_info file containing the state of the local node at backup time; it is only applicable when backing up PXC (Percona XtraDB Cluster).

--stream                  

This option sets the streaming backup format; the backup is sent to STDOUT in the given format. Currently only tar and xbstream are supported.

--defaults-file           

This option specifies which file to read the MySQL configuration from; it must be the first option on the command line.

--defaults-extra-file     

This option specifies an extra file to read MySQL configuration from before the standard defaults file; it must be the first option on the command line. It is generally used for a configuration file holding the backup user's username and password.

--defaults-group

This option specifies the group to read from the configuration file; it is used when multiple instances are deployed with innobackupex.

--no-lock

This option disables the FTWRL table lock. Use it only when all tables are InnoDB and you do not care about the binlog position of the backup. Do not use it while any DDL statement is executing or non-InnoDB tables (including tables in the mysql database) are being updated; the result would be an inconsistent backup. If the concern is a backup failing to acquire the lock, consider --safe-slave-backup, which stops the replication thread instead.

--tmpdir

When --stream is specified, this option sets where temporary files are kept. The transaction log is written to a temporary file before being streamed or copied to the remote server. With --stream=tar, xtrabackup_logfile may be placed temporarily in /tmp; under heavy concurrent writes during the backup it can grow very large (5 GB+) and fill /tmp. Specifying a different directory with --tmpdir avoids this.

--history              

This option records the backup history of Percona Server in the PERCONA_SCHEMA.xtrabackup_history table.

--incremental          

This option creates an incremental backup; it requires --incremental-basedir.

--incremental-basedir  

This option accepts a string argument specifying the directory of the full backup that serves as the base of the incremental backup; it is used together with --incremental.

--incremental-dir      

This option represents the directory of the incremental backup.

--incremental-force-scan

This option means that when an incremental backup is created, the data pages in all incremental backups are forced to be scanned.

--incremental-lsn   

This option indicates the LSN of the specified incremental backup, which is used with the -- incremental option.

--incremental-history-name

This option specifies the name of the history record (stored in PERCONA_SCHEMA.xtrabackup_history) on which the incremental backup is based. Percona XtraBackup searches the history table for the most recent successful backup with this name and uses its to_lsn value as the starting LSN of the incremental backup. It is mutually exclusive with --incremental-history-uuid. If no valid LSN is found, xtrabackup returns an error.

--incremental-history-uuid

This option specifies the UUID of the history record (stored in PERCONA_SCHEMA.xtrabackup_history) on which the incremental backup is based.

--close-files            

This option closes file handles that are no longer accessed. When xtrabackup opens a tablespace, it normally keeps the file handle open in order to handle DDL operations correctly; if the number of tablespaces is very large, this option is a way to release handles that are no longer needed. Using it is risky and may result in inconsistent backups.

--compact                

This option means creating a compact backup without a secondary index.

--throttle               

This option limits the number of IO operations per second. It only takes effect during the backup phase; it has no effect with --apply-log or --copy-back, so do not combine it with them.

1.9 references

https://www.cnblogs.com/cchust/p/5452557.html
http://www.cnblogs.com/gomysql/p/3650645.html  (xtrabackup explained in detail)
https://www.percona.com/software/mysql-database/percona-xtrabackup
https://learn.percona.com/hubfs/Manuals/Percona_Xtra_Backup/Percona_XtraBackup_2.4/Percona-XtraBackup-2.4.9.pdf

 

 

Tags: Database MySQL

Posted by titeroy on Thu, 12 May 2022 11:10:13 +0300