Master-slave replication for a MySQL database that has been running for some time
Preface: a few days ago I set up master-slave replication for a MySQL database that had been running for some time without binlog enabled. I'm recording the detailed steps and explanations here. Note that this setup does not use GTID-based replication, and the configuration uses wildcards so that only tables with specific names are synchronized.
Master and slave database versions: both MySQL 5.7
1. Configure the master database, enable binlog, and restart
First edit the master's configuration file. On my server the path is /etc/my.cnf. Don't forget to back it up first in case of a configuration mistake.
cp /etc/my.cnf /etc/my.cnf.bak
vim /etc/my.cnf
Add the following options; each one is explained in a comment:
server-id=234                   # Must be unique within the replication cluster; I use the last octet of the server's IP address
binlog-do-db=master_slave_test  # Database that needs to be synchronized
binlog-ignore-db=mysql          # Database that does not need binlog
log_bin=mysql-bin               # Enable binlog
master_info_repository=TABLE    # Record the master's status in a table instead of a file
relay_log_info_repository=TABLE # Record the replication position in a table instead of a file
relay_log_recovery=ON           # After an unexpected shutdown, the slave discards unapplied relay logs and refetches them, resuming at a consistent position
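The server-id convention above (last octet of the IP address) can be derived in shell. A minimal sketch; the address below is illustrative, not taken from a real server:

```shell
# Derive server-id from the last octet of the host's IPv4 address.
# The address is illustrative; on a real host you might read it from
# `hostname -I` or similar.
ip=10.10.8.234
server_id=${ip##*.}   # strip everything up to and including the last dot
echo "server-id=$server_id"
```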
Restart the master database:
systemctl restart mysqld
2. Configure the slave database, specify the database to synchronize, and restart
server-id=167                                      # Must be unique; last octet of the slave's IP address
slave-skip-errors=all                              # Skip any error encountered during replication and continue with the following SQL
replicate_do_db=master_slave_test                  # Database to replicate
replicate_wild_do_table=master_slave_test.%table%  # LIKE-style wildcard: replicate only tables whose names contain "table"
slave-parallel-type=LOGICAL_CLOCK                  # Logical-clock-based parallel replication
slave-parallel-workers=4                           # Number of parallel applier threads
master_info_repository=TABLE                       # Record the master's status in a table instead of a file
relay_log_info_repository=TABLE                    # Record the replication position in a table instead of a file
relay_log_recovery=ON                              # After an unexpected shutdown, discard unapplied relay logs and refetch them, resuming at a consistent position
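The replicate_wild_do_table option uses LIKE-style wildcards, so master_slave_test.%table% matches any table whose name contains "table". A quick sketch of which names would match, using the analogous shell glob *table*; the table names are made up for illustration:

```shell
# Simulate which table names the %table% pattern would replicate.
# MySQL's % behaves like the shell glob * here; table names are hypothetical.
for t in user_table table_orders accounts; do
  case "$t" in
    *table*) echo "$t: replicated" ;;
    *)       echo "$t: skipped" ;;
  esac
done
```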
Restart the slave database:
systemctl restart mysqld
3. Export the data from the master database
On the master, use the mysqldump tool to export and compress the data that accumulated before binlog was enabled.
mysqldump -uroot -P3317 -p --single-transaction --master-data=2 --databases database_name --tables table_name_1 table_name_2 > master_slave_test.sql
gzip master_slave_test.sql
- --single-transaction: dump within a consistent snapshot, so tables are not locked and the dumped data is still consistent.
- --master-data: with =1, the exported SQL file contains an executable statement such as CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1565461; which runs automatically when the dump is imported on the slave. Since we will write a full CHANGE MASTER statement by hand when starting replication, we don't want that statement executed during the import; we only need to read the binlog coordinates from the dump. So we specify =2, which writes the same statement into the dump as a comment.
When the amount of data is very large, you can use nohup to run mysqldump in the background. The specific steps are as follows:
nohup mysqldump -uroot -P3317 -p --single-transaction --master-data=2 --databases database_name --tables table_name_1 table_name_2 > master_slave_test.sql
(press Enter and type the database password at the prompt)
(press Ctrl+Z to suspend the job)
bg (resume the job in the background)
4. Import the data on the slave database
Log in to the server hosting the slave, then copy the SQL file over and decompress it.
scp -P 22 -l 10000 email@example.com:/data/backup/master_slave_test.sql.gz .
gzip -d master_slave_test.sql.gz
- -P: the ssh port to connect on
- -l: limit scp's bandwidth (in Kbit/s) to avoid impacting other services; 10000 here is roughly 1.2 MB/s
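Since scp's -l flag takes its limit in Kbit/s, the figure above can be sanity-checked with a little arithmetic:

```shell
# 10000 Kbit/s -> MB/s: divide by 8 for bytes, then by 1024 for mega
awk 'BEGIN { printf "%.2f MB/s\n", 10000 / 8 / 1024 }'
```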
After decompressing, import the data on the slave. Before importing, first create the master_slave_test database on the slave.
create database master_slave_test charset=utf8mb4;
exit;
Then import the data.
mysql -P3317 -uroot -p master_slave_test <master_slave_test.sql
Similarly, nohup can be used to run the scp transfer or the import in the background.
5. Create a replication account on the master and grant it replication privileges
grant replication slave on *.* to 'slave_test'@'%' identified by 'Your password';
6. Start replication on the slave
CHANGE MASTER TO MASTER_HOST='10.10.8.103', MASTER_PORT=3317, MASTER_USER='slave_test', MASTER_PASSWORD='Your password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=489038 for channel 'master_slave_test';
- MASTER_HOST: ip address of the master database
- MASTER_PORT: the port of the master database
- MASTER_USER: the replication account just created on the master
- MASTER_LOG_FILE and MASTER_LOG_POS: taken from the commented CHANGE MASTER line in the exported SQL file; it appears within the first 50 lines.
head -50 master_slave_test.sql
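Instead of eyeballing the first 50 lines, the coordinates can be grepped out directly. The dump header is mocked below so the sketch runs standalone; against a real dump you would point the same grep at master_slave_test.sql:

```shell
# Mock the commented coordinates line that --master-data=2 writes into the dump
# (file name and position match the ones used elsewhere in this article)
cat > /tmp/mock_dump.sql <<'EOF'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=489038;
EOF

# On a real dump: grep -m1 'CHANGE MASTER TO' master_slave_test.sql
grep -m1 'CHANGE MASTER TO' /tmp/mock_dump.sql
```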
- FOR CHANNEL: names the replication channel; if more sources are replicated in the future, channels make them easier to manage.
start slave for channel 'master_slave_test';
View master-slave synchronization status:
show slave status \G
I'll pick out a few of the fields here. Pay attention to Slave_IO_Running and Slave_SQL_Running: if both are Yes, replication is healthy. If something is wrong, one of them shows No or Connecting, and Last_IO_Error or Last_SQL_Error below carries the detailed error message.
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Exec_Master_Log_Pos: 73894796
Relay_Log_Space: 73895318
Seconds_Behind_Master: 0
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
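The two Yes flags and the replication lag can also be checked from a script. A minimal monitoring sketch; the status text is mocked here, while on a real slave you would feed in the output of mysql -e 'SHOW SLAVE STATUS\G' instead:

```shell
# Mocked fragment of SHOW SLAVE STATUS\G output
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Extract the replication lag in seconds
echo "$status" | awk -F': ' '/Seconds_Behind_Master/ { print $2 }'

# Exit non-zero unless both threads report Yes (line count must be exactly 2)
echo "$status" | grep -c 'Running: Yes' | grep -qx 2
```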