23, MySQL read-write separation with Mycat

1, Introduction to mycat

MyCat is an open-source distributed database system. It is a server that implements the MySQL protocol, so front-end users can treat it as a database proxy and access it with standard MySQL client tools and the command line. On the back end it can communicate with multiple MySQL servers using the MySQL native protocol, or with most mainstream database servers over JDBC. Its core function is sharding of tables and databases: a large table is split horizontally into N small tables that are stored on back-end MySQL servers or other databases.

As the project has evolved, MyCat is no longer a simple MySQL proxy. Its back end supports mainstream databases such as MySQL, SQL Server, Oracle, DB2 and PostgreSQL, as well as MongoDB as a NoSQL storage mode, and more storage types will be supported in the future. From the end user's point of view, whatever the storage mode, in MyCat everything looks like a traditional database table that supports standard SQL for data operations. For the front-end business system this greatly reduces development difficulty and speeds up development.

When an application only needs one database server, Mycat is unnecessary. But once databases or even tables have to be split, the application faces many databases, and an abstraction over the database layer is needed to manage them; the upper-level application then only deals with this abstraction layer, or database middleware. That is the core role of Mycat.

Summary: Mycat is a popular database middleware. To reduce the pressure on a single database, it supports master-slave replication, hot standby, and table and database sharding, enabling a distributed database architecture.

Reference blog: https://blog.csdn.net/nxw_tsp/article/details/56277430

2, Mycat principle

The principle of Mycat is that users no longer access the back-end databases directly. Mycat receives the SQL statement sent by the user, first performs some specific analysis on it, such as shard analysis, routing analysis, read-write separation analysis and cache analysis, then sends the SQL to the real back-end database, processes the returned results appropriately, and finally returns them to the user.

The application no longer accesses the database directly, but accesses Mycat. Mycat interacts with the databases, the data is returned to Mycat, and Mycat returns it to the application. In the typical diagram, the three databases are the real databases, also called three nodes or three shards.

Summary: as middleware, Mycat is what the application accesses directly. The application no longer manages the real databases; Mycat interacts with them instead. There can be multiple real databases, which is the distributed architecture, i.e. multiple nodes (multiple shards).
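As a minimal illustration (using the node3 address 192.168.32.203, Mycat's default port 8066 and default password 123456 that appear later in this article), the application simply connects to Mycat as if it were an ordinary MySQL server:

# The application talks to Mycat instead of a real database;
# Mycat parses and routes the SQL to the back-end MySQL instances.
mysql -uroot -p123456 -h192.168.32.203 -P8066 -e "show databases;"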

 

3, Mycat application scenarios

As Mycat has developed, its applicable scenarios have become very rich, and new users keep contributing new and innovative solutions. The following are several typical application scenarios:

Simple read-write separation: the configuration is simplest in this case, supporting read-write separation and master-slave switchover. Table and database sharding: tables with more than 10 million rows can be sharded, with support for up to 100 billion rows in a single sharded table

Multi-tenant applications: each tenant gets its own database, but the application only connects to Mycat, so multi-tenancy is achieved without modifying the application itself

Reporting systems: with Mycat's table-sharding ability, large-scale report statistics can be handled

Replacing HBase for big data analysis

As a simple and effective solution for real-time queries over massive data: for example, 10 billion frequently queried records that must return results within 3 seconds. Besides primary-key queries there may also be range queries or queries on other attributes. In such cases Mycat may be the simplest and most effective option

Mycat long-term roadmap

Strengthen its capabilities as distributed database middleware: rich plug-ins, powerful intelligent database optimization, comprehensive system monitoring, and convenient data operation and maintenance tools, and implement advanced features such as online data expansion and migration

Move further into big data computing, integrating deeply with distributed real-time stream engines such as Spark Streaming and Storm to provide OLAP capabilities such as fast joins, sorting and group aggregation over huge tables, and bundle popular real-time analysis algorithms, so that engineers and DBAs can more easily implement advanced data analysis and processing with Mycat.

Continuously strengthen the technical level of the Mycat open source community, attract more IT experts, make the Mycat community China's Apache, and push Mycat to the Apache Foundation to become a top open source project from China; eventually some volunteers can become full-time Mycat developers, growing in reputation and strength together.

Relying on the Mycat community, gather 100 CXO-level elites and raise funds to build a Pro villa; Mycat community + Pro villa = China's largest IT O2O community

Reference: https://www.cnblogs.com/andy6/p/6622324.html (collected in the Mycat Authoritative Guide); thanks to the author

4, Preliminary preparation of mycat

4.1 Mycat architecture diagram

 

4.2 Mycat experimental environment

192.168.32.201  db01 + mysql5.7 multiple instances
192.168.32.202  db02 + mysql5.7 multiple instances
192.168.32.203  node3 + mycat
System OS: CentOS Linux release 7.6.1810 (Core)
MySQL version: mysql-5.7.20-linux-glibc2.12-x86_64.tar.gz (binary deployment)
Mycat version: Mycat-server-1.6.7.1-release-20190627191042-linux.tar.gz

MySQL related directories (identical on db01 and db02):
  MySQL software directory:         /app/mysql
  MySQL data directory:             /data
  Multi-instance data directories:  /data/33{07..10}

db01
3307: server_id=7
3308: server_id=8
3309: server_id=9
3310: server_id=10

db02
3307: server_id=17
3308: server_id=18
3309: server_id=19
3310: server_id=20

4.3 binary deployment

Reference: https://www.cnblogs.com/yaokaka/p/13914362.html

4.4 MySQL experimental environment preparation

Two virtual machines: db01 and db02.
Create four MySQL instances on each: 3307, 3308, 3309, 3310.

db01 and db02 perform the same operations; only the server-id values in their configuration files differ.

1. Create the relevant directories and initialize the data

 
mkdir /data/33{07..10}/data -p
mysqld --initialize-insecure  --user=mysql --datadir=/data/3307/data --basedir=/app/mysql
mysqld --initialize-insecure  --user=mysql --datadir=/data/3308/data --basedir=/app/mysql
mysqld --initialize-insecure  --user=mysql --datadir=/data/3309/data --basedir=/app/mysql
mysqld --initialize-insecure  --user=mysql --datadir=/data/3310/data --basedir=/app/mysql

2. Prepare configuration files and startup scripts

db01

 
cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
log-error=/data/3307/mysql.log
log_bin=/data/3307/mysql-bin
binlog_format=row
skip-name-resolve
server-id=7
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF

cat >/data/3308/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3308/data
port=3308
socket=/data/3308/mysql.sock
log-error=/data/3308/mysql.log
log_bin=/data/3308/mysql-bin
binlog_format=row
skip-name-resolve
server-id=8
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF

cat >/data/3309/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
port=3309
log-error=/data/3309/mysql.log
log_bin=/data/3309/mysql-bin
binlog_format=row
skip-name-resolve
server-id=9
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3310/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3310/data
socket=/data/3310/mysql.sock
port=3310
log-error=/data/3310/mysql.log
log_bin=/data/3310/mysql-bin
binlog_format=row
skip-name-resolve
server-id=10
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF

cat >/etc/systemd/system/mysqld3307.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3307/my.cnf
LimitNOFILE = 5000
EOF

cat >/etc/systemd/system/mysqld3308.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3308/my.cnf
LimitNOFILE = 5000
EOF

cat >/etc/systemd/system/mysqld3309.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3309/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3310.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target

[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3310/my.cnf
LimitNOFILE = 5000
EOF

db02

cat >/data/3307/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3307/data
socket=/data/3307/mysql.sock
port=3307
log-error=/data/3307/mysql.log
log_bin=/data/3307/mysql-bin
binlog_format=row
skip-name-resolve
server-id=17
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3308/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3308/data
port=3308
socket=/data/3308/mysql.sock
log-error=/data/3308/mysql.log
log_bin=/data/3308/mysql-bin
binlog_format=row
skip-name-resolve
server-id=18
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF
cat >/data/3309/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3309/data
socket=/data/3309/mysql.sock
port=3309
log-error=/data/3309/mysql.log
log_bin=/data/3309/mysql-bin
binlog_format=row
skip-name-resolve
server-id=19
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF


cat >/data/3310/my.cnf<<EOF
[mysqld]
basedir=/app/mysql
datadir=/data/3310/data
socket=/data/3310/mysql.sock
port=3310
log-error=/data/3310/mysql.log
log_bin=/data/3310/mysql-bin
binlog_format=row
skip-name-resolve
server-id=20
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
EOF

cat >/etc/systemd/system/mysqld3307.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3307/my.cnf
LimitNOFILE = 5000
EOF

cat >/etc/systemd/system/mysqld3308.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3308/my.cnf
LimitNOFILE = 5000
EOF

cat >/etc/systemd/system/mysqld3309.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3309/my.cnf
LimitNOFILE = 5000
EOF
cat >/etc/systemd/system/mysqld3310.service<<EOF
[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
ExecStart=/app/mysql/bin/mysqld --defaults-file=/data/3310/my.cnf
LimitNOFILE = 5000
EOF

3. Modify directory permissions and start multiple instances

chown -R mysql.mysql /data/*
systemctl start mysqld3307
systemctl start mysqld3308
systemctl start mysqld3309
systemctl start mysqld3310

4. Multi instance test

db01

 
[root@db01 ~]# mysql -S /data/3307/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 7     |
+---------------+-------+
[root@db01 ~]# mysql -S /data/3308/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 8     |
+---------------+-------+
[root@db01 ~]# mysql -S /data/3309/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 9     |
+---------------+-------+
[root@db01 ~]# mysql -S /data/3310/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 10    |
+---------------+-------+

db02

[root@db02 ~]# mysql -S /data/3307/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 17    |
+---------------+-------+
[root@db02 ~]# mysql -S /data/3308/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 18    |
+---------------+-------+
[root@db02 ~]# mysql -S /data/3309/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 19    |
+---------------+-------+
[root@db02 ~]# mysql -S /data/3310/mysql.sock -e "show variables like 'server_id'"
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 20    |
+---------------+-------+

4.5 node master-slave planning

 
The arrow points to the master library:
   192.168.32.201:3307    <----->   192.168.32.202:3307
   192.168.32.201:3309    ------>   192.168.32.201:3307
   192.168.32.202:3309    ------>   192.168.32.202:3307

   192.168.32.202:3308   <----->    192.168.32.201:3308
   192.168.32.202:3310   ------>    192.168.32.202:3308
   192.168.32.201:3310   ------>    192.168.32.201:3308

4.6 Shard planning

 
shard1: 
    Master: 192.168.32.201:3307
    slave1: 192.168.32.201:3309
    Standby Master: 192.168.32.202:3307
    slave2: 192.168.32.202:3309
shard2: 
    Master: 192.168.32.202:3308
    slave1: 192.168.32.202:3310
    Standby Master: 192.168.32.201:3308
    slave2: 192.168.32.201:3310

4.7 master-slave relationship configuration

1. shard1

192.168.32.201:3307 <-----> 192.168.32.202:3307

db02

mysql  -S /data/3307/mysql.sock -e "grant replication slave on *.* to repl@'192.168.32.%' identified by '123';"
mysql  -S /data/3307/mysql.sock -e "grant all on *.* to root@'192.168.32.%' identified by '123' with grant option;"

db01

mysql  -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.32.202', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3307/mysql.sock -e "start slave;"
mysql  -S /data/3307/mysql.sock -e "show slave status\G"

db02

mysql  -S /data/3307/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.32.201', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3307/mysql.sock -e "start slave;"
mysql  -S /data/3307/mysql.sock -e "show slave status\G"

192.168.32.201:3309 ----> 192.168.32.201:3307

db01

mysql  -S /data/3309/mysql.sock  -e "CHANGE MASTER TO MASTER_HOST='192.168.32.201', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3309/mysql.sock  -e "start slave;"
mysql  -S /data/3309/mysql.sock  -e "show slave status\G"

192.168.32.202:3309 ----> 192.168.32.202:3307

db02

 
mysql  -S /data/3309/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.32.202', MASTER_PORT=3307, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3309/mysql.sock -e "start slave;"
mysql  -S /data/3309/mysql.sock -e "show slave status\G"

2. shard2

192.168.32.202:3308 <-----> 192.168.32.201:3308

db01

mysql  -S /data/3308/mysql.sock -e "grant replication slave on *.* to repl@'192.168.32.%' identified by '123';"
mysql  -S /data/3308/mysql.sock -e "grant all on *.* to root@'192.168.32.%' identified by '123' with grant option;"

db02

mysql  -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.32.201', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3308/mysql.sock -e "start slave;"
mysql  -S /data/3308/mysql.sock -e "show slave status\G"

db01

mysql  -S /data/3308/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.32.202', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3308/mysql.sock -e "start slave;"
mysql  -S /data/3308/mysql.sock -e "show slave status\G"

192.168.32.202:3310 ----> 192.168.32.202:3308

db02

mysql  -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.32.202', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3310/mysql.sock -e "start slave;"
mysql  -S /data/3310/mysql.sock -e "show slave status\G"

192.168.32.201:3310 ----> 192.168.32.201:3308

db01

mysql  -S /data/3310/mysql.sock -e "CHANGE MASTER TO MASTER_HOST='192.168.32.201', MASTER_PORT=3308, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';"
mysql  -S /data/3310/mysql.sock -e "start slave;"
mysql  -S /data/3310/mysql.sock -e "show slave status\G"

3. Testing

mysql -S /data/3307/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3308/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3309/mysql.sock -e "show slave status\G"|grep Yes
mysql -S /data/3310/mysql.sock -e "show slave status\G"|grep Yes

db01

[root@db01 ~]# mysql -S /data/3307/mysql.sock -e "show slave status\G"|grep Yes
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
[root@db01 ~]# mysql -S /data/3308/mysql.sock -e "show slave status\G"|grep Yes
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
[root@db01 ~]# mysql -S /data/3309/mysql.sock -e "show slave status\G"|grep Yes
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
[root@db01 ~]# mysql -S /data/3310/mysql.sock -e "show slave status\G"|grep Yes
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

db02

[root@db02 ~]# mysql -S /data/3307/mysql.sock -e "show slave status\G"|grep Yes
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
[root@db02 ~]# mysql -S /data/3308/mysql.sock -e "show slave status\G"|grep Yes
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
[root@db02 ~]# mysql -S /data/3309/mysql.sock -e "show slave status\G"|grep Yes
            Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
[root@db02 ~]# mysql -S /data/3310/mysql.sock -e "show slave status\G"|grep Yes
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

5, Deployment of mycat

5.1 Mycat download

Mycat official website: http://www.mycat.org.cn/
Downloads: https://github.com/MyCATApache/Mycat-download
Experimental version: Mycat-server-1.6.7.1-release-20190627191042-linux.tar.gz

5.2 Install the Java runtime environment in advance

[root@node3 app]# yum install -y java

5.3 decompress the software and configure environment variables

 
[root@node3 app]# cd /app
#Upload mycat software
[root@node3 app]# tar -xf Mycat-server-1.6.7.1-release-20190627191042-linux.tar.gz 
[root@node3 app]# cd mycat/
[root@node3 mycat]# ll
total 12
drwxr-xr-x 2 root root  190 Dec  9 05:26 bin
drwxrwxrwx 2 root root    6 Jun 24  2019 catlet
drwxrwxrwx 4 root root 4096 Dec  9 05:26 conf
drwxr-xr-x 2 root root 4096 Dec  9 05:26 lib
drwxrwxrwx 2 root root    6 Jun 26  2019 logs
-rwxrwxrwx 1 root root  227 Jun 27  2019 version.txt
[root@node3 mycat]# echo 'export PATH=/app/mycat/bin:$PATH' >> /etc/profile
[root@node3 mycat]# source /etc/profile

5.4 Configuration file introduction

 
logs directory:
wrapper.log       ----> Mycat startup log
mycat.log         ----> Mycat detailed working log
conf directory:
schema.xml
    Main configuration file (read-write separation, high availability, distributed sharding strategy, node control)
server.xml
    Configuration of the Mycat service itself
rule.xml
    Sharding rule configuration file, recording the list of sharding rules, how they are used, etc.

5.5 startup and connection

Start mycat

 
[root@node3 mycat]# mycat start
Starting Mycat-server...
[root@node3 mycat]# ps -ef |grep mycat
root       48553       1  0 05:29 ?        00:00:00 /app/mycat/bin/./wrapper-linux-x86-64 /app/mycat/conf/wrapper.conf wrapper.syslog.ident=mycat wrapper.pidfile=/app/mycat/logs/mycat.pid wrapper.daemonize=TRUE wrapper.lockfile=/var/lock/subsys/mycat
root       48555   48553 12 05:29 ?        00:00:02 java -DMYCAT_HOME=. -server -XX:MaxPermSize=64M -XX:+AggressiveOpts -XX:MaxDirectMemorySize=2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1984 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Xmx4G -Xms1G -Djava.library.path=lib -classpath lib/wrapper.jar:conf:lib/asm-4.0.jar:lib/commons-collections-3.2.1.jar:lib/commons-lang-2.6.jar:lib/curator-client-2.11.0.jar:lib/curator-framework-2.11.0.jar:lib/curator-recipes-2.11.0.jar:lib/disruptor-3.3.4.jar:lib/dom4j-1.6.1.jar:lib/druid-1.0.26.jar:lib/ehcache-core-2.6.11.jar:lib/fastjson-1.2.12.jar:lib/guava-19.0.jar:lib/hamcrest-core-1.3.jar:lib/hamcrest-library-1.3.jar:lib/jline-0.9.94.jar:lib/joda-time-2.9.3.jar:lib/jsr305-2.0.3.jar:lib/kryo-2.10.jar:lib/leveldb-0.7.jar:lib/leveldb-api-0.7.jar:lib/libwrapper-linux-ppc-64.so:lib/libwrapper-linux-x86-32.so:lib/libwrapper-linux-x86-64.so:lib/log4j-1.2-api-2.5.jar:lib/log4j-1.2.17.jar:lib/log4j-api-2.5.jar:lib/log4j-core-2.5.jar:lib/log4j-slf4j-impl-2.5.jar:lib/mapdb-1.0.7.jar:lib/minlog-1.2.jar:lib/mongo-java-driver-2.11.4.jar:lib/Mycat-server-1.6.7.1-release.jar:lib/mysql-binlog-connector-java-0.16.1.jar:lib/mysql-connector-java-5.1.35.jar:lib/netty-3.7.0.Final.jar:lib/netty-buffer-4.1.9.Final.jar:lib/netty-common-4.1.9.Final.jar:lib/objenesis-1.2.jar:lib/reflectasm-1.03.jar:lib/sequoiadb-driver-1.12.jar:lib/slf4j-api-1.6.1.jar:lib/univocity-parsers-2.2.1.jar:lib/velocity-1.7.jar:lib/wrapper.jar:lib/zookeeper-3.4.6.jar -Dwrapper.key=eTK9vQYBISYnKubv -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.pid=48553 -Dwrapper.version=3.2.3 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp io.mycat.MycatStartup start
root       48606   48307  0 05:29 pts/0    00:00:00 grep --color=auto mycat

Connect to Mycat

[root@node3 app]# mysql -uroot -p123456 -h127.0.0.1 -P8066
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.1-release-20190627191042 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+----------+
| DATABASE |
+----------+
| TESTDB   |
+----------+
1 row in set (0.02 sec)

#The default password of mycat is 123456 and the port is 8066
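For reference, the login user, password and visible logical library come from conf/server.xml. A typical fragment looks like the following (the exact contents of your file may differ):

<user name="root">
        <property name="password">123456</property>
        <property name="schemas">TESTDB</property>
</user>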

6, mycat test

6.1 test environment preparation

1. db01 user creation and test database import

Because of the master-slave relationships, data written on db01 is automatically replicated to db02 and the other instances in each replication group.

 
world.sql is used as the test database.

db01:
mysql -S /data/3307/mysql.sock 
grant all on *.* to root@'192.168.32.%' identified by '123';
source /tmp/world.sql

mysql -S /data/3308/mysql.sock 
grant all on *.* to root@'192.168.32.%' identified by '123';
source /tmp/world.sql

2. Mycat configuration file setup

node3

[root@node3 ~]#cd /app/mycat/conf
[root@node3 ~]#mv schema.xml schema.xml.bak
[root@node3 ~]#cat >> schema.xml <<EOF 
<?xml version="1.0"?>  
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">  
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"> 
</schema>  
    <dataNode name="dn1" dataHost="ywx1" database= "world" />  
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1"> 
        <heartbeat>select user()</heartbeat>  
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123"> 
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" /> 
    </writeHost> 
    </dataHost>  
</mycat:schema>
EOF

6.2 Mycat configuration file description

1. Define logical library name and data node name

 
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"> 
</schema>  

#name="TESTDB" is the name of the configured logical library; dataNode="dn1" is the name of the data node

2. Define the data node (dataNode) and associate it with the back-end real database

<dataNode name="dn1" dataHost="ywx1" database= "world" />  

#Defines the data host dataHost="ywx1" and the back-end real database database="world"

3. Data host (dataHost): write hosts and read hosts

 
<dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1"> 
        <heartbeat>select user()</heartbeat>  
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123"> 
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" /> 
    </writeHost> 
    </dataHost>  
    
  #db1: data host 192.168.32.201:3307, the write library
  #db2: data host 192.168.32.201:3309, the read library

6.3 introduction to attribute parameters in configuration

1. balance attribute (default)

There are three load-balancing modes: 
1. balance="0": read-write separation is not enabled; all read operations are sent to the currently available writeHost. 
2. balance="1": all readHosts and the standby writeHosts take part in load balancing of select statements. In short, in dual-master dual-slave mode (M1->S1, M2->S2, with M1 and M2 as each other's standby), M2, S1 and S2 all take part in load balancing of select statements under normal circumstances. 
3. balance="2": all read operations are distributed randomly across the writeHosts and readHosts.

2. writeType property (default)

There are two types of load balancing: 
1. writeType="0", All writes are sent to the first configured writeHost,
The first one hung up and cut to the second one that survived writeHost,After restart, the switched master is recorded in the configuration file:dnindex.properties . 
2. writeType="1",All writes are randomly sent to the configured writeHost,Not recommended
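A quick way to check which writeHost is currently active is to look at that file. The contents below are only a hypothetical example; the file maps each dataHost name to the index of its active writeHost:

[root@node3 conf]# cat /app/mycat/conf/dnindex.properties
#hypothetical contents: dataHost ywx1 is still using its first writeHost (index 0);
#after a failover the index changes to 1
ywx1=0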

3. switchType attribute (default)

-1: automatic switchover disabled 
 1: default, automatic switchover 
 2: decide whether to switch based on the MySQL master-slave synchronization state; the heartbeat statement is show slave status 
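A minimal sketch (not used in the experiments below) of a dataHost whose switchover is driven by replication state, assuming switchType="2" combined with the show slave status heartbeat described above:

<dataHost name="ywx1" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="2">
    <!-- with switchType="2", Mycat decides whether to switch based on the
         master-slave synchronization state reported by this heartbeat -->
    <heartbeat>show slave status</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
        <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
</dataHost>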

4. Other dataHost configuration

<dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1"> 

maxCon="1000": maximum number of concurrent connections
minCon="10": number of connections Mycat opens automatically to the back-end node after startup
tempReadHostAvailable="1": this parameter can be enabled with one master and one slave (1 writeHost, 1 readHost)
<heartbeat>select user()</heartbeat>: heartbeat statement used to monitor the back end

7, Read-write separation configuration of Mycat

 
node3

[root@node3 ~]# cd /app/mycat/conf/
[root@node3 ~]#vim schema.xml 

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">  
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"> 
</schema>  
        <dataNode name="dn1" dataHost="ywx1" database= "world" />         
        <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">    
                <heartbeat>select user()</heartbeat>  
        <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123"> 
                        <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" /> 
        </writeHost> 
        </dataHost>  
</mycat:schema>

restart mycat
[root@node3 ~]#mycat restart

Read write separation test
[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P8066
Enter password: 

#The query operation goes to 192.168.32.201:3309, server_id=9
 mysql> show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 9     |
+---------------+-------+
1 row in set (0.02 sec)

Query OK, 0 rows affected (0.01 sec)

#The simulated write operation goes to 192.168.32.201:3307, server_id=7
 
 mysql> begin;show variables like 'server_id';commit;

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 7     |
+---------------+-------+
1 row in set (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

The above case implements read-write separation with 1 master and 1 slave: write operations fall on the master library and read operations fall on the slave library. However, if the master library goes down, the slave library cannot continue to provide service.

 
#Stop the main library 192.168.32.201:3307
#db01
[root@db01 tmp]# systemctl stop mysqld3307
[root@db01 tmp]# ss -antlp|grep 33
LISTEN     0      80          :::3308                    :::*                   users:(("mysqld",pid=18329,fd=22))
LISTEN     0      80          :::3309                    :::*                   users:(("mysqld",pid=18364,fd=22))
LISTEN     0      80          :::3310   

#Test query information on mycat node3
[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P8066
Enter password: 
mysql> show variables like 'server_id';
ERROR 1184 (HY000): java.net.ConnectException: Connection refused
mysql> 
#Unable to connect to the database

#Restart the main library 192.168.32.201:3307
#db01
[root@db01 tmp]# systemctl start mysqld3307

#Read and write on mycat
#node3
[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P8066
Enter password: 
mysql>  show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 9     |
+---------------+-------+
1 row in set (0.01 sec)

mysql> begin; show variables like 'server_id';commit;
Query OK, 0 rows affected (0.00 sec)

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 7     |
+---------------+-------+
1 row in set (0.01 sec)

Query OK, 0 rows affected (0.00 sec)
#Reading and writing returned to normal
#It is verified that if the master database goes down, the slave database cannot continue to provide services.

8, Configure read-write separation and high availability

To solve the problem that service stops when the master library goes down, we introduce a standby writeHost.

In this experiment we use db02: 192.168.32.202:3307.

The precondition is that db01 = 192.168.32.201:3307 and db02 = 192.168.32.202:3307 are masters of each other.

Configuration of mycat

node3

[root@node3 conf]# cat /app/mycat/conf/schema.xml
<?xml version="1.0"?>  
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">  
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"> 
</schema>  
    <dataNode name="dn1" dataHost="ywx1" database= "world" />  
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1"> 
        <heartbeat>select user()</heartbeat>  
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123"> 
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" /> 
    </writeHost> 
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123"> 
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" /> 
    </writeHost>        
    </dataHost>  
</mycat:schema>

#<writeHost host="db3" url="192.168.32.202:3307" user="root" password="123"> 
#            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" /> 
#    </writeHost>
#This defines the standby writeHost: writeHost = 192.168.32.202:3307, readHost = 192.168.32.202:3309
#writeHost db1 (192.168.32.201:3307) and writeHost db3 (192.168.32.202:3307) in the configuration are masters of each other

test

 
[root@node3 conf]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...

[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P8066
Enter password: 
#Write operation
mysql> begin;select @@server_id;commit;
Query OK, 0 rows affected (0.00 sec)

+-------------+
| @@server_id |
+-------------+
|           7 |
+-------------+
1 row in set (0.06 sec)

Query OK, 0 rows affected (0.00 sec)

#Read operation
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          19 |
+-------------+
1 row in set (0.00 sec)

mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          17 |
+-------------+
1 row in set (0.00 sec)

mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           9 |
+-------------+
1 row in set (0.00 sec)
#Read operations are distributed across 192.168.32.201:3309 (server_id=9), 192.168.32.202:3307 (server_id=17) and 192.168.32.202:3309 (server_id=19).
#Before the main library goes down:
#real writeHost = db1 (192.168.32.201:3307): handles writes
#standby writeHost = db3 (192.168.32.202:3307): behaves like a readHost and only serves reads

If the main library 192.168.32.201:3307 goes down, writes fall on 192.168.32.202:3307 and reads only go to 192.168.32.202:3309.

#db01
[root@db01 tmp]# systemctl stop mysqld3307

#node3
#mycat test
[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P8066
Enter password: 
#Read operations now only go to 192.168.32.202:3309 (server_id=19)
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          19 |
+-------------+
1 row in set (0.00 sec)

#Write operations fall on 192.168.32.202:3307 (server_id=17)
mysql> begin; show variables like 'server_id';commit;
Query OK, 0 rows affected (0.00 sec)

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 17    |
+---------------+-------+
1 row in set (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

When the original main library 192.168.32.201:3307 is restored, it automatically becomes the standby writeHost; writes still go to 192.168.32.202:3307 (server_id=17).

Reads are distributed across 192.168.32.202:3309 (server_id=19), 192.168.32.201:3307 (server_id=7) and 192.168.32.201:3309 (server_id=9).

 
#db01
[root@db01 tmp]# systemctl start mysqld3307

#node3
[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P8066
Enter password:
#Write operation
mysql> begin; show variables like 'server_id';commit;
Query OK, 0 rows affected (0.00 sec)

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 17    |
+---------------+-------+
1 row in set (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

#Read operation
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          19 |
+-------------+
1 row in set (0.00 sec)

mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           9 |
+-------------+
1 row in set (0.00 sec)

mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           7 |
+-------------+
1 row in set (0.00 sec)

9, Vertical table splitting

9.1 Introduction to vertical splitting

A database consists of many tables, and each table corresponds to a different business. Vertical splitting classifies the tables by business and distributes them to different databases, so that data and load are spread across different databases.

A system is divided into modules such as user, order/transaction and payment. In a well-architected application, the overall functionality is composed of many functional modules, and the data each module needs corresponds to one or more tables in the database. The more unified the interaction points between functional modules are, the lower the coupling of the system and the better the maintainability and extensibility of each module; in such a system it is easier to split the data vertically. However, some tables can rarely be made completely independent, and cross-database joins appear. For such tables you have to weigh whether the database should give way to the business and share one data source, or be split into multiple libraries with the business calling across them through interfaces. In the early stage of a system, when the data volume is small or resources are limited, a shared data source is usually chosen; but once the data grows to a certain scale and the load becomes heavy, it has to be split. Generally speaking, businesses with complex join scenarios are hard to split, while independent businesses are easy to split. How to split, and to what extent, is a difficult question that tests the technical architecture.

Advantages and disadvantages of vertical splitting:

Advantages:

• Clear business boundaries and clear splitting rules after the split;

• Easy integration or extension between systems;

• Simple data maintenance.

Disadvantages:

• Some business tables cannot be joined and can only be accessed through interfaces, which increases system complexity;

• Each business is still limited by a single database, so performance bottlenecks remain and it is hard to scale data or improve performance;

• Transaction processing becomes complex.

9.2 Vertical splitting configuration

Configuration file for vertical splitting

node3
[root@node3 conf]#cd /app/mycat/conf
[root@node3 conf]#vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
        <table name="user" dataNode="sh1"/>
        <table name="order_t" dataNode="sh2"/>
</schema>
    <dataNode name="sh1" dataHost="ywx1" database= "taobao" />
    <dataNode name="sh2" dataHost="ywx2" database= "taobao" />
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
    <dataHost name="ywx2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3308" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3308" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3310" user="root" password="123" />
    </writeHost>
    </dataHost>
</mycat:schema>

9.3 description of configuration file

<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
        <table name="user" dataNode="sh1"/>
        <table name="order_t" dataNode="sh2"/>

The user table is placed on the sh1 data node and the order_t table on the sh2 data node; other tables are placed on the sh1 data node by default.

 

</schema>
    <dataNode name="sh1" dataHost="ywx1" database= "taobao" />
    <dataNode name="sh2" dataHost="ywx2" database= "taobao" />

Define the data host names of data nodes sh1 and sh2 as ywx1 and ywx2, and associate them with the real database taobao on the back end

 

<dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
#Define the read-write information of the data host ywx1    
    
    <dataHost name="ywx2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3308" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3308" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3310" user="root" password="123" />
    </writeHost>
#Define the read-write information of the data host ywx2    

9.4 testing

Create test libraries and tables:
#db01
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "create database taobao charset utf8;"
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "create database taobao charset utf8;"
[root@db01 conf]# mysql -S /data/3307/mysql.sock -e "use taobao;create table user(id int,name varchar(20))";
[root@db01 conf]# mysql -S /data/3308/mysql.sock -e "use taobao;create table order_t(id int,name varchar(20))"

#View on mycat
#node3
[root@node3 conf]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P8066
Enter password: 
mysql> show databases;
+----------+
| DATABASE |
+----------+
| TESTDB   |
+----------+
1 row in set (0.00 sec)

mysql> use TESTDB;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------+
| Tables_in_taobao |
+------------------+
| order_t          |
| user             |
+------------------+
2 rows in set (0.01 sec)
#In mycat, you can see the information of all tables in all taobao libraries

#On sh1 you can only see the user table
#db01 192.168.32.201:3307
[root@db01 tmp]# mysql -S /data/3307/mysql.sock -e "use taobao;show tables;"
+------------------+
| Tables_in_taobao |
+------------------+
| user             |
+------------------+

#On sh2 you can only see the order_t table
#db01 192.168.32.201:3308
[root@db01 tmp]# mysql -S /data/3308/mysql.sock -e "use taobao;show tables;"
+------------------+
| Tables_in_taobao |
+------------------+
| order_t          |
+------------------+

10, Mycat core feature: sharding (horizontal splitting)

10.1 Introduction to horizontal splitting

Vertical splitting distributes tables to different databases by business classification, but some business tables are still too large and a single database hits read, write and storage bottlenecks; this has to be solved by horizontal splitting.

Compared with vertical splitting, horizontal splitting does not classify tables; instead, it distributes a table's rows across multiple databases according to a rule on a certain field, so that each database holds part of the data. In short, horizontal splitting is splitting of data rows: some rows of a table are placed in one database, while other rows are placed in other databases.

 

To split data you must define sharding rules. A relational database is a two-dimensional model of rows and columns, and the first principle of splitting is to find the splitting dimension. For example, from the member's point of view: if a merchant order system mainly queries a member's orders for a given day or month, the data should be split by member combined with date, grouping the data by member ID, so that all query joins are resolved within a single database. From the merchant's point of view, to query the number of orders of a merchant on a given day, the data should be split by merchant ID. But if the system wants to split by both member and merchant, there are difficulties; finding an appropriate sharding rule requires comprehensive consideration and trade-offs.

Several typical sharding rules include:

• Take the modulus of the user ID, so that data is scattered across different databases and a single user's data always lands in one database;
• Scatter data of different months or even days into different databases by date;
• Scatter data into different libraries by a specific field or a specific range of values.

The principle of splitting is to find a sharding rule suitable for the business and distribute the data into different libraries. The following example takes the modulus of the user ID:
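For instance, a simple sketch assuming two databases (db0 and db1) and user_id as the sharding key: the target database index is user_id % 2.

user_id % 2 = 0  ->  db0   (e.g. user_id 1002, 1004, 1006, ...)
user_id % 2 = 1  ->  db1   (e.g. user_id 1001, 1003, 1005, ...)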

 

Since the data is split, there are advantages and disadvantages.

Advantages:

• With well-abstracted splitting rules, joins can basically be done within a single database;
• No single-database performance bottleneck for big data volumes and high concurrency;
• Little modification needed on the application side;
• The stability and load capacity of the system are improved.

Disadvantages:

• Splitting rules are hard to abstract;
• Transaction consistency across shards is hard to guarantee;
• Expanding the data repeatedly is difficult and maintenance-heavy;
• Cross-database joins perform poorly.

Comparing vertical and horizontal splitting, each method has its own drawbacks, but they share some common problems:

• The introduction of distributed transactions;
• Cross-node join problems;
• Cross-node merging, sorting and paging problems;
• Managing multiple data sources.

There are two main ideas for data source management:

A. Client mode: configure and manage one (or more) data sources in each application module, directly access each database, and complete data integration in the module;

B. Middleware mode: all data sources are managed uniformly through an intermediate proxy layer, and the back-end database cluster is transparent to the front-end applications.

Faced with these two options, probably more than 90% of people lean toward the second, especially as the system keeps growing larger and more complex. This is indeed the right choice: although the cost may be somewhat higher in the short term, it is very helpful for the scalability of the whole system.

Through data sharding, Mycat addresses the shortcomings of a traditional database while gaining NoSQL-like ease of scaling. With the intermediate proxy layer, the problem of handling multiple data sources is avoided, everything is completely transparent to the application, and the problems that arise after data splitting are also handled.

Since joins become difficult after data is split, here is some practical experience on data splitting:

The first principle: do not split if you can avoid it.

The second principle: if you must split, choose appropriate sharding rules and plan ahead.

The third principle: when splitting, try to reduce the chance of cross-database joins through data redundancy or table grouping (Table Group).

The fourth principle: because it is hard for database middleware to weigh and implement data joins well, and very hard to do so with high performance, the business should use multi-table joins as little as possible.

10.2 Purpose of horizontal splitting

Sharding is applied to a "big table", for example the t3 table:

(1) Very many rows, e.g. 8 million
(2) Accessed very frequently

Purpose of sharding:
(1) Store a large volume of data in a distributed way
(2) Provide balanced access routing

10.3 Sharding strategies

Sharding strategies:
Range: e.g. 8 million rows split into 1-4,000,000 and 4,000,001-8,000,000
Modulus (mod): shard by remainder
Enumeration
Hash
Time / stream

Optimizing associated queries:
Global tables
ER sharding

1. Range sharding

For example, the t3 table:

(1) Very many rows, e.g. 20 million (ids 1-10,000,000 on sh1, 10,000,001-20,000,000 on sh2)

(2) Accessed very frequently, with scattered user access

Mycat configuration file for range sharding

node3

[root@node3 logs]# cd /app/mycat/conf
[root@node3 conf]# vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1"> 
        <table name="t3" dataNode="sh1,sh2" rule="auto-sharding-long" />
</schema>  
    <dataNode name="sh1" dataHost="ywx1" database= "taobao" /> 
    <dataNode name="sh2" dataHost="ywx2" database= "taobao" />  
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
    <dataHost name="ywx2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3308" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3308" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3310" user="root" password="123" />
    </writeHost>
    </dataHost>
</mycat:schema>


####################################################################
#<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1"> 
#       <table name="t3" dataNode="sh1,sh2" rule="auto-sharding-long" />
#The t3 table is distributed across the sh1 and sh2 data nodes using range sharding; other tables are on sh1 by default.
#rule="auto-sharding-long": auto-sharding-long is the range sharding rule
    

View the range rule definition in the rule.xml configuration file

 
node3

[root@node3 conf]#cd /app/mycat/conf
[root@node3 conf]# vim rule.xml
<tableRule name="auto-sharding-long">
                <rule>
                        <columns>id</columns>
                        <algorithm>rang-long</algorithm>
                </rule>
</tableRule>

<function name="rang-long"
    class="io.mycat.route.function.AutoPartitionByLong">
    <property name="mapFile">autopartition-long.txt</property>
</function>

#<columns>id</columns> means the table is sharded by the id column
#<algorithm>rang-long</algorithm> calls the rang-long function
#<function name="rang-long"
#    class="io.mycat.route.function.AutoPartitionByLong">
#    <property name="mapFile">autopartition-long.txt</property>
#</function>
#shows the rang-long function definition; it references autopartition-long.txt, which can be created manually in the conf directory if it does not exist

Configure the autopartition-long.txt mapping file

 
[root@node3 conf]# vim autopartition-long.txt 
# range start-end ,data node index
# K=1000,M=10000.
#0-500M=0
#500M-1000M=1
#1000M-1500M=2
#Default fit
0-10=0
11-20=1
#0 refers to the sh1 shard (data node index 0)
#1 refers to the sh2 shard (data node index 1)
#ids 0-10 are assigned to the sh1 shard
#ids 11-20 are assigned to the sh2 shard

Test

db01
Create the test tables:
mysql -S /data/3307/mysql.sock -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"

mysql -S /data/3308/mysql.sock  -e "use taobao;create table t3 (id int not null primary key auto_increment,name varchar(20) not null);"


Restart Mycat:
mycat restart

Insert data via Mycat:
[root@node3 conf]# mysql -uroot -p -h 127.0.0.1 -P 8066
password:
mysql> use TESTDB;
mysql> insert into t3(id,name) values(1,'a');
mysql> insert into t3(id,name) values(2,'b');
mysql> insert into t3(id,name) values(3,'c');
mysql> insert into t3(id,name) values(4,'d');
mysql> insert into t3(id,name) values(11,'aa');
mysql> insert into t3(id,name) values(12,'bb');
mysql> insert into t3(id,name) values(13,'cc');
mysql> insert into t3(id,name) values(14,'dd');
mysql> select * from t3;
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
|  4 | d    |
| 11 | aa   |
| 12 | bb   |
| 13 | cc   |
| 14 | dd   |
+----+------+
8 rows in set (0.05 sec)
#What you can see on mycat is the data of the whole table

#db02
#Viewing the t3 table on sh1, you can only see ids 1-4
192.168.32.202:3307
[root@db02 ~]# mysql -S /data/3307/mysql.sock  -e "use taobao;select * from t3;"
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
|  4 | d    |
+----+------+
#Viewing the t3 table on sh2, you can only see ids 11-14
192.168.32.202:3308
[root@db02 ~]# mysql -S /data/3308/mysql.sock  -e "use taobao;select * from t3;"
+----+------+
| id | name |
+----+------+
| 11 | aa   |
| 12 | bb   |
| 13 | cc   |
| 14 | dd   |
+----+------+

2. Modulus sharding (mod-long)

Modulus sharding: the remainder of the sharding key (a single column) divided by the number of nodes determines which node each row is written to.
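Concretely, with the two data nodes configured below (count = 2, node index 0 = sh1, node index 1 = sh2), a row's target node is id % 2, which matches the test results at the end of this subsection:

id = 1  ->  1 % 2 = 1  ->  sh2
id = 2  ->  2 % 2 = 0  ->  sh1
id = 3  ->  3 % 2 = 1  ->  sh2
id = 4  ->  4 % 2 = 0  ->  sh1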

mycat configuration file

node3

[root@node3 conf]#cd /app/mycat/conf
[root@node3 conf]# vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
        <table name="t4" dataNode="sh1,sh2" rule="mod-long" />
</schema>
    <dataNode name="sh1" dataHost="ywx1" database= "taobao" />
    <dataNode name="sh2" dataHost="ywx2" database= "taobao" />
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
    <dataHost name="ywx2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3308" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3308" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3310" user="root" password="123" />
    </writeHost>
    </dataHost>
</mycat:schema>

#<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
#        <table name="t4" dataNode="sh1,sh2" rule="mod-long" />
#The t4 table is split across sh1 and sh2 by modulo
#Other tables are placed on sh1 by default
#mod-long is the modulo sharding rule

View the rule definition of mod-long in the rule.xml configuration file

[root@node3 conf]# vim rule.xml
<tableRule name="mod-long">
      <rule>
          <columns>id</columns>
          <algorithm>mod-long</algorithm>
      </rule>
</tableRule>

#<columns>id</columns> the modulus is taken on the id column
#<algorithm>mod-long</algorithm> calls the mod-long function


<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
             <!-- how many data nodes -->
             <property name="count">2</property>
</function>

#<property name="count">2</property> indicates how many data nodes there are; we only have sh1 and sh2, so fill in 2
#columns identifies the table column used for sharding, and algorithm names the sharding function

test

#To create a test table:
#db02

[root@db02 ~]#mysql -S /data/3307/mysql.sock -e "use taobao;create table t4 (id int not null primary key auto_increment,name varchar(20) not null);"
[root@db02 ~]#mysql -S /data/3308/mysql.sock -e "use taobao;create table t4 (id int not null primary key auto_increment,name varchar(20) not null);"

#Restart mycat 
#node3

[root@node3 ~]#mycat restart 

Test: 
[root@node3 ~]#mysql -uroot -p123456 -h127.0.0.1 -P8066

mysql> use TESTDB
mysql> insert into t4(id,name) values(1,'a');
mysql> insert into t4(id,name) values(2,'b');
mysql> insert into t4(id,name) values(3,'c');
mysql> insert into t4(id,name) values(4,'d');

mysql> select * from t4;
+----+------+
| id | name |
+----+------+
|  2 | b    |
|  4 | d    |
|  1 | a    |
|  3 | c    |
+----+------+
4 rows in set (0.10 sec)


#Log in to the backend node to query the data
#db02

[root@db02 ~]# mysql -S /data/3308/mysql.sock -e "use taobao;select * from t4;"
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  3 | c    |
+----+------+
[root@db02 ~]# mysql -S /data/3307/mysql.sock -e "use taobao;select * from t4;"
+----+------+
| id | name |
+----+------+
|  2 | b    |
|  4 | d    |
+----+------+

3. Enumeration sharding

 
The t5 table:
id name telnum
1   bj   1212
2   sh   22222
3   bj   3333
4   sh   44444
5   bj   5555

mycat configuration file

 
[root@node3 ~]# cd /app/mycat/conf
[root@node3 conf]# vim schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
        <table name="t5" dataNode="sh1,sh2" rule="sharding-by-intfile" />
</schema>
    <dataNode name="sh1" dataHost="ywx1" database= "taobao" />
    <dataNode name="sh2" dataHost="ywx2" database= "taobao" />
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
    <dataHost name="ywx2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3308" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3308" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3310" user="root" password="123" />
    </writeHost>
    </dataHost>
</mycat:schema>


#<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
#        <table name="t5" dataNode="sh1,sh2" rule="sharding-by-intfile" />
#The t5 table is distributed across sh1 and sh2 by enumeration; other tables are on sh1 by default
#rule="sharding-by-intfile" is the enumeration sharding rule

View the rule definition of sharding-by-intfile in the rule.xml configuration file

 
<tableRule name="sharding-by-intfile">
                <rule>
                        <columns>name</columns>
                        <algorithm>hash-int</algorithm>
                </rule>

#<columns>name</columns> the column used for enumeration; here it is the name column
#<algorithm>hash-int</algorithm> calls the hash-int function

<function name="hash-int"
                class="io.mycat.route.function.PartitionByFileMap">
                <property name="mapFile">partition-hash-int.txt</property>
                    <property name="type">1</property>
                      <property name="defaultNode">0</property>
</function>
#The hash-int function reads the partition-hash-int.txt configuration file
#<property name="type">1</property>
#<property name="defaultNode">0</property>
#Add the above 2 lines manually; type=1 lets the sharding key be a string value (including Chinese characters)
#columns identifies the sharding column, algorithm the sharding function, and mapFile the name of the mapping file

Edit the partition-hash-int.txt configuration file in the mycat conf directory.

The default content uses numeric keys:
[root@node3 conf]# vim partition-hash-int.txt 
10000=0
10010=1
Change it to:
[root@node3 conf]# vim partition-hash-int.txt
bj=0 
sh=1
DEFAULT_NODE=1 
#0 indicates shard sh1
#1 indicates shard sh2
#rows with name=bj go to sh1
#rows with name=sh go to sh2
#all other values go to the default shard sh1
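
The effect of the mapping file can be mimicked with a small dictionary lookup. This is only a sketch of the routing idea, not MyCat's PartitionByFileMap implementation; the mapping and default node simply mirror the configuration above.

# Minimal sketch of enumeration routing driven by partition-hash-int.txt.
ENUM_MAP = {"bj": 0, "sh": 1}  # value -> shard index (0 = sh1, 1 = sh2)
DEFAULT_NODE = 0               # matches <property name="defaultNode">0</property>

def route_by_enum(value: str) -> int:
    """Return the shard index for an enumerated value, falling back to the default node."""
    return ENUM_MAP.get(value, DEFAULT_NODE)

if __name__ == "__main__":
    for name in ("bj", "sh", "tj"):
        print(name, "-> sh%d" % (route_by_enum(name) + 1))
    # bj -> sh1, sh -> sh2, tj -> sh1 (default node)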

test

#Prepare test environment
#db02
[root@db02 ~]#mysql -S /data/3307/mysql.sock -e "use taobao;create table t5 (id int not null primary key auto_increment,name varchar(20) not null);"

[root@db02 ~]#mysql -S /data/3308/mysql.sock -e "use taobao;create table t5 (id int not null primary key auto_increment,name varchar(20) not null);"


#Restart mycat 
#node3
[root@node3 ~]#mycat restart 
[root@node3 ~]#mysql -uroot -p123456 -h127.0.0.1 -P8066
mysql>use TESTDB
mysql>insert into t5(id,name) values(1,'bj');
mysql>insert into t5(id,name) values(2,'sh');
mysql>insert into t5(id,name) values(3,'bj');
mysql>insert into t5(id,name) values(4,'sh');
mysql>insert into t5(id,name) values(5,'tj');
mysql> select * from t5;
+----+------+
| id | name |
+----+------+
|  1 | bj   |
|  3 | bj   |
|  5 | tj   |
|  2 | sh   |
|  4 | sh   |
+----+------+
5 rows in set (0.10 sec)


#Log in to the backend node to query the data
#db02

[root@db02 ~]# mysql -S /data/3308/mysql.sock -e "use taobao;select * from t5;"
+----+------+
| id | name |
+----+------+
|  2 | sh   |
|  4 | sh   |
+----+------+
[root@db02 ~]# mysql -S /data/3307/mysql.sock -e "use taobao;select * from t5;"
+----+------+
| id | name |
+----+------+
|  1 | bj   |
|  3 | bj   |
|  5 | tj   |
+----+------+

#tj is placed on the default shard sh1

4. Mycat global table

 
#Tables a, b, c and d all need to join against table t, for example:
select t1.name, t.x from t1 join t
select t2.name, t.x from t2 join t
select t3.name, t.x from t3 join t

Usage scenario:
If some of your business data resembles a data dictionary, such as configuration tables,
common business parameters, or tables whose data rarely changes and never grows very large,
and most business scenarios need to join against such a table, then it is a good fit for a Mycat global table.
A global table is not split; a full copy of its data is kept on every shard. When a business table is joined with a
global table, Mycat prefers the copy on the same shard, which avoids cross-database joins. On insert, mycat broadcasts
the statement to every shard that holds the global table; on read, it picks one node at random to serve the query.
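
The behavior can be pictured roughly as "broadcast on write, any shard on read". The sketch below only illustrates that idea; the shard list and function names are made up, and the print calls stand in for actually sending SQL to the back-end nodes.

import random

# Rough sketch of global-table behavior: writes fan out to every shard,
# reads are served from a single shard chosen at random.
SHARDS = ["sh1", "sh2"]  # data nodes that hold a full copy of the global table

def write_global(sql: str) -> None:
    """Broadcast a DML statement to every shard that stores the global table."""
    for shard in SHARDS:
        print(f"executing on {shard}: {sql}")

def read_global(sql: str) -> str:
    """Serve a read from one randomly chosen shard, since every copy is identical."""
    shard = random.choice(SHARDS)
    print(f"reading from {shard}: {sql}")
    return shard

if __name__ == "__main__":
    write_global("insert into t_area(id,name) values(1,'a')")
    read_global("select * from t_area")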

mycat configuration file

[root@node3 conf]# vim /app/mycat/conf/schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
        <table name="t_area" primaryKey="id"  type="global" dataNode="sh1,sh2" /> 
</schema>
    <dataNode name="sh1" dataHost="ywx1" database= "taobao" />
    <dataNode name="sh2" dataHost="ywx2" database= "taobao" />
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
    <dataHost name="ywx2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3308" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3308" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3310" user="root" password="123" />
    </writeHost>
    </dataHost>
</mycat:schema>

# <table name="t_area" primaryKey="id"  type="global" dataNode="sh1,sh2" />
#The t_area table, whose primary key column is id, is defined as a global table on both shards sh1 and sh2

test

#Backend data preparation
#db02
[root@db02 ~]# mysql -S /data/3307/mysql.sock -e "use taobao;create table t_area (id int not null primary key auto_increment,name varchar(20) not null);"
[root@db02 ~]# mysql -S /data/3308/mysql.sock -e "use taobao;create table t_area (id int not null primary key auto_increment,name varchar(20) not null);"




#Restart mycat 
#node3
[root@node3 conf]#mycat restart 

Test: 
[root@node3 conf]#mysql -uroot -p123456 -h127.0.0.1 -P8066

mysql> use TESTDB
mysql> insert into t_area(id,name) values(1,'a');
mysql> insert into t_area(id,name) values(2,'b');
mysql> insert into t_area(id,name) values(3,'c');
mysql> insert into t_area(id,name) values(4,'d');

5. E-R sharding

 
#Table A needs to join table B
#To avoid cross-shard joins, the E-R sharding pattern can be used
A join B
  on A.xx=B.yy
join C
  on A.id=C.id

<table name="A" dataNode="sh1,sh2" rule="mod-long"> 
       <childTable name="B" joinKey="yy" parentKey="xx" /> 
       <childTable name="C" joinKey="id" parentKey="id" />
</table> 
#<table name="A" dataNode="sh1,sh2" rule="mod-long"> defines the parent table A
#<childTable name="B" joinKey="yy" parentKey="xx" /> child table B, joined on A.xx=B.yy
#<childTable name="C" joinKey="id" parentKey="id" /> child table C, joined on A.id=C.id

mycat configuration

 
[root@node3 conf]# vim /app/mycat/conf/schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="sh1">
        <table name="A" dataNode="sh1,sh2" rule="mod-long"> 
           <childTable name="B" joinKey="yy" parentKey="xx" /> 
           <childTable name="C" joinKey="id" parentKey="id" />
</schema>
    <dataNode name="sh1" dataHost="ywx1" database= "taobao" />
    <dataNode name="sh2" dataHost="ywx2" database= "taobao" />
    <dataHost name="ywx1" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3307" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3309" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3307" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3309" user="root" password="123" />
    </writeHost>
    </dataHost>
    <dataHost name="ywx2" maxCon="1000" minCon="10" balance="1"  writeType="0" dbType="mysql"  dbDriver="native" switchType="1">
        <heartbeat>select user()</heartbeat>
    <writeHost host="db1" url="192.168.32.201:3308" user="root" password="123">
            <readHost host="db2" url="192.168.32.201:3310" user="root" password="123" />
    </writeHost>
    <writeHost host="db3" url="192.168.32.202:3308" user="root" password="123">
            <readHost host="db4" url="192.168.32.202:3310" user="root" password="123" />
    </writeHost>
    </dataHost>
</mycat:schema>

 
