1. Preparation
① A running ZooKeeper cluster is required
② Pull the ActiveMQ image
docker pull webcenter/activemq
③ Port plan
host | ZooKeeper cluster port | AMQ cluster bind port | AMQ message TCP port | Management console port
192.168.16.106 | 2181 | tcp://0.0.0.0:63631 | 61616 | 8161
192.168.16.106 | 2182 | tcp://0.0.0.0:63632 | 61617 | 8162
192.168.16.106 | 2183 | tcp://0.0.0.0:63633 | 61618 | 8163
2. Start three ActiveMQ containers with Docker
docker run -d --name activemq_01 -p 61616:61616 -p 8161:8161 webcenter/activemq
docker run -d --name activemq_02 -p 61617:61616 -p 8162:8161 webcenter/activemq
docker run -d --name activemq_03 -p 61618:61616 -p 8163:8161 webcenter/activemq
3. Hostname mapping (if you skip this step, change the hostname in each broker's configuration file to the host's IP instead)
vim /etc/hosts
#Add a line of the form: 192.168.16.106 hostname
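With the host name cyfuse that appears in the persistence configuration below, the entry would look like this:

```
192.168.16.106  cyfuse
```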
4. ActiveMQ cluster configuration
① brokerName must be identical in all three configuration files
#Enter the container instance
docker exec -it activemq_01 /bin/bash
#Modify the configuration file
cd conf
vim activemq.xml
#Set brokerName to the same value in all three activemq.xml files so the brokers form a single cluster
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq_cluster" dataDirectory="${activemq.data}">
② Persistence configuration
Find the persistenceAdapter node, comment out the kahaDB entry, and add the contents below. Only the bind port differs between the three brokers; everything else is identical.
directory: directory where the replicated store data is kept
replicas: number of nodes in the ActiveMQ cluster
bind: port used for replication traffic between cluster members
zkAddress: address list of the ZooKeeper cluster
hostname: host name this node advertises to the other members
sync: sync policy; local_disk means flush to the local disk
zkPath: path under which ActiveMQ registers its nodes in ZooKeeper
#activemq_01
<persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:63631"
        zkAddress="192.168.16.106:2181,192.168.16.106:2182,192.168.16.106:2183"
        hostname="cyfuse"
        sync="local_disk"
        zkPath="/activemq/leveldb-stores" />
</persistenceAdapter>

#activemq_02
<persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:63632"
        zkAddress="192.168.16.106:2181,192.168.16.106:2182,192.168.16.106:2183"
        hostname="cyfuse"
        sync="local_disk"
        zkPath="/activemq/leveldb-stores" />
</persistenceAdapter>

#activemq_03
<persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:63633"
        zkAddress="192.168.16.106:2181,192.168.16.106:2182,192.168.16.106:2183"
        hostname="cyfuse"
        sync="local_disk"
        zkPath="/activemq/leveldb-stores" />
</persistenceAdapter>
③ Modify each node's openwire message port (the container port that was mapped externally when the container was created)
#activemq_01
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
#activemq_02
<transportConnector name="openwire" uri="tcp://0.0.0.0:61617?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
#activemq_03
<transportConnector name="openwire" uri="tcp://0.0.0.0:61618?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
5. Restart the ActiveMQ cluster
① Start the ZooKeeper cluster first
docker start zookeeper1 zookeeper2 zookeeper3
② Then restart the ActiveMQ cluster
docker restart activemq_01 activemq_02 activemq_03
6. Viewing the ZooKeeper nodes
Three ActiveMQ nodes are registered under the configured zkPath
Looking at each node's content, one broker has been elected Master and the other two are Slaves:
{"id":"localhost","container":null,"address":"tcp://cyfuse:63631","position":-1,"weight":1,"elected":"0000000000"}
{"id":"localhost","container":null,"address":null,"position":-1,"weight":1,"elected":null}
{"id":"localhost","container":null,"address":null,"position":-1,"weight":1,"elected":null}
7. Cluster availability test
An ActiveMQ client can only connect to the Master broker; the Slave brokers do not accept client connections. The client should therefore connect with the failover transport, so that it can fail over to the new Master when the current one goes down.
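As a minimal sketch of building such a failover URI, the snippet below assembles the broker list and appends two transport options. randomize and timeout are standard ActiveMQ failover transport options (the values here are illustrative, not taken from this setup):

```java
public class FailoverUrl {
    public static void main(String[] args) {
        String[] brokers = {
                "tcp://192.168.16.106:61616",
                "tcp://192.168.16.106:61617",
                "tcp://192.168.16.106:61618"
        };
        // The failover transport tries the listed brokers until it reaches the Master.
        // randomize=false keeps the list order; timeout bounds sends while reconnecting.
        String url = "failover:(" + String.join(",", brokers)
                + ")?randomize=false&timeout=3000";
        System.out.println(url);
    }
}
```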
When one ActiveMQ node or one ZooKeeper node goes down, the ActiveMQ service keeps running normally. But if only one ActiveMQ node is left, ActiveMQ cannot run, because a Master can no longer be elected. Likewise, if only one ZooKeeper node is alive, ActiveMQ cannot provide service no matter how many ActiveMQ nodes survive. (The high availability of the ActiveMQ cluster depends on the high availability of the ZooKeeper cluster.)
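The majority rule behind this can be sketched in plain Java (an illustration, not ActiveMQ or ZooKeeper API): with replicas="3", electing a Master requires at least 3/2 + 1 = 2 live members, which is why a single surviving node cannot become Master.

```java
public class QuorumCheck {
    // ZooKeeper-style leader election needs a strict majority: n/2 + 1 live members.
    static boolean hasQuorum(int total, int alive) {
        return alive >= total / 2 + 1;
    }

    public static void main(String[] args) {
        int replicas = 3; // matches replicas="3" in the persistenceAdapter above
        for (int alive = replicas; alive >= 1; alive--) {
            System.out.println(alive + " of " + replicas
                    + " alive -> Master electable: " + hasQuorum(replicas, alive));
        }
    }
}
```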
Producer code
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsProduce {
    public static final String ACTIVEMQ_URL = "failover:(tcp://192.168.16.106:61616,tcp://192.168.16.106:61617,tcp://192.168.16.106:61618)";
    public static final String QUEUE_NAME = "queue_cluster";

    public static void main(String[] args) throws JMSException {
        // 1. Create a connection factory for the given URL, using the default user name and password
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(ACTIVEMQ_URL);
        // 2. Obtain a connection and start it
        Connection connection = factory.createConnection();
        connection.start();
        // 3. Create a session; the two parameters are: ① transacted ② acknowledgement mode
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // 4. Create the destination (specifically a queue or a topic)
        Queue queue = session.createQueue(QUEUE_NAME);
        // 5. Create the message producer
        MessageProducer messageProducer = session.createProducer(queue);
        // Make the messages this producer sends to the queue persistent
        messageProducer.setDeliveryMode(DeliveryMode.PERSISTENT);
        // 6. Produce three messages and send them to MQ
        for (int i = 0; i < 3; i++) {
            // 7. Create a text message
            TextMessage textMessage = session.createTextMessage("msg---" + i);
            // 8. Send it to MQ through the producer
            messageProducer.send(textMessage);
        }
        // 9. Release resources
        messageProducer.close();
        session.close();
        connection.close();
        System.out.println("---Publish message to mq---");
    }
}
Consumer code
import java.io.IOException;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsConsumer {
    public static final String ACTIVEMQ_URL = "failover:(tcp://192.168.16.106:61616,tcp://192.168.16.106:61617,tcp://192.168.16.106:61618)";
    public static final String QUEUE_NAME = "queue_cluster";

    public static void main(String[] args) throws JMSException, IOException {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(ACTIVEMQ_URL);
        Connection connection = factory.createConnection();
        connection.start();
        // 3. Create a session; the two parameters are: ① transacted ② acknowledgement mode
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // 4. Create the destination (specifically a queue or a topic)
        Queue queue = session.createQueue(QUEUE_NAME);
        // 5. Create the consumer
        MessageConsumer messageConsumer = session.createConsumer(queue);
        messageConsumer.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                if (message instanceof TextMessage) {
                    TextMessage textMessage = (TextMessage) message;
                    try {
                        System.out.println("Received text message: " + textMessage.getText());
                    } catch (JMSException e) {
                        e.printStackTrace();
                    }
                }
            }
        });
        System.in.read(); // Block here so the program keeps running and the listener can process messages
        messageConsumer.close();
        session.close();
        connection.close();
    }
}
Start the producer; the console output below shows that the connection succeeded:
INFO | Successfully connected to tcp://192.168.16.106:61616
---Publish message to mq---
Stop the Master of the ActiveMQ cluster and check whether a new Master is elected:
docker stop activemq_01
Since this clustering approach has been deprecated upstream (the replicated LevelDB store is no longer maintained), a new Master can still be elected, but the producer and consumer can no longer connect; we can only see in ZooKeeper which node is the Master.