Deploying a front-end project with nginx while load balancing and forwarding requests to multiple back-end services

(If you still don't understand it after reading, you can hit me.)

Foreword: The configuration itself is actually very simple, but it took me a while to work out the idea and I went down a small detour. I'm recording it here so you don't step into the same pit; this article is for you.

Scenario: One SLB sits in front for load balancing and forwarding, and two ECS instances serve as deployment machines, each hosting both projects: machine A deploys the front end as well as the back end, and machine B does the same. The SLB is used only for front-end forwarding. (Note: the SLB could add other ports for additional forwarding configurations, but we don't use that here.)

Requirement: When a request hits machine A, its backend calls should be load-balanced to either the A backend or the B backend; likewise, when a request hits machine B, its backend calls should go to the backend service of machine A or machine B.



I drew a rough request flow chart for the project. You can ignore the SLB in the figure; here we focus on the nginx configuration.

Note: We need to configure a specific marker on the front-end project's API address, that is, prefix requests to the backend interface with /api, or /testApi, or anything else. Anything works as long as it is distinctive; ideally follow a clear naming convention.

The ip I configured is: , and the front end is packaged and deployed to machine A.

Change the ip to: , then package the front end again and deploy it to machine B.

This way, every backend request can be load-balanced and forwarded by either machine A or machine B.
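The /api marker described above is easiest to maintain if it is added in exactly one place in the front-end code. A minimal sketch (the helper name buildUrl and the constant API_PREFIX are my own illustration, not from the original project):

```typescript
// Hypothetical helper: every backend call builds its URL here,
// so the /api marker is added in exactly one place.
const API_PREFIX = "/api";

function buildUrl(endpoint: string): string {
  // "/user/list" -> "/api/user/list"; nginx matches `location ^~ /api`
  // and strips the prefix again before proxying to the backend.
  return `${API_PREFIX}/${endpoint.replace(/^\/+/, "")}`;
}

// Usage: fetch(buildUrl("/user/list")) sends GET /api/user/list,
// which nginx forwards to the upstream as GET /user/list.
console.log(buildUrl("/user/list")); // -> /api/user/list
```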







Next, configure nginx; the configuration is identical on both machines.

server {
    listen       80;
    server_name  localhost;

    # This serves the front-end project
    location / {
        root   /etc/nginx/html/dist;
        index  index.html index.htm;
        try_files $uri /index.html;  # prevents a 404 when the page is refreshed
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Here is the load-balancing configuration
    location ^~ /api {
        # Optional request headers; you can inspect them via the response
        # headers in F12. I'll leave them commented out for now.
        # add_header Access-Control-Allow-Origin *;
        # add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
        # add_header Access-Control-Allow-Headers 'Cache-Control,Content-Type,Authorization,id_token';
        if ($request_method = 'OPTIONS') {
            return 204;
        }

        # Strip the /api marker: it is only the identifier we added
        # manually in the front-end project, the backend doesn't know it
        rewrite ^/api/(.*) /$1 break;

        # When the front end calls an interface whose URL contains /api,
        # it is forwarded to the backend ("loadbalance" is the name after
        # the upstream keyword below)
        proxy_pass http://loadbalance;
    }
}

# Add the two forwarding addresses here
upstream loadbalance {
    server  weight=1;  # weight sets the proportion of requests nginx distributes here
    server  weight=1;
}
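The effect of the rewrite directive above can be sketched outside nginx. This snippet applies the same regex, ^/api/(.*), to show which URI the upstream backend actually receives (an illustration only; nginx does this internally):

```typescript
// Mimics the effect of `rewrite ^/api/(.*) /$1 break;` so you can
// see the URI the upstream backend receives after nginx strips the marker.
function stripApiPrefix(uri: string): string {
  const match = uri.match(/^\/api\/(.*)$/);
  return match ? `/${match[1]}` : uri; // non-matching URIs pass through unchanged
}

console.log(stripApiPrefix("/api/user/list")); // -> /user/list
console.log(stripApiPrefix("/health"));        // -> /health (no marker, unchanged)
```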


Summary:
1. At the point where the front end calls the backend interface, add a specific marker (e.g. /api) so nginx can recognize which requests to forward.
2. In nginx, rewrite that marker away before proxying, since the backend knows nothing about it.

Ending: Note that you must configure this marker manually in the front-end project and point requests at the port nginx listens on, letting nginx act as the load-balancing forwarder! At first I didn't pay attention and pointed the front end directly at the backend's port 9000, digging a hole for myself.


Tags: Nginx

Posted by monicka87 on Sat, 14 May 2022 01:12:48 +0300