Performance comparison: REST vs gRPC vs asynchronous communication

The mode of communication between microservices has a significant impact on the overall quality of the software. The communication method affects quality attributes such as performance and efficiency, as well as modifiability, scalability and maintainability. It is therefore necessary to weigh the advantages and disadvantages of the different methods in order to select the right communication style for a specific use case. This article compares REST, gRPC and asynchronous communication via a message broker (RabbitMQ) to understand their impact on software performance in a microservice network. Some of the most important attributes of communication (which in turn affect overall performance) are:

  • Data transmission format
  • Connection processing
  • Message serialization
  • Caching
  • Load balancing

Data transmission format

While gRPC and asynchronous communication via the AMQP protocol (Advanced Message Queuing Protocol) transmit data in a binary format, a REST API usually transmits data as text. Binary protocols are considerably more efficient than text-based protocols [1,2]. Communication via gRPC and AMQP can therefore be expected to produce a lower network load, while a higher network load can be expected when using a REST API.

Connection processing

A REST API is usually based on the HTTP/1.1 protocol, while gRPC depends on HTTP/2. HTTP/1.1, HTTP/2 and AMQP all use TCP at the transport layer to provide a reliable connection. Establishing such a connection requires a handshake between client and server, and this cost applies to all three communication methods. With AMQP or HTTP/2, however, the connection only needs to be established once, because both protocols can multiplex requests: existing connections are reused for subsequent requests in asynchronous and gRPC communication. A REST API over HTTP/1.1, on the other hand, typically establishes a new connection for each request to the remote server.

[Figure: the communication necessary to establish a TCP connection]

Message serialization

REST and asynchronous communication typically use JSON to serialize messages before they are transmitted over the network. gRPC, on the other hand, transfers data in the Protocol Buffers format by default. Protocol Buffers improve communication speed through more efficient serialization and deserialization of message content [1]. Choosing the right serialization format, however, remains the engineer's decision: Protocol Buffers have clear performance advantages, but when the communication between microservices needs to be debugged, a human-readable JSON format may be the better choice.

Caching

An effective caching strategy can significantly reduce server load and the computing resources required. Due to its architecture, REST is the only one of the three approaches that allows effective caching. REST API responses can be cached and replicated by intermediate servers and caching proxies such as Varnish. This reduces the load on the REST service and makes it possible to handle large amounts of HTTP traffic [1]. However, this is only possible if additional services (caching proxies) are deployed on the infrastructure or a third-party integration is used. Neither the official gRPC documentation nor the RabbitMQ documentation describes any form of caching.

Load balancing

Besides caching responses, there are other techniques that can improve service speed. Load balancers such as mod_proxy can distribute HTTP traffic across services in an efficient and transparent way [1]. This enables horizontal scaling of services that use REST APIs. Kubernetes, as a container orchestration solution, can load-balance HTTP/1.1 traffic without any adjustment; for gRPC, on the other hand, an additional service such as linkerd has to be provided in the network [3]. Asynchronous communication supports load balancing without further help: the message broker itself acts as a load balancer, because it distributes requests across multiple instances of the same service. Message brokers are optimized for this purpose and designed with scalability in mind [1].
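The broker-side load balancing follows the competing-consumers pattern: each service instance takes the next message as soon as it is free, and every message is delivered to exactly one instance. The following sketch simulates this with a plain Go channel standing in for the RabbitMQ queue; it is a model of the pattern, not broker code:

```go
package main

import (
	"fmt"
	"sync"
)

// consume drains the queue with several competing "service instances".
// Each message is received by exactly one instance, which is how a message
// broker balances load across instances of the same service.
func consume(queue chan int, instances []string) map[string]int {
	var mu sync.Mutex
	handled := make(map[string]int, len(instances))
	var wg sync.WaitGroup
	for _, name := range instances {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			for range queue {
				mu.Lock()
				handled[name]++
				mu.Unlock()
			}
		}(name)
	}
	wg.Wait()
	return handled
}

func main() {
	queue := make(chan int, 6)
	for i := 1; i <= 6; i++ {
		queue <- i
	}
	close(queue)

	handled := consume(queue, []string{"order-1", "order-2", "order-3"})
	total := 0
	for _, n := range handled {
		total += n
	}
	fmt.Println("messages handled:", total)
	// prints "messages handled: 6" -- every message delivered exactly once
}
```

Which instance handles which message is nondeterministic, but the total is not; the same exactly-once-per-message delivery is what RabbitMQ guarantees for a single queue with multiple consumers.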

Experiment

To evaluate the impact of the different communication methods on software quality characteristics, four microservices were developed that simulate the order process of an e-commerce platform.

The microservices are deployed on a self-hosted Kubernetes cluster consisting of three different servers. The servers are connected through a Gigabit (1000 Mbit/s) network and located in the same data center; the average latency between them is 0.15 milliseconds. In every experiment run, each service is deployed on the same server as before, which is enforced through pod affinity. All microservices are implemented in the Go programming language. The actual business logic of the individual services, such as communication with a database, is deliberately not implemented, so that the results are not influenced by anything other than the chosen communication method. The collected results therefore cannot be taken as representative of a real microservice architecture of this kind, but they do make the communication methods in the experiment comparable. Instead, the business logic is simulated by delaying the program flow by 100 milliseconds per service, so that across the four services the total simulated delay per order is 400 milliseconds. The open source tool k6 is used for load testing.

Implementation

The net/http module included in the Go standard library is used to provide the REST interface. The encoding/json module, also part of the standard library, is used to serialize and deserialize requests. All requests use the HTTP POST method. "Talk is cheap. Show me the code."

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"

    "github.com/google/uuid"
    "gitlab.com/timbastin/bachelorarbeit/common"
    "gitlab.com/timbastin/bachelorarbeit/config"
)

type restServer struct {
    httpClient http.Client
}

func (server *restServer) handler(res http.ResponseWriter, req *http.Request) {
    // only allow POST requests.
    if req.Method != http.MethodPost {
        body, _ := json.Marshal(map[string]string{
            "error": "invalid request method",
        })
        http.Error(res, string(body), http.StatusMethodNotAllowed)
        return
    }

    reqId := uuid.NewString()

    // STEP 1: receive the new order.
    log.Println("(REST) received new order", reqId)

    var submitOrderDTO common.SubmitOrderRequestDTO

    b, _ := ioutil.ReadAll(req.Body)

    if err := json.Unmarshal(b, &submitOrderDTO); err != nil {
        log.Fatalf(err.Error())
    }

    checkIfInStock(1)

    invoiceRequest, _ := http.NewRequest(http.MethodPost,
        fmt.Sprintf("%s/invoices",
            config.MustGet("customerservice.rest.address").(string)),
        bytes.NewReader(b))
    // STEP 2: trigger invoice creation in the customer service.
    r, err := server.httpClient.Do(invoiceRequest)
    if err != nil {
        panic(err)
    }
    // close the response body so the connection can be reused.
    r.Body.Close()

    shippingRequest, _ := http.NewRequest(http.MethodPost,
        fmt.Sprintf("%s/shipping-jobs",
            config.MustGet("shippingservice.rest.address").(string)),
        bytes.NewReader(b))

    // STEP 3: trigger the shipping job in the shipping service.
    r, err = server.httpClient.Do(shippingRequest)
    if err != nil {
        panic(err)
    }
    // close the response body so the connection can be reused.
    r.Body.Close()

    // STEP 4: decrement the product stock.
    handleProductDecrement(1)
    // STEP 5: confirm the order.
    res.WriteHeader(201)
    res.Write(common.NewJsonResponse(map[string]string{
        "state": "success",
    }))
}

func startRestServer() {
    server := restServer{
        httpClient: http.Client{},
    }
    http.HandleFunc("/orders", server.handler)
    log.Println("started rest server")
    // ListenAndServe blocks; a returned error is fatal.
    log.Fatal(http.ListenAndServe(config.MustGet("orderservice.rest.port").(string), nil))
}

The RabbitMQ message broker is used for asynchronous communication and is deployed on the same Kubernetes cluster. The communication between the message broker and the microservices uses the github.com/streadway/amqp library, which is recommended by the official RabbitMQ documentation.

package main

import (
    "encoding/json"
    "log"

    "github.com/streadway/amqp"
    "gitlab.com/timbastin/bachelorarbeit/common"
    "gitlab.com/timbastin/bachelorarbeit/config"
    "gitlab.com/timbastin/bachelorarbeit/utils"
)

func handleMsg(message amqp.Delivery, ch *amqp.Channel) {
    log.Println("(AMQP) received new order")
    var submitOrderRequest common.SubmitOrderRequestDTO
    err := json.Unmarshal(message.Body, &submitOrderRequest)
    utils.FailOnError(err, "could not unmarshal message")

    checkIfInStock(1)

    handleProductDecrement(1)
    // forward the order to the billing exchange.
    err = ch.Publish(config.MustGet("amqp.billingRequestExchangeName").(string),
        "", false, false, amqp.Publishing{
            ContentType: "application/json",
            Body:        message.Body,
        })
    utils.FailOnError(err, "could not publish billing request")
}

func getNewOrderChannel(conn *amqp.Connection) (*amqp.Channel, string) {
    ch, err := conn.Channel()
    utils.FailOnError(err, "could not create channel")

    err = ch.ExchangeDeclare(config.MustGet("amqp.newOrderExchangeName").(string),
        "fanout", false, false, false, false, nil)
    utils.FailOnError(err, "could not declare exchange")

    queue, err := ch.QueueDeclare(config.MustGet("orderservice.amqp.consumerName").(string),
        false, false, false, false, nil)
    utils.FailOnError(err, "could not create queue")

    err = ch.QueueBind(queue.Name, "", config.MustGet("amqp.newOrderExchangeName").(string),
        false, nil)
    utils.FailOnError(err, "could not bind queue")
    return ch, queue.Name
}

func startAmqpServer() {
    conn := common.NewAmqpConnection(config.MustGet("amqp.host").(string))
    defer conn.Close()

    orderChannel, queueName := getNewOrderChannel(conn)

    msgs, err := orderChannel.Consume(
        queueName,
        config.MustGet("orderservice.amqp.consumerName").(string),
        true,
        false,
        false,
        false,
        nil,
    )

    utils.FailOnError(err, "could not consume")

    forever := make(chan bool)
    log.Println("started amqp server:", queueName)
    go func() {
        for d := range msgs {
            go handleMsg(d, orderChannel)
        }
    }()
    <-forever
}

The gRPC clients and servers use the google.golang.org/grpc library recommended by the gRPC documentation. Data serialization is done with Protocol Buffers.

package main

import (
    "log"
    "net"

    "context"

    "gitlab.com/timbastin/bachelorarbeit/common"
    "gitlab.com/timbastin/bachelorarbeit/config"
    "gitlab.com/timbastin/bachelorarbeit/pb"
    "gitlab.com/timbastin/bachelorarbeit/utils"
    "google.golang.org/grpc"
)

type OrderServiceServer struct {
    CustomerService pb.CustomerServiceClient
    ShippingService pb.ShippingServiceClient
    pb.UnimplementedOrderServiceServer
}

func (s *OrderServiceServer) SubmitOrder(ctx context.Context, 
    request *pb.SubmitOrderRequest) (*pb.SuccessReply, error) {
    log.Println("(GRPC) received new order")
    if s.CustomerService == nil {
        s.CustomerService, _ = common.NewCustomerServiceClient()
    }
    if s.ShippingService == nil {
        s.ShippingService, _ = common.NewShippingServiceClient()
    }

    checkIfInStock(1)

    // STEP 2: create and process the billing in the customer service.
    _, err := s.CustomerService.CreateAndProcessBilling(ctx, &pb.BillingRequest{
        BillingInformation: request.BillingInformation,
        Products:           request.Products,
    })

    utils.FailOnError(err, "could not process billing")

    // trigger the shipping job.
    _, err = s.ShippingService.CreateShippingJob(ctx, &pb.ShippingJob{
        BillingInformation: request.BillingInformation,
        Products:           request.Products,
    })

    utils.FailOnError(err, "could not create shipping job")

    handleProductDecrement(1)

    return &pb.SuccessReply{Success: true}, nil
}

func startGrpcServer() {
    listen, err := net.Listen("tcp", config.MustGet("orderservice.grpc.port").(string))
    if err != nil {
        log.Fatalf("could not listen: %v", err)
    }

    grpcServer := grpc.NewServer()

    orderService := OrderServiceServer{}
    // the service clients are created lazily on the first request (see SubmitOrder).
    pb.RegisterOrderServiceServer(grpcServer, &orderService)

    // start the server
    log.Println("started grpc server")
    if err := grpcServer.Serve(listen); err != nil {
        log.Fatalf("could not start grpc server: %v", err)
    }
}

Collecting data

The number of successfully and unsuccessfully processed orders is recorded, together with the time elapsed until each order is confirmed. If the time until confirmation exceeds 900 milliseconds, the order process is counted as a failure. This threshold is chosen because, especially with asynchronous communication, an order could otherwise wait indefinitely in the experiment. Each test reports the number of failed and successful orders. A total of 12 different measurements were made for each architecture, varying both the number of simultaneous requests and the amount of data transmitted. Each communication method is first tested under low load, then under medium load, and finally under high load: the low-load simulation sends 10 simultaneous requests to the system, the medium-load simulation 100, and the high-load simulation 300. After these six test runs, the amount of data to be transmitted is increased, by ordering multiple products, to assess the efficiency of each interface's serialization method.

Results

The gRPC architecture is the best-performing communication method studied in the experiment. Under low load, it accepted 3.41 times more orders than the system using the REST interface. In addition, its average response time is 9.71 ms lower than that of the REST API and 9.37 ms lower than that of the AMQP API.

Posted by lady_bug on Sat, 07 May 2022 04:42:06 +0300