Object detection and image classification in just a few lines of code: building an enterprise machine learning microservice with Spring Boot and DJL

There is a clear trend in the field of AI applications: most machine learning applications are written in Python, while Spring Boot is an open source platform widely used in the field of microservices. To bridge the two, users often wrap existing C++ or Python applications behind an RPC API and call it from Java to run inference. However, this brings considerable maintenance cost and efficiency problems: the time spent on RPC communication alone can rival the time spent on inference itself, becoming the bottleneck for overall application speed. AWS has now launched an open source Java library for deep learning, the Deep Java Library (DJL), which simplifies this cumbersome development process and brings developers a better, more stable solution.

Preface

Many AWS customers, whether start-ups or large companies, are gradually adding machine learning (ML) and deep learning (DL) capabilities to their existing products. Machine learning is already widely applied in commercial and industrial settings, such as object detection in images and videos, sentiment analysis of documents, and fraud detection in big data. Yet although machine learning tooling is mature in other languages (such as Python), the learning and integration cost for developers of existing Java products remains high. Imagine integrating a service written in another language into an existing Java stack: you would have to make substantial changes all the way from writing code, compiling, and testing to final deployment. To address this pain point, this article proposes a different approach: deploy machine learning directly inside existing Java services, without redeploying existing resources or reallocating personnel.

Spring Boot in production environments

Spring Boot is an open source platform widely used in the field of microservices. Its main feature is simplifying how distributed applications are built, deployed, and managed. However, users currently have only a few options for deploying ML applications with it. Take an inference application as an example: users can wrap an existing C++ or Python application behind an RPC API to run inference from another language. Although this solves the immediate deployment need, in long-term operation it creates significant maintenance cost and efficiency problems. In terms of RPC communication alone, the time spent communicating can rival the time spent on inference itself, making this approach the bottleneck for overall application speed.

To better address these pain points, AWS launched an open source Java library for deep learning, the Deep Java Library (DJL). DJL aims to simplify the expensive and cumbersome development process and bring better, more stable solutions to developers. This article starts from a basic Spring Boot application and uses DJL to integrate machine learning into a microservice. With only a few lines of code, you can implement object detection and image classification.

Configure Spring Boot Starter (SBS)

Spring Boot Starter is a one-stop dependency management mechanism for Spring libraries. It removes many of the manual steps needed to bring in a new library, such as copying and pasting sample code and editing configuration files. Please refer to the Spring Boot Starter official guide for more information. In this tutorial we will use the DJL Spring Boot Starter, an SBS with deep learning deployment support. On top of the existing architecture, DJL SBS adds auto-configuration: it lets users inject the required dependencies as beans with just a few lines of code, without worrying about how they are wired. If any of the following steps are unclear, you can refer to our example application.

Dependency management

The DJL library runs on a variety of operating system platforms and supports multiple deep learning engines, such as TensorFlow 2.0, PyTorch, and MXNet. DJL has a built-in set of automatic selection mechanisms, so users do not need to worry about which operating system they run on. However, DJL still requires users to select one or more deep learning engines. Taking MXNet as an example, a user can use the following configuration (pom.xml):

<parent>
  <artifactId>spring-boot-starter-parent</artifactId>
  <groupId>org.springframework.boot</groupId>
  <version>2.2.6.RELEASE</version>
</parent>

<properties>
  <java.version>11</java.version> <!-- 11 is the minimum supported Java version -->
  <jna.version>5.3.0</jna.version> <!-- Override the JNA version required by DJL -->
</properties> 

<dependency>
  <groupId>ai.djl.spring</groupId>
  <artifactId>djl-spring-boot-starter-mxnet-linux-x86_64</artifactId>
  <version>${djl.starter.version}</version> <!-- e.g. 0.2 -->
</dependency>

Users can choose the platform they need to run on. In the <dependency> above, linux-x86_64 (Linux) can be replaced with win-x86_64 (Windows) or osx-x86_64 (macOS).
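For example, on Windows the same dependency follows the naming pattern described above:

<dependency>
  <groupId>ai.djl.spring</groupId>
  <artifactId>djl-spring-boot-starter-mxnet-win-x86_64</artifactId>
  <version>${djl.starter.version}</version> <!-- e.g. 0.2 -->
</dependency>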

We also provide a fully automatic ("auto") dependency that detects the corresponding system at run time:
<dependency>
  <groupId>ai.djl.spring</groupId>
  <artifactId>djl-spring-boot-starter-mxnet-auto</artifactId>
  <version>${djl.starter.version}</version> <!-- e.g. 0.2 -->
</dependency>

If you need to use PyTorch, you can make the following changes:

<dependency>
  <groupId>ai.djl.spring</groupId>
  <artifactId>djl-spring-boot-starter-pytorch-auto</artifactId>
  <version>${djl.starter.version}</version> <!-- e.g. 0.2 and above -->
</dependency>

The Gradle configuration is very similar; only the following lines are required:

plugins {
  ...
  id("org.springframework.boot")
}
repositories {
  mavenCentral() // The published package is in maven central
}

dependencies {
  implementation("ai.djl.spring:djl-spring-boot-starter-mxnet-auto:0.2")
}

Note that since Spring Boot itself uses an older version of the JNA library, we need to manually set jna.version=5.3.0 in the Gradle properties.
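A minimal way to do this, assuming the standard Gradle project layout, is to add the override to the project's gradle.properties file:

# gradle.properties
jna.version=5.3.0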

Use Spring auto-configuration

Next, we enable Spring's auto-configuration by adding the following dependency:

<dependency>
  <groupId>ai.djl.spring</groupId>
  <artifactId>djl-spring-boot-starter-autoconfigure</artifactId>
  <version>${djl.starter.version}</version>
</dependency>

In Gradle:

dependencies {
  implementation("ai.djl.spring:djl-spring-boot-starter-autoconfigure:${djl.starter.version}")
}

After importing this dependency, Spring Boot will automatically configure the environment and locate the model. Developers only need to provide a standard Spring configuration file, such as application.yml or application.properties, and select one of the following application types:

  QUESTION_ANSWER(NLP.QUESTION_ANSWER),
  TEXT_CLASSIFICATION(NLP.TEXT_CLASSIFICATION),
  IMAGE_CLASSIFICATION(CV.IMAGE_CLASSIFICATION),
  OBJECT_DETECTION(CV.OBJECT_DETECTION),
  ACTION_RECOGNITION(CV.ACTION_RECOGNITION),
  INSTANCE_SEGMENTATION(CV.INSTANCE_SEGMENTATION),
  POSE_ESTIMATION(CV.POSE_ESTIMATION),
  SEMANTIC_SEGMENTATION(CV.SEMANTIC_SEGMENTATION);

For example, if you want to perform object detection, choose OBJECT_DETECTION. You can refer to the following YAML configuration:

djl:
    # Set the application type
    application-type: OBJECT_DETECTION
    # Set the input data format; some models support multiple formats
    input-class: java.awt.image.BufferedImage
    # Set the output data format
    output-class: ai.djl.modality.cv.output.DetectedObjects
    # Set a filter to narrow down which model to load
    model-filter:
      size: 512
      backbone: mobilenet1.0
    # Override the default input/output arguments
    arguments:
      threshold: 0.5 # Only keep predictions with confidence greater than or equal to 0.5
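The same mechanism works for the other application types. As a rough sketch, an image classification setup might look like the following; the output class and filter value here are illustrative assumptions, and the exact names available depend on the DJL version and model zoo you use:

djl:
    application-type: IMAGE_CLASSIFICATION
    input-class: java.awt.image.BufferedImage
    # Assumed output type for classification results
    output-class: ai.djl.modality.Classifications
    # Illustrative filter; adjust to a model available in your model zoo
    model-filter:
      backbone: resnet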

IDE support

We recommend using an IDE such as IntelliJ IDEA or Eclipse:

[Animated demo of IDE support: https://github.com/awslabs/djl-spring-boot-starter-demo/raw/master/docs/media/djl-start-ide-support-low-frame-30s.gif]

In IntelliJ IDEA you can use:
Ctrl+Space for auto-completion
Ctrl+J for quick documentation lookup

Run your app

Now let's try out the configuration above. From here on, only two steps are needed to complete model deployment and run inference; before that, the developer just needs a simple single-class Spring Boot application.

Step 1: inject a predictor for object detection

@Resource
private Supplier<Predictor<BufferedImage, DetectedObjects>> predictorProvider; // provided by the DJL starter's auto-configuration

Step 2: run object detection

// Obtain a fresh predictor; try-with-resources releases its native resources
try (var predictor = predictorProvider.get()) {
    // Run inference on an image bundled with the application
    var results = predictor.predict(ImageIO.read(this.getClass()
          .getResourceAsStream("/puppy-in-white-and-red-polka.jpg")));

    // Log every detected object
    for (var result : results.items()) {
        LOG.info("results: {}", result.toString());
    }
}
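For context, here is a minimal sketch of a single-class Spring Boot console application that wires these two steps together. The class and method names are illustrative, not the exact demo source; it assumes the object detection configuration shown earlier:

import java.awt.image.BufferedImage;
import java.util.function.Supplier;

import javax.annotation.Resource;
import javax.imageio.ImageIO;

import ai.djl.inference.Predictor;
import ai.djl.modality.cv.output.DetectedObjects;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ConsoleApplication implements ApplicationRunner {

    private static final Logger LOG = LoggerFactory.getLogger(ConsoleApplication.class);

    // Step 1: the starter's auto-configuration exposes the predictor supplier as a bean
    @Resource
    private Supplier<Predictor<BufferedImage, DetectedObjects>> predictorProvider;

    public static void main(String[] args) {
        SpringApplication.run(ConsoleApplication.class, args);
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // Step 2: obtain a predictor, run object detection, and release native resources
        try (var predictor = predictorProvider.get()) {
            var results = predictor.predict(ImageIO.read(this.getClass()
                    .getResourceAsStream("/puppy-in-white-and-red-polka.jpg")));
            for (var result : results.items()) {
                LOG.info("results: {}", result);
            }
        }
    }
}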

If you use our example, the following results will be displayed in the console:

a.d.s.e.console.ConsoleApplication: results: class: "dog", probability: 0.90820, bounds: {x=0.487, y=0.057, width=0.425, height=0.484}

Rapid reproduction

You can easily run the sample application with the following commands:

git clone git@github.com:awslabs/djl-spring-boot-starter.git
cd djl-spring-boot-starter/djl-spring-boot-console-sample 
../mvnw package 
../mvnw spring-boot:run

We also provide a more complex example that uses a variety of plugins to quickly implement a RESTful classifier microservice.
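To give a rough idea of what such a service can look like, here is a minimal sketch of a REST endpoint that classifies an uploaded image. The controller, path, and types are illustrative assumptions rather than the exact code of that example, and it assumes spring-boot-starter-web plus an IMAGE_CLASSIFICATION configuration:

import java.awt.image.BufferedImage;
import java.util.function.Supplier;

import javax.annotation.Resource;
import javax.imageio.ImageIO;

import ai.djl.inference.Predictor;
import ai.djl.modality.Classifications;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class ClassificationController {

    // Supplied by the DJL starter's auto-configuration (assumes an IMAGE_CLASSIFICATION setup)
    @Resource
    private Supplier<Predictor<BufferedImage, Classifications>> predictorProvider;

    @PostMapping("/classify")
    public String classify(@RequestParam("file") MultipartFile file) throws Exception {
        BufferedImage image = ImageIO.read(file.getInputStream());
        // Each request gets its own predictor; try-with-resources frees native memory
        try (Predictor<BufferedImage, Classifications> predictor = predictorProvider.get()) {
            return predictor.predict(image).toString();
        }
    }
}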

Understanding DJL


DJL is a deep learning framework tailored for Java developers, launched by AWS at the 2019 re:Invent conference, and it now powers millions of inference tasks at Amazon. Its main features can be summarized in three points:

DJL is not tied to a single backend engine: users can easily run model training and inference on Java with MXNet, PyTorch, TensorFlow, and fastText.
DJL's operator design closely mirrors NumPy: the user experience is essentially seamless for NumPy users, and switching engines does not change the results.
DJL has excellent memory management and efficiency: it ships with its own resource recovery mechanism, so even 100 hours of continuous inference will not leak memory (see the sketch below).
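As an illustration of that resource-management model, here is a minimal sketch, independent of the Spring example above: native memory held by NDArrays is scoped to an NDManager and released when the manager is closed.

import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;

public class MemoryDemo {
    public static void main(String[] args) {
        // Every NDArray created from this manager is freed when the manager closes
        try (NDManager manager = NDManager.newBaseManager()) {
            NDArray a = manager.create(new float[] {1f, 2f, 3f});
            NDArray b = a.mul(2);
            System.out.println(b);
        } // native memory for a and b is released here
    }
}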

DJL now runs on Mac, Linux, and Windows. It automatically detects the installed CUDA version and uses the corresponding CUDA package to run GPU tasks. For more information, see the following links:

https://djl.ai
https://github.com/awslabs/djl
You are also welcome to join the DJL Slack forum.
