Image classification and prediction in 20 lines of Python on a Serverless architecture

Author Jiang Yu

Preface

Image classification is a hot topic in artificial intelligence. Put simply, it is an image processing technique that distinguishes targets of different categories according to the different features reflected in the image data.

A computer quantitatively analyzes the image and assigns each pixel or region to one of several categories, replacing human visual interpretation.

Image classification comes up frequently in real production and daily life, and solutions are usually tailored to a specific field or need: for example, photographing a flower to recognize its species, or comparing faces to identify a person.

Usually, these recognition or classification tools collect data on the client and compute the result on the server; in other words, there is typically a dedicated API for image recognition. The major cloud vendors offer similar capabilities for a fee:

Alibaba Cloud's image recognition page and Huawei Cloud's image recognition page are two examples (screenshots omitted).

Using an interesting Python library, this article will quickly build an image classification feature on a cloud function and expose it externally through an API gateway, realizing an "image classification API" on a Serverless architecture.

First, let's introduce the required dependency library: ImageAI. Its official documentation describes it as follows:

ImageAI is a Python library designed to enable developers to build applications and systems with deep learning and computer vision capabilities using just a few simple lines of code.
Built on the principle of simplicity, ImageAI supports state-of-the-art machine learning algorithms for image prediction, custom image prediction, object detection, video detection, video object tracking, and image prediction training. ImageAI currently supports image prediction and training using four different machine learning algorithms trained on the ImageNet-1000 dataset. It also supports object detection, video detection, and object tracking using RetinaNet trained on the COCO dataset. Eventually, ImageAI will provide broader and more specialized computer vision support, including but not limited to image recognition in special environments and special fields.

In other words, this library can handle basic image recognition and video object extraction for us. It ships with several datasets and models, and we can also train further and customize it to our own needs. The official documentation gives a simple demo:

# -*- coding: utf-8 -*-
from imageai.Prediction import ImagePrediction

# Load the ResNet50 model
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath("resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()

# Predict the top-5 labels for the picture
predictions, probabilities = prediction.predictImage("./picture.jpg", result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(str(eachPrediction) + " : " + str(eachProbability))

With picture.jpg set to our test image (picture omitted), the script outputs the top-5 labels with their confidence percentages:

laptop : 71.43893241882324
notebook : 16.265612840652466
modem : 4.899394512176514
hard_disc : 4.007557779550552
mouse : 1.2981942854821682

If the model resnet50_weights_tf_dim_ordering_tf_kernels.h5 feels too large or too slow, you can pick a model according to your needs; switching takes only two lines of code, as sketched after the list:

  • SqueezeNet (file size: 4.82 MB, shortest prediction time, moderate accuracy)
  • ResNet50 by Microsoft Research (file size: 98 MB, fast prediction time, high accuracy)
  • InceptionV3 by the Google Brain team (file size: 91.6 MB, slow prediction time, higher accuracy)
  • DenseNet121 by Facebook AI Research (file size: 31.6 MB, slow prediction time, highest accuracy)
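
For instance, a minimal sketch of switching to SqueezeNet (assuming the squeezenet_weights_tf_dim_ordering_tf_kernels.h5 file from the release page below has been downloaded next to the script):

# -*- coding: utf-8 -*-
from imageai.Prediction import ImagePrediction

# Same flow as the ResNet50 demo above, but with the lighter SqueezeNet model
prediction = ImagePrediction()
prediction.setModelTypeAsSqueezeNet()
prediction.setModelPath("squeezenet_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()

predictions, probabilities = prediction.predictImage("./picture.jpg", result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(str(eachPrediction) + " : " + str(eachProbability))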

The model download links can be found on the GitHub releases page:
https://github.com/OlafenwaMoses/ImageAI/releases/tag/1.0

Or refer to the official ImageAI documentation:
https://imageai-cn.readthedocs.io/zh_CN/latest/ImageAI_Image_Prediction.html

Making the project Serverless

Write the entry method and initialize the project according to the requirements of Function Compute. At the same time, create a model folder under the project and copy the model file into it; a sketch of the layout follows:
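
Under these assumptions, the project layout looks roughly like this (index.py is the entry file named by the Handler field in the configuration below; file names other than the model are illustrative):

src/
├── index.py            # entry file containing handler()
├── requirements.txt    # dependencies listed later in this article
└── model/
    └── resnet50_weights_tf_dim_ordering_tf_kernels.h5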

The overall flow of the project (architecture diagram omitted):

Implementation code:

# -*- coding: utf-8 -*-

from imageai.Prediction import ImagePrediction
import json
import uuid
import base64
import random


# Response
class Response:
    def __init__(self, start_response, response, errorCode=None):
        self.start = start_response
        responseBody = {
            'Error': {"Code": errorCode, "Message": response},
        } if errorCode else {
            'Response': response
        }
        # A UUID is attached by default to make later troubleshooting easier
        responseBody['ResponseId'] = str(uuid.uuid1())
        print("Response: ", json.dumps(responseBody))
        self.response = json.dumps(responseBody)

    def __iter__(self):
        status = '200 OK'  # the WSGI status line requires the reason phrase
        response_headers = [('Content-type', 'application/json; charset=UTF-8')]
        self.start(status, response_headers)
        yield self.response.encode("utf-8")


# Random string
randomStr = lambda num=5: "".join(random.sample('abcdefghijklmnopqrstuvwxyz', num))

# Model loading
print("Init model")
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
print("Load model")
prediction.setModelPath("/mnt/auto/model/resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()
print("Load complete")


def handler(environ, start_response):
    try:
        request_body_size = int(environ.get('CONTENT_LENGTH', 0))
    except ValueError:
        request_body_size = 0
    requestBody = json.loads(environ['wsgi.input'].read(request_body_size).decode("utf-8"))

    # Image acquisition
    print("Get pucture")
    imageName = randomStr(10)
    imageData = base64.b64decode(requestBody["image"])
    imagePath = "/tmp/" + imageName
    with open(imagePath, 'wb') as f:
        f.write(imageData)

    # Content prediction
    print("Predicting ... ")
    result = {}
    predictions, probabilities = prediction.predictImage(imagePath, result_count=5)
    for eachPrediction, eachProbability in zip(predictions, probabilities):
        result[str(eachPrediction)] = str(eachProbability)

    return Response(start_response, result)
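
Before deploying, handler can be smoke-tested locally with the standard library's WSGI server; a minimal sketch (assuming the code above lives in index.py and the model path has been pointed at a local copy instead of /mnt/auto):

# -*- coding: utf-8 -*-
from wsgiref.simple_server import make_server

from index import handler  # index.py contains the handler code above

# Serve the function locally; POST the same {"image": "<base64>"} body
# that the test script later in this article sends.
with make_server('', 8000, handler) as server:
    print("Serving on http://127.0.0.1:8000 ...")
    server.serve_forever()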

Required dependencies:

tensorflow==1.13.1
numpy==1.19.4
scipy==1.5.4
opencv-python==4.4.0.46
pillow==8.0.1
matplotlib==3.3.3
h5py==3.1.0
keras==2.4.3
imageai==2.1.5

Write the configuration file required for deployment:

ServerlessBookImageAIDemo:
  Component: fc
  Provider: alibaba
  Access: release
  Properties:
    Region: cn-beijing
    Service:
      Name: ServerlessBook
      Description: Serverless Book case
      Log: Auto
      Nas: Auto
    Function:
      Name: serverless_imageAI
      Description: Image target detection
      CodeUri:
        Src: ./src
        Excludes:
          - src/.fun
          - src/model
      Handler: index.handler
      Environment:
        - Key: PYTHONUSERBASE
          Value: /mnt/auto/.fun/python
      MemorySize: 3072
      Runtime: python3
      Timeout: 60
      Triggers:
        - Name: ImageAI
          Type: HTTP
          Parameters:
            AuthType: ANONYMOUS
            Methods:
              - GET
              - POST
              - PUT
            Domains:
              - Domain: Auto

In the code and configuration you can see the directory /mnt/auto/. This is the path where the NAS is mounted; it only needs to be written into the code in advance. Creating the NAS and configuring its mount point is part of the next step. A quick sanity check, sketched below, can confirm that the mount is visible inside the function instance.
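
A minimal sketch of such a check (the path comes from the configuration above; the listing is plain Python and assumes the model directory has already been synced to the NAS as described below):

import os

# After the mount point is configured, the function instance should see
# the shared storage under /mnt/auto; this is expected to list
# resnet50_weights_tf_dim_ordering_tf_kernels.h5 once the model has
# been uploaded with "s nas sync ./src/model".
print(os.listdir("/mnt/auto/model"))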

Project deployment and testing

After completing the above steps, deploy the project with:

s deploy

After deployment completes, you can see the results (output omitted). Next, install the dependencies with:

s install docker

This performs the dependency installation (output omitted). When it finishes, you can see that a .fun directory has been generated under the project; it holds the dependencies packaged through Docker, exactly the ones declared in requirements.txt.

When that is done, package and upload the dependency directory to the NAS with:

s nas sync ./src/.fun

and, once it succeeds, package and upload the model directory in the same way:

s nas sync ./src/model

Afterwards, you can inspect the uploaded directories with:

s nas ls --all

which lists the directory details (output omitted).

Finally, we can write a test script that reuses the same test picture as before:

import json
import urllib.request
import base64
import time

# Read the test picture and Base64-encode it
with open("picture.jpg", 'rb') as f:
    data = base64.b64encode(f.read()).decode()

url = 'http://35685264-1295939377467795.test.functioncompute.com/'

# POST the encoded image to the deployed function and time the round trip
timeStart = time.time()
print(urllib.request.urlopen(urllib.request.Request(
    url=url,
    data=json.dumps({'image': data}).encode("utf-8")
)).read().decode("utf-8"))
print("Time: ", time.time() - timeStart)

You can see the results:

{"Response": {"laptop": "71.43893837928772", "notebook": "16.265614330768585", "modem": "4.899385944008827", "hard_disc": "4.007565602660179", "mouse": "1.2981869280338287"}, "ResponseId": "1d74ae7e-298a-11eb-8374-024215000701"}
Time:  29.16020894050598

Function Compute successfully returned the expected result, but the overall latency was higher than expected: nearly 30 seconds. Let's run the test script again:

{"Response": {"laptop": "71.43893837928772", "notebook": "16.265614330768585", "modem": "4.899385944008827", "hard_disc": "4.007565602660179", "mouse": "1.2981869280338287"}, "ResponseId": "4b8be48a-298a-11eb-ba97-024215000501"}
Time:  1.1511380672454834

This time the execution took only 1.15 seconds, a full 28 seconds faster than the first invocation.

Project optimization

The last round of testing shows a clear time gap between the first and the second invocation of the project. This gap is mainly caused by the extremely long time the function spends loading the model.

We can easily confirm this with a local test:

# -*- coding: utf-8 -*-

import time

timeStart = time.time()

# Model loading
from imageai.Prediction import ImagePrediction

prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath("resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()
print("Load Time: ", time.time() - timeStart)
timeStart = time.time()

predictions, probabilities = prediction.predictImage("./picture.jpg", result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(str(eachPrediction) + " : " + str(eachProbability))
print("Predict Time: ", time.time() - timeStart)

Execution result:

Load Time:  5.549695014953613
laptop : 71.43893241882324
notebook : 16.265612840652466
modem : 4.899394512176514
hard_disc : 4.007557779550552
mouse : 1.2981942854821682
Predict Time:  0.8137111663818359

Loading the ImageAI module and the model file takes about 5.5 seconds in total, while the prediction itself takes less than 1 second; and in Function Compute the machine is not as powerful as my local one. To avoid the long response time of loading the model on every request, the deployed code places the model loading outside the entry method. The benefit is that not every invocation goes through a cold start: as long as the instance is reused, objects such as the model do not need to be reloaded and dependencies do not need to be re-imported for each request.

Therefore, in a real project, to avoid repeatedly loading and creating resources across frequent requests, we can move such resources into the initialization phase, which greatly improves overall performance; combined with the reserved (provisioned) instances offered by vendors, the negative impact of cold starts can be largely eliminated. The pattern is sketched below.
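
A minimal, hedged sketch of the pattern (the handler follows the Function Compute HTTP style used above; time.sleep stands in for any expensive initialization such as loadModel()):

# -*- coding: utf-8 -*-
import time

# Module-level initialization: executed once per instance (cold start),
# then reused by every warm invocation.
print("Init resource (cold start only)")
time.sleep(2)  # placeholder for expensive setup, e.g. prediction.loadModel()
RESOURCE = {"model": "loaded"}

def handler(environ, start_response):
    # Per-request work only; RESOURCE is already in memory on warm starts
    start_response('200 OK', [('Content-type', 'application/json; charset=UTF-8')])
    return [b'{"Response": "ok"}']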

Summary

In recent years, artificial intelligence and cloud computing have both developed by leaps and bounds, and running traditional AI projects on a Serverless architecture has gradually become something many people want to understand. This article introduced the implementation of an image classification and prediction interface using an existing dependency library (ImageAI). Through this example, several points become clear:

  • A Serverless architecture can run AI-related projects;
  • Serverless works well with TensorFlow and other machine learning / deep learning tools;
  • Although Function Compute itself has storage limits, once a hard disk (NAS) is mounted its capabilities are greatly expanded.

Of course, this article is only meant as a starting point. I hope readers will use their imagination and go on to combine more AI projects with the Serverless architecture.

Tags: Machine Learning, AI, Deep Learning, Cloud Native, Serverless
