Python: solving slow loading of gensim doc2vec/word2vec vector model files

In this project, gensim was used to compute post vectors and similarities. The model file was already trained, but at runtime we found that loading the model was very slow, taking about 1-2 minutes. Users can't be expected to wait that long, so a solution was needed.

The idea: load the model once, keep it in memory, and wrap it as an API, so the loading cost is paid only once and every request afterwards is fast.

Two popular candidates come to mind: Django and Flask, both Python web frameworks. The difference is that Django is a heavyweight framework while Flask is a lightweight one.

Here we try to solve this problem with Flask

Install required dependencies first

pip install Flask

Then I wrote some test code:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()

Run the code

python hello.py
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Running on http://localhost:5000/
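
Opening http://localhost:5000/ in the browser, or hitting it with curl, should return the greeting:

curl http://localhost:5000/
Hello World!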

There's a warning here that Flask should be served by a production WSGI server. For test code it can be ignored; I dealt with it after finishing the actual service.

Start with WSGI (this is what the final code uses)

pip install gevent

from gevent.pywsgi import WSGIServer

http_server = WSGIServer(('', 5000), app)
http_server.serve_forever()
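
Putting the two snippets together, the hello-world test served by gevent looks like this (my own combination of the pieces above, not additional code from the project):

from flask import Flask
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    # gevent's WSGIServer replaces the Flask development server
    http_server = WSGIServer(('', 5000), app)
    http_server.serve_forever()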

Here's the complete code I wrote

# coding=utf-8

import sys, os

curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)

from setting.default import *  # project settings; source of WORDS_PATH
from flask import Flask, request, jsonify
from gensim.models.doc2vec import Doc2Vec
from gevent.pywsgi import WSGIServer

app = Flask(__name__)
app.config['JSON_AS_ASCII'] = False  # emit non-ASCII (e.g. Chinese) characters unescaped in JSON responses

@app.route("/")
def index():
    return 'hello world!'

@app.route("/get_content_similar", methods=['GET'])
def get_content_similar():
    # Read the request parameter
    words = request.args.get("words")
    # Split the comma-separated words into a list
    seg_list = words.split(',')
    # Infer a vector for the query words (steps= was the deprecated alias for epochs=, so only epochs is passed)
    vector = model_dm.infer_vector(seg_list, epochs=70)
    # Look up the 100 most similar documents (in gensim 4.x, docvecs is renamed dv)
    sims = model_dm.docvecs.most_similar([vector], topn=100)
    post_id_list = [[doc_id, similarity] for doc_id, similarity in sims]
    return jsonify(post_id_list)

def main():
    # Load the model once at startup; mmap='r' memory-maps the large vector arrays
    global model_dm
    model_dm = Doc2Vec.load(WORDS_PATH + 'doc2vec_0619last_300_3_15_paddle', mmap='r')
    print('-------------- Model initialization complete --------------')
    # Development-mode startup:
    # app.run(host='0.0.0.0')

    # Production startup with gevent's WSGI server
    http_server = WSGIServer(('', 5000), app)
    http_server.serve_forever()

if __name__ == "__main__":
    main()

The model file is loaded into memory once at startup, two routes are defined, the request parameter is read, and the result is returned as JSON. The current request parameter is a comma-separated list of already-segmented words; if you need to do the word segmentation inside the service itself, you can modify it accordingly (see the sketch below).
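
If you do want segmentation inside the service, a minimal sketch using the jieba tokenizer (my choice for illustration; the extra route and its name are hypothetical) could look like this:

import jieba

@app.route("/get_content_similar_raw", methods=['GET'])
def get_content_similar_raw():
    # Accept raw, unsegmented text and tokenize it server-side
    text = request.args.get("text", "")
    seg_list = [w for w in jieba.cut(text) if w.strip()]
    vector = model_dm.infer_vector(seg_list, epochs=70)
    sims = model_dm.docvecs.most_similar([vector], topn=100)
    return jsonify([[doc_id, sim] for doc_id, sim in sims])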

Access in the browser

http://localhost:5000/get_content_similar?words=aquaculture,dragon fish
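
A quick way to check the latency from Python (assuming the service is running locally; the query is just the example above, URL-encoded):

import time
import urllib.request

url = "http://localhost:5000/get_content_similar?words=aquaculture,dragon%20fish"
start = time.time()
urllib.request.urlopen(url).read()
print("request took %.0f ms" % ((time.time() - start) * 1000))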

Response time dropped to about 200 ms, a remarkable improvement.
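
The title mentions word2vec as well, and the same load-once pattern applies there. A sketch using gensim's KeyedVectors (the model path and route name are illustrative assumptions, not from the original project):

from gensim.models import KeyedVectors

# Load the word vectors once at startup, memory-mapped like the doc2vec model
wv = KeyedVectors.load(WORDS_PATH + 'word2vec_model', mmap='r')  # hypothetical path

@app.route("/get_word_similar", methods=['GET'])
def get_word_similar():
    word = request.args.get("word")
    # Nearest neighbours of a single word in the vector space
    sims = wv.most_similar(word, topn=10)
    return jsonify([[w, s] for w, s in sims])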


Reference material

https://dormousehole.readthedocs.io/en/latest/deploying/wsgi-standalone.html#gevent

https://zhuanlan.zhihu.com/p/94124468

https://blog.csdn.net/goddavide/article/details/103712131



Tags: Python Flask WSGI
