Flask is a web application framework written in Python. It is small but feature-rich: it was designed from the start as an extensible microframework, so its core provides only basic services, and other functionality is added through extensions.
Flask has two main dependencies: Werkzeug provides routing, debugging, and the Web Server Gateway Interface (WSGI), while Jinja2 provides the template system.
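The core idea of a template system like Jinja2 is substituting variables into a page skeleton. As a rough stdlib-only sketch of that idea (using string.Template, which is not Jinja2 and lacks its loops, conditionals, and inheritance):

```python
from string import Template

# Toy stand-in for what Jinja2 does: substitute variables into a template.
# Jinja2's syntax would be "<h1>Hello, {{ name }}!</h1>" instead.
page = Template("<h1>Hello, $name!</h1>")
html = page.substitute(name="Flask")
print(html)  # <h1>Hello, Flask!</h1>
```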
I used PyCharm for this experiment. Flask can be installed through File -> Settings -> Project: XX -> Python Interpreter -> "+".
When using Flask, you first need to create a program instance, which handles all requests received from clients.
from flask import Flask

app = Flask(__name__)
The next step is to define routes, which map URLs to functions. Such a function is called a view function; it returns the response for the URL, which can be a simple HTML string or a full page. Because of the requirements of the assignment, I call render_template() in the view function, which renders a pre-written HTML file and generates HTML efficiently and flexibly.
# Route
@app.route("/", methods=['GET', 'POST'])
# View function
def index():
    return render_template("index.html")
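Conceptually, @app.route registers the URL rule in a table that maps rules to view functions, and Flask dispatches each incoming request through that table. A minimal stdlib-only sketch of that idea (a hypothetical mini-router, not Flask's actual implementation):

```python
# Hypothetical mini-router illustrating what @app.route does conceptually:
# the decorator records a mapping from URL rule to view function.
url_map = {}

def route(rule):
    def decorator(func):
        url_map[rule] = func  # register the view function under its URL rule
        return func
    return decorator

@route("/")
def index():
    return "index.html rendered"

def dispatch(path):
    # Look up the view function for the requested path and call it
    view = url_map.get(path)
    return view() if view else "404 Not Found"

print(dispatch("/"))         # index.html rendered
print(dispatch("/missing"))  # 404 Not Found
```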
There are also requirements for where HTML and other files are placed. When creating the project, create static and templates folders under the project root: static holds static files such as images and CSS, and templates holds the HTML files.
Finally, you need to start the server:
if __name__ == '__main__':
    # Start the development server
    app.run(debug=True)
Click the URL shown in the console to see the experimental results.
The above is my most basic understanding of Flask. Next is my experiment:
In this experiment, I connected to a MySQL database to read CET-4 vocabulary and display it in a table, and used PyTorch (via facenet-pytorch) for face detection and comparison.
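The face comparison in the code below decides a match by thresholding the Euclidean distance between two face feature vectors. A minimal plain-Python sketch of that decision rule (the vectors and threshold here are made up for illustration; real FaceNet embeddings are 512-dimensional):

```python
import math

def euclidean_distance(a, b):
    # Euclidean (L2) distance between two equal-length feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_face(emb1, emb2, threshold=0.8):
    # Same decision rule as match_faces below: distance under threshold => match
    return euclidean_distance(emb1, emb2) < threshold

# Toy 4-dimensional "embeddings" for illustration only
known = [0.1, 0.2, 0.3, 0.4]
probe = [0.1, 0.25, 0.3, 0.35]
print(is_same_face(known, probe))  # True (distance is about 0.07)
```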
Click the following two links to jump to the corresponding experimental results respectively.
There are too many face feature vectors to show in full, so only a rough screenshot is included.
(The specific HTML is very simple and is not shown here.) The following is the Flask program code, including the database connection.
from flask import Flask, render_template
import pymysql
import cv2
import numpy as np
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1
from torchvision.transforms import ToPILImage

# Create instance
app = Flask(__name__)

# Route
@app.route("/", methods=['GET', 'POST'])
# View function
def index():
    return render_template("index.html")

@app.route('/word', methods=['GET', 'POST'])
def word():
    # Open a new connection per request, so closing it does not break later requests
    db = pymysql.connect(host='localhost', user='root', password='123456',
                         database='data2022', charset='utf8')
    with db.cursor() as cursor:
        sql = 'select * from map_enword;'
        cursor.execute(sql)
        result = list(cursor.fetchall())
    db.close()
    return render_template("word.html", result=result)

@app.route('/face_recognition', methods=['GET', 'POST'])
def face_recognition():
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    # Load the MTCNN model [set network parameters for face detection]
    mtcnn = MTCNN(min_face_size=12, thresholds=[0.2, 0.2, 0.3],
                  keep_all=True, device=device)
    # ResNet model used to extract face feature vectors
    resnet = InceptionResnetV1(pretrained='vggface2').eval().to(device)

    # Get the face feature vector of an image
    def load_known_faces(dstImgPath, savePath, mtcnn, resnet):
        aligned = []
        knownImg = cv2.imread(dstImgPath)  # Read the picture
        face = mtcnn(knownImg)  # Use MTCNN to detect faces; returns a tensor of face crops
        show = ToPILImage()  # Convert the tensor to an Image for visualization
        # show(face).show()
        show(face).save(savePath)
        if face is not None:
            aligned.append(face)
        aligned = torch.stack(aligned).to(device)
        with torch.no_grad():
            # The ResNet model produces the feature vector for each face
            known_faces_emb = resnet(aligned).detach().cpu()
        return known_faces_emb, knownImg

    # Compute the Euclidean distance between face feature vectors and compare it
    # against a threshold to judge whether they are the same face
    def match_faces(faces_emb, known_faces_emb, threshold):
        isExistDst = False
        distance = (known_faces_emb - faces_emb).norm().item()
        print("Distance:", round(distance, 2))
        print("Face feature vectors\n", known_faces_emb, faces_emb)
        if distance < threshold:
            isExistDst = True
        return isExistDst, distance

    MatchThreshold = 0.8  # Matching threshold for face feature vectors
    # Known figure (other test images: bFaceThin.png, lyf2.jpg)
    known_faces_emb, _ = load_known_faces('static/huge1.jpg', 'static/new_huge1.jpg',
                                          mtcnn, resnet)
    # Figure to be tested
    faces_emb, img = load_known_faces('static/huge2.jpg', 'static/new_huge2.jpg',
                                      mtcnn, resnet)
    # Face matching
    isExistDst, distance = match_faces(faces_emb, known_faces_emb, MatchThreshold)
    if isExistDst:
        # Return face boxes, probabilities, and 5 facial landmarks
        boxes, prob, landmarks = mtcnn.detect(img, landmarks=True)
        result = 'Matching'
    else:
        result = 'Mismatch'
    return render_template("face_recognition.html", result=result,
                           known_faces_emb=known_faces_emb,
                           faces_emb=faces_emb, distance=distance)

if __name__ == '__main__':
    # Start the development server
    app.run(debug=True)
    # Alternative: serve with gevent
    # from gevent import pywsgi
    # server = pywsgi.WSGIServer(('127.0.0.1', 5000), app)
    # server.serve_forever()