I still have the possibility of change

At the thought of this

My heart surged

### 1. Line detection

Using the Hough line transform for line detection assumes that edge detection has already been performed.

```python
# Standard Hough line transform
cv2.HoughLines(image, rho, theta, threshold, lines=None, srn=None, stn=None, min_theta=None, max_theta=None)
```

- image: the edge-detected output; an 8-bit, single-channel binary image
- rho: distance resolution (step) of the accumulator, in pixels
- theta: angle resolution (step) of the accumulator, in radians
- threshold: accumulator threshold; only cells with more votes than this are returned, i.e. the minimum number of sinusoids that must intersect at one point for it to count as a line
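To make the voting idea concrete, here is a minimal NumPy sketch of the accumulator behind `cv2.HoughLines` (purely illustrative; the `hough_vote` helper and the synthetic points are invented for this example, and OpenCV's real implementation is more elaborate):

```python
import numpy as np

def hough_vote(points, rho_step=1.0, theta_step=np.pi / 180, rho_max=200):
    # Each edge point (x, y) votes for every (rho, theta) pair it lies on:
    # rho = x*cos(theta) + y*sin(theta) traces a sinusoid in (rho, theta) space.
    thetas = np.arange(0, np.pi, theta_step)
    n_rho = int(2 * rho_max / rho_step) + 1
    acc = np.zeros((n_rho, len(thetas)), dtype=np.int32)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / rho_step).astype(int)
        acc[idx, np.arange(len(thetas))] += 1  # one vote per (rho, theta) cell
    return acc, thetas

# Points on the horizontal line y = 50, whose normal form is rho=50, theta=90 deg
pts = [(x, 50) for x in range(0, 100, 5)]
acc, thetas = hough_vote(pts)
r, t = np.unravel_index(acc.argmax(), acc.shape)
rho = r * 1.0 - 200
print(rho, np.degrees(thetas[t]))  # peak at rho=50.0, theta=90.0
```

The cell where the most sinusoids intersect wins, which is exactly what the `threshold` parameter filters on.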

```python
# Probabilistic Hough line transform
cv2.HoughLinesP(image, rho, theta, threshold, lines=None, minLineLength=None, maxLineGap=None)
```

- image: the edge-detected output; an 8-bit, single-channel binary image
- rho: distance resolution of the accumulator in pixels; typically 1
- theta: angle resolution of the accumulator in radians; typically 1 degree (np.pi / 180)
- threshold: minimum number of curve intersections required to detect a line
- minLineLength: minimum line length; segments shorter than this are discarded
- maxLineGap: maximum allowed gap between two segments; if the gap is smaller than this value, the two segments are merged into one line

```python
import cv2
import numpy as np


# Standard Hough line transform
def line_detection_demo(image):
    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Extract edges
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 160)
    for line in lines:
        # line[0] holds (rho, theta): the length and angle (in radians)
        # of the perpendicular from the origin to the line
        rho, theta = line[0]
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a * rho
        y0 = b * rho
        # Extend 1000 pixels in both directions along the line
        x1 = int(x0 + 1000 * (-b))
        y1 = int(y0 + 1000 * a)
        x2 = int(x0 - 1000 * (-b))
        y2 = int(y0 - 1000 * a)
        cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow("image_lines", image)


# Probabilistic Hough line transform
def line_detect_possible_demo(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150, apertureSize=3)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100, minLineLength=50, maxLineGap=10)
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow("line_detect_possible", image)


if __name__ == "__main__":
    src = cv2.imread(r"./test/041.png")
    cv2.imshow("image", src)
    line_detection_demo(src)
    line_detect_possible_demo(src)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```

The result is as follows:

### 2. Circle detection

- The basic principle of the Hough circle transform is the same as that of the Hough line transform, except that the two-dimensional (polar radius, polar angle) parameter space is replaced by a three-dimensional (center x, center y, radius) space. In the standard Hough circle transform, each edge point maps to the set of all circles passing through it, described by those three parameters, which corresponds to a surface in the 3-D space. The more of these surfaces intersect at a single point, the more edge points lie on the common circle that point represents, so the same thresholding approach decides whether a circle has been detected. Voting in a three-dimensional accumulator, however, makes the computation so expensive that the standard Hough circle transform is rarely usable in practice.
- OpenCV therefore implements a more efficient alternative, the Hough gradient method, which greatly reduces the amount of computation. It exploits the fact that the center of a circle lies along the gradient vector (the normal to the tangent) of every point on that circle, so the intersection of those gradient lines is the center. In the first step the method finds candidate centers this way, collapsing the three-dimensional accumulator into a two-dimensional one; in the second step it determines the radius of each candidate center from the support of the surrounding non-zero edge pixels.
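The center-voting step can be illustrated with a small NumPy sketch: each edge point casts votes along its gradient direction, and the votes pile up at the circle's center. (This only illustrates the principle, not OpenCV's actual implementation; the synthetic circle, grid size, and `vote_centers` helper are invented for the example.)

```python
import numpy as np

def vote_centers(edge_points, gradients, shape, max_r=60):
    # Each edge point votes along its gradient direction; for points on a
    # circle, every such ray passes through the center, so votes pile up there.
    acc = np.zeros(shape, dtype=np.int32)
    for (x, y), (gx, gy) in zip(edge_points, gradients):
        norm = np.hypot(gx, gy)
        for r in range(1, max_r):
            for s in (1, -1):  # walk both ways along the gradient
                cx = int(round(x + s * r * gx / norm))
                cy = int(round(y + s * r * gy / norm))
                if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
                    acc[cy, cx] += 1
    return acc

# Synthetic circle: center (40, 30), radius 20; the gradient points radially
cx0, cy0, rad = 40, 30, 20
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(cx0 + rad * np.cos(a), cy0 + rad * np.sin(a)) for a in angles]
grads = [(np.cos(a), np.sin(a)) for a in angles]
acc = vote_centers(pts, grads, (60, 80))
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
print(cx, cy)  # peak at (or next to) the true center (40, 30)
```

Because all 60 rays cross at one cell, the center stands out sharply in the 2-D accumulator, which is exactly why the gradient method avoids the full 3-D vote.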

```python
cv2.HoughCircles(image, method, dp, minDist, circles=None, param1=None, param2=None, minRadius=None, maxRadius=None)
```

- image: input image, 8-bit single-channel grayscale
- method: circle detection method (in practice cv2.HOUGH_GRADIENT)
- dp: inverse ratio of the accumulator resolution to the image resolution; with dp = 1 the accumulator has the same resolution as the input image, with dp = 2 the accumulator's width and height are half those of the input
- minDist: minimum distance between the centers of detected circles; if it is too small, spurious neighboring circles may be detected alongside the real one, and if it is too large, some circles may be missed
- circles: output vector of detected circles; each element holds the center's x coordinate, the center's y coordinate, and the radius
- param1: the higher threshold of the internal Canny edge detector; the lower threshold is automatically set to half of it
- param2: accumulator threshold for center detection; the smaller it is, the more false circles may be detected, and circles with the largest accumulator values are returned first
- minRadius: minimum radius of the detected circles
- maxRadius: maximum radius of the detected circles

```python
import cv2 as cv
import numpy as np


# Hough circle detection
def detect_circles_demo(image):
    # Hough circle detection is sensitive to noise, so remove it first
    # with edge-preserving mean-shift filtering
    dst = cv.pyrMeanShiftFiltering(image, 10, 105)
    cimage = cv.cvtColor(dst, cv.COLOR_BGR2GRAY)
    circles = cv.HoughCircles(cimage, cv.HOUGH_GRADIENT, 1, 20,
                              param1=50, param2=30, minRadius=0, maxRadius=0)
    # Round the center coordinates and radii to integers
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        cv.circle(image, (i[0], i[1]), i[2], (0, 0, 255), 2)  # draw the circle
        cv.circle(image, (i[0], i[1]), 2, (255, 0, 0), 2)     # draw the center
    cv.imshow("circles", image)


if __name__ == "__main__":
    src = cv.imread(r"./test/035.png")
    cv.imshow('input_image', src)
    detect_circles_demo(src)
    cv.waitKey(0)
    cv.destroyAllWindows()
```

The result is as follows:

### 3. Contour discovery

""" cv2.findContours(image, mode, method, contours, hierarchy, offset) Parameters: 1 To find the contour of an image, you can only pass in a binary image, not a gray image 2 There are four retrieval modes of contour: cv2.RETR_EXTERNAL Indicates that only the outer contour is detected cv2.RETR_LIST The detected contour does not establish a hierarchical relationship cv2.RETR_CCOMP Two levels of contours are established. The upper layer is the outer boundary and the inner layer is the boundary information of the inner hole. If there is another connected object in the inner hole, the boundary of the object is also on the top layer cv2.RETR_TREE Establish the outline of a hierarchical tree structure 3 Approximate method of contour cv2.CHAIN_APPROX_NONE Store all contour points, and the pixel position difference of two adjacent points shall not exceed 1, i.e max(abs(x1-x2)，abs(y2-y1))==1 cv2.CHAIN_APPROX_SIMPLE Compress the elements in the horizontal, vertical and diagonal directions, and only retain the end coordinates of the direction. For example, a rectangular contour only needs 4 points to save the contour information Return value: contours: A list. Each item is a contour. All points of the contour will not be stored, but only those that can describe the contour hierarchy: One ndarray, The number of elements is the same as the number of contours, Each contour contours[i]Corresponding to 4 hierarchy element hierarchy[i][0] ~hierarchy[i][3]， Indicates the index number of the next contour, the previous contour, the parent contour and the embedded contour respectively. If there is no corresponding item, the value is negative """ """ # Function CV2 drawContours(image, contours, contourIdx, color, thickness, lineType, hierarchy, maxLevel, offset) # The first parameter is a picture, which can be the original or other. # The second parameter is contour, or CV2 The point set found by findcontours() is a list. 
# The third parameter is the index of the contour (the second parameter). It is very useful when you need to draw independent contours. If you want to draw all of them, you can set it to - 1. # The next parameters are lineweight and outline color """

```python
import cv2


def contours_demo(image):
    dst = cv2.GaussianBlur(image, (3, 3), 0)
    gray = cv2.cvtColor(dst, cv2.COLOR_BGR2GRAY)
    ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Get the contour point sets and the hierarchy of each contour
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for i, contour in enumerate(contours):
        # Draw contour i in red; a thickness of -1 fills it
        cv2.drawContours(image, contours, i, (0, 0, 255), -1)
        print(i)
    cv2.imshow("detect contours", image)


if __name__ == "__main__":
    img = cv2.imread(r"./test/042.png")
    cv2.namedWindow("input image", cv2.WINDOW_AUTOSIZE)
    cv2.imshow("input image", img)
    contours_demo(img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```

The result is as follows:

When drawing the contours, setting the thickness to -1 fills them. The effect is as follows:

The binary image can also be obtained with the Canny algorithm, and the contours found from that:

```python
import cv2


def edge_demo(image):
    # Gaussian blur for noise reduction
    blurred = cv2.GaussianBlur(image, (3, 3), 0)
    # Convert to grayscale
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
    # Compute the gradients in the X and Y directions
    grad_x = cv2.Sobel(gray, cv2.CV_16SC1, 1, 0)
    grad_y = cv2.Sobel(gray, cv2.CV_16SC1, 0, 1)
    edge_output = cv2.Canny(grad_x, grad_y, 50, 100)
    return edge_output


def contours_demo(image):
    binary = edge_demo(image)
    # Get the contour point sets and the hierarchy of each contour
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for i, contour in enumerate(contours):
        # Draw contour i in red with a thickness of 2
        cv2.drawContours(image, contours, i, (0, 0, 255), 2)
        print(i)
    cv2.imshow("detect contours", image)


if __name__ == "__main__":
    img = cv2.imread(r"./test/036.png")
    cv2.namedWindow("input image", cv2.WINDOW_AUTOSIZE)
    cv2.imshow("input image", img)
    contours_demo(img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```

The result is as follows: