## Edge detection

Edge detection is a common method of image segmentation based on abrupt changes in gray level. In essence, it extracts the features of the discontinuous parts of an image. Commonly used edge detection operators include the difference operator, Roberts operator, Sobel operator, Prewitt operator, LoG operator, and Canny operator.

Among them, the Canny operator is an edge detection operator proposed by computer scientist John F. Canny in 1986. It is widely regarded as one of the theoretically best-performing edge detection algorithms.

Common image processing tools such as MATLAB and OpenCV provide built-in APIs for the Canny operator.

In OpenCV, the Canny operator is exposed through the Canny() function, whose prototype is as follows:

```python
def Canny(image, threshold1, threshold2, edges=None, apertureSize=None, L2gradient=None)
```

- image: the source (input) image for the operation.
- threshold1: the first threshold for the hysteresis procedure.
- threshold2: the second threshold for the hysteresis procedure.

Next, let's take the Mario image from earlier and run edge detection on it to see the effect:

```python
import cv2 as cv
from matplotlib import pyplot as plt

# Image reading (the flag 0 loads the image as grayscale)
img = cv.imread('maliao.jpg', 0)
edges = cv.Canny(img, 100, 200)

# Show results
titles = ['Original Img', 'Edge Img']
images = [img, edges]

# matplotlib plot
for i in range(2):
    plt.subplot(1, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

## Principle

Filtering

Before introducing the principle, I will briefly cover filtering. Readers of the previous articles can skip this part.

Filtering mainly serves two purposes:

- To extract image features, simplifying the information the image carries for subsequent image processing.
- To eliminate the noise mixed in during image digitization, so that the image meets the needs of later processing.

Edge detection simplifies the image by representing the information it carries through edge information alone.

The filtering process can be understood as a convolution kernel (e.g. a 3×3 or 5×5 matrix) traversing the image from top to bottom and from left to right. At each position, the kernel values are combined with the corresponding pixel values according to the purpose of the filter, and the result is written back to the current pixel.

The blue block in the figure represents the convolution kernel. The filtering process performs a dot product between the kernel and the image patch beneath it and assigns the result back to the image.

The following content is a bit more difficult, so be prepared. Let's start :)

Specific steps of Canny operator

Step 1: Gaussian filtering

Gaussian filtering is currently the most popular denoising filter. Here "Gaussian" has the same meaning as in the normal (Gaussian) distribution from probability theory. The principle is to take a weighted average of the gray values of the pixel to be filtered and its neighboring points, with weights generated by the Gaussian formula, which effectively filters out the high-frequency noise superimposed on the ideal image.

Step 2: calculate gradient image and angle image

Gradient image

The concept of the gradient is introduced here. The gradient is a very important concept in artificial intelligence and is widely used in machine learning and deep learning. We start from the first derivative. The following is the definition of the first derivative of a one-dimensional function:
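The original figure showed the formula; the standard definition is:

```latex
f'(x) = \lim_{\varepsilon \to 0} \frac{f(x + \varepsilon) - f(x)}{\varepsilon}
```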

Formulas like this can make your head spin, but geometrically it simply means finding the slope of the tangent line on the curve of a function, as in the following figure:

Image filtering is generally performed on the gray-scale image, so the image here is two-dimensional. Therefore, let's look at the derivatives of a two-dimensional function, that is, the partial derivatives:
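The partial derivatives (shown as a figure in the original) are defined analogously, one per axis:

```latex
\frac{\partial f(x,y)}{\partial x} = \lim_{\varepsilon \to 0} \frac{f(x+\varepsilon,\, y) - f(x,\, y)}{\varepsilon},
\qquad
\frac{\partial f(x,y)}{\partial y} = \lim_{\varepsilon \to 0} \frac{f(x,\, y+\varepsilon) - f(x,\, y)}{\varepsilon}
```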

Geometrically, a partial derivative is the slope of the tangent line to the surface along one coordinate axis direction.

In the image domain, the gradient of an image consists of the partial derivatives of the current pixel along the x-axis and the y-axis, i.e. the first derivatives. In other words, it is the rate of change of the pixel gray value along the x and y directions.

A simple example:

In the figure, the difference between the gray values 100 and 90 is 10, so the gradient of that pixel in the x-axis direction is 10, while at the other points the adjacent values are equal, so their gradients are all 0. Therefore, in digital image processing, because pixels are discrete, the calculus reduces to computing the difference between the current pixel and its neighbor along each partial-derivative direction. In practice no actual derivation is needed, only simple addition and subtraction.
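This reduction to simple subtraction is easy to verify with NumPy, using a row of pixels like the one in the example:

```python
import numpy as np

# A row of pixels with a single step edge from 90 to 100
row = np.array([90, 90, 90, 100, 100, 100])

# The discrete gradient along x is just the difference between
# adjacent pixels -- no symbolic derivation needed
grad_x = np.diff(row)
print(grad_x)  # [ 0  0 10  0  0]
```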

Angle image

The calculation of the angle image is relatively simple. Its role is to provide the direction for non-maximum suppression. The formula is as follows:
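In terms of the horizontal and vertical gradients \(G_x\) and \(G_y\), the gradient magnitude and direction (the formula in the original figure) are:

```latex
G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\!\left(\frac{G_y}{G_x}\right)
```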

The calculated angle is generally rounded to one of four values: 0°, 45°, 90°, or 135°.

Step 3: non-maximum suppression of the gradient image

The gradient image obtained in the previous step still has problems such as thick edges and interference from weak edges. Non-maximum suppression finds the local maxima among pixels and sets the gray value of non-maximum pixels to 0, removing a large portion of non-edge pixels.

C is the current point being checked by non-maximum suppression, g1–g4 are four of the eight pixels around it, and the blue line segment represents the gradient direction of C, i.e. the angle computed in the previous step.

First, judge whether the gray value of C is the largest in its neighborhood. If so, check the values dtmp1 and dtmp2 at the points where the gradient direction intersects the neighborhood in the figure. If the gray value of C is greater than both dtmp1 and dtmp2, C is determined to be a maximum point and is marked (set to 1).

The result of this step will retain some thin lines as candidate edges.

Step 4: hysteresis double thresholding

This step requires two thresholds: a high threshold and a low threshold.

After the above three steps, the edge quality is already quite high, but many false edges remain. The Canny algorithm therefore applies a double-threshold method:

- If the gradient magnitude at a pixel exceeds the high threshold, the pixel is retained as an edge pixel.
- If the gradient magnitude at a pixel is below the low threshold, the pixel is excluded.
- If the gradient magnitude at a pixel lies between the two thresholds, the pixel is retained only if it is connected to a pixel above the high threshold.

Using the high-threshold image, edges are linked into contours. When the end of a contour is reached, the algorithm looks among the 8 neighbors of the breakpoint for a point that satisfies the low threshold, and continues collecting edge points from there until the contour is closed.
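As a rough sketch (not OpenCV's internal implementation), the double-threshold classification on a gradient-magnitude array might look like this; the connectivity check that promotes weak pixels is left out for brevity:

```python
import numpy as np

def double_threshold(mag, low, high):
    """Label pixels: 2 = strong edge (>= high), 1 = weak (between the
    thresholds, kept only if later found connected to a strong pixel),
    0 = suppressed (< low)."""
    labels = np.zeros_like(mag, dtype=np.uint8)
    labels[mag >= high] = 2
    labels[(mag >= low) & (mag < high)] = 1
    return labels

mag = np.array([[10, 120, 30],
                [80, 200, 40],
                [ 5,  60, 90]])
print(double_threshold(mag, low=50, high=100))
# [[0 2 0]
#  [1 2 0]
#  [0 1 1]]
```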

## Roberts operator

The Roberts operator (also known as the Roberts cross operator) is the simplest operator; it finds edges using local differences. It approximates the gradient magnitude with the difference between two adjacent pixels in the diagonal direction, so it responds most strongly to edges along the ±45° diagonals. It has high positioning accuracy, but it is sensitive to noise and cannot suppress its influence.

Roberts proposed this edge detection operator in 1963. The Roberts edge operator is a 2×2 template that takes the difference between two adjacent pixels along the diagonal.

The Roberts operator has two templates, one for each diagonal direction, as shown below. From the templates it can be seen that the Roberts operator best enhances image edges at positive and negative 45 degrees.
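The two templates (matching the kernels used in the code later in this section) are:

```latex
G_x = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
G_y = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
```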

To implement the Roberts operator, we mainly use OpenCV's filter2D() function, which convolves an image with a given kernel:

```python
def filter2D(src, ddepth, kernel, dst=None, anchor=None, delta=None, borderType=None)
```

- src: input image
- ddepth: the desired depth of the output image
- kernel: convolution kernel

Next, let's write the code. First, read the image and convert it to grayscale; nothing special here:

```python
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

# Read image (note: imread takes an IMREAD_* flag, not a color-conversion
# code, so we read in color and convert explicitly below)
img = cv.imread('maliao.jpg')
rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB)

# Grayscale processed image
grayImage = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
```

Then construct the convolution kernels with NumPy and convolve the gray-scale image once in each of the x and y directions:

```python
# Roberts operator
kernelx = np.array([[-1, 0], [0, 1]], dtype=int)
kernely = np.array([[0, -1], [1, 0]], dtype=int)
x = cv.filter2D(grayImage, cv.CV_16S, kernelx)
y = cv.filter2D(grayImage, cv.CV_16S, kernely)
```

Note: after the Roberts operator is applied, you need to call convertScaleAbs() to take absolute values and convert the result to an 8-bit image for display, and then fuse the two directional images:

```python
# To uint8, then image fusion
absX = cv.convertScaleAbs(x)
absY = cv.convertScaleAbs(y)
Roberts = cv.addWeighted(absX, 0.5, absY, 0.5, 0)
```

Finally, the image is displayed by pyplot:

```python
# display graphics
titles = ['original image', 'Roberts operator']
images = [rgb_img, Roberts]
for i in range(2):
    plt.subplot(1, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

## Prewitt operator

The Prewitt operator is a first-order differential operator for edge detection. It detects edges by using the fact that the gray difference between a pixel's upper/lower and left/right neighbors reaches an extremum at an edge. It removes some false edges and smooths noise.

Since the Prewitt operator uses a 3×3 template to compute pixel values in a region, while the Roberts operator's template is 2×2, the Prewitt operator's edge detection results are more pronounced in both the horizontal and vertical directions. The Prewitt operator is suitable for images with significant noise and gradual gray-level changes.

The template of Prewitt operator is as follows:
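The templates (matching the kernels used in the code below) are:

```latex
G_x = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix}, \qquad
G_y = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}
```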

In terms of code, the implementation of the Prewitt operator is similar to that of the Roberts operator, so I won't go into detail and will just post the code:

```python
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

# Read image (imread takes an IMREAD_* flag, so read in color and convert below)
img = cv.imread('maliao.jpg')
rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB)

# Grayscale processed image
grayImage = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Prewitt operator
kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=int)
kernely = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=int)
x = cv.filter2D(grayImage, cv.CV_16S, kernelx)
y = cv.filter2D(grayImage, cv.CV_16S, kernely)

# To uint8, image fusion
absX = cv.convertScaleAbs(x)
absY = cv.convertScaleAbs(y)
Prewitt = cv.addWeighted(absX, 0.5, absY, 0.5, 0)

# Used to display Chinese labels normally
plt.rcParams['font.sans-serif'] = ['SimHei']

# display graphics
titles = ['original image', 'Prewitt operator']
images = [rgb_img, Prewitt]
for i in range(2):
    plt.subplot(1, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

Judging from the results, the Prewitt operator sharpens the extracted edge contours, and its edge detection result is more pronounced than that of the Roberts operator.

## Sobel operator

The Sobel operator is a discrete differential operator used for edge detection. It combines Gaussian smoothing and differentiation.

The Sobel operator adds the concept of weights on top of the Prewitt operator: neighbors at different distances have different influence on the current pixel, and the closer a neighbor is, the greater its influence. This sharpens the image and highlights edge contours.

The algorithm template is as follows:
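The standard Sobel templates (shown as a figure in the original) are:

```latex
G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad
G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
```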

The Sobel operator detects edges by using the fact that the weighted gray difference between a pixel's upper/lower and left/right neighbors reaches an extremum at an edge. It smooths noise and provides fairly accurate edge-direction information. Because it combines Gaussian smoothing and differentiation, its result is more resistant to noise. When extreme accuracy is not required, the Sobel operator is a commonly used edge detection method.

OpenCV provides the Sobel() function for this operation. The overall process is similar to the previous ones. The code is as follows:

```python
import cv2 as cv
import matplotlib.pyplot as plt

# Read image (imread takes an IMREAD_* flag, so read in color and convert below)
img = cv.imread('maliao.jpg')
rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB)

# Grayscale processed image
grayImage = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Sobel operator
x = cv.Sobel(grayImage, cv.CV_16S, 1, 0)
y = cv.Sobel(grayImage, cv.CV_16S, 0, 1)

# To uint8, image fusion
absX = cv.convertScaleAbs(x)
absY = cv.convertScaleAbs(y)
Sobel = cv.addWeighted(absX, 0.5, absY, 0.5, 0)

# Used to display Chinese labels normally
plt.rcParams['font.sans-serif'] = ['SimHei']

# display graphics
titles = ['original image', 'Sobel operator']
images = [rgb_img, Sobel]
for i in range(2):
    plt.subplot(1, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

## Laplacian operator

Laplacian operator is a second-order differential operator in n-dimensional Euclidean space, which is often used in image enhancement and edge detection.

The core idea of the Laplacian operator is to compare the gray value of the central pixel with the gray values of the surrounding pixels: if the central pixel's gray value is higher, it is increased further; otherwise it is decreased. This realizes the image sharpening operation.

In the implementation, the Laplacian operator computes gradients in four or eight directions around the central pixel of the neighborhood, sums them to judge the relationship between the central pixel's gray level and those of the other pixels in the neighborhood, and finally adjusts the pixel's gray level based on the result.

Laplacian operators come in a four-neighborhood and an eight-neighborhood variant: the four-neighborhood version takes gradients in four directions around the central pixel, while the eight-neighborhood version uses eight directions.
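The corresponding templates are:

```latex
\text{4-neighborhood: } \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \qquad
\text{8-neighborhood: } \begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix}
```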

In OpenCV, the Laplacian operator is encapsulated in the Laplacian() function. Internally it relies on Sobel operations: it adds the second derivatives of the image in the x and y directions, computed with the Sobel operator, to obtain the sharpening result of the input image.

```python
import cv2 as cv
import matplotlib.pyplot as plt

# Read image (imread takes an IMREAD_* flag, so read in color and convert below)
img = cv.imread('maliao.jpg')
rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB)

# Grayscale processed image
grayImage = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Laplacian
dst = cv.Laplacian(grayImage, cv.CV_16S, ksize=3)
Laplacian = cv.convertScaleAbs(dst)

# Used to display Chinese labels normally
plt.rcParams['font.sans-serif'] = ['SimHei']

# display graphics
titles = ['original image', 'Laplacian operator']
images = [rgb_img, Laplacian]
for i in range(2):
    plt.subplot(1, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

## Example

The edge detection algorithm is mainly based on the first and second derivatives of the image intensity, but the derivatives are usually very sensitive to noise. Therefore, it is necessary to use filters to filter noise, call image enhancement or thresholding algorithms for processing, and finally carry out edge detection.

Finally, let's apply a Gaussian filter to remove noise before running each edge detector:

```python
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

# Read image
img = cv.imread('maliao.jpg')
rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB)

# Grayscale processed image
gray_image = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Gaussian filtering
gaussian_blur = cv.GaussianBlur(gray_image, (3, 3), 0)

# Roberts operator
kernelx = np.array([[-1, 0], [0, 1]], dtype=int)
kernely = np.array([[0, -1], [1, 0]], dtype=int)
x = cv.filter2D(gaussian_blur, cv.CV_16S, kernelx)
y = cv.filter2D(gaussian_blur, cv.CV_16S, kernely)
absX = cv.convertScaleAbs(x)
absY = cv.convertScaleAbs(y)
Roberts = cv.addWeighted(absX, 0.5, absY, 0.5, 0)

# Prewitt operator
kernelx = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=int)
kernely = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=int)
x = cv.filter2D(gaussian_blur, cv.CV_16S, kernelx)
y = cv.filter2D(gaussian_blur, cv.CV_16S, kernely)
absX = cv.convertScaleAbs(x)
absY = cv.convertScaleAbs(y)
Prewitt = cv.addWeighted(absX, 0.5, absY, 0.5, 0)

# Sobel operator
x = cv.Sobel(gaussian_blur, cv.CV_16S, 1, 0)
y = cv.Sobel(gaussian_blur, cv.CV_16S, 0, 1)
absX = cv.convertScaleAbs(x)
absY = cv.convertScaleAbs(y)
Sobel = cv.addWeighted(absX, 0.5, absY, 0.5, 0)

# Laplacian operator
dst = cv.Laplacian(gaussian_blur, cv.CV_16S, ksize=3)
Laplacian = cv.convertScaleAbs(dst)

# Display images
titles = ['Source Image', 'Gaussian Image', 'Roberts Image',
          'Prewitt Image', 'Sobel Image', 'Laplacian Image']
images = [rgb_img, gaussian_blur, Roberts, Prewitt, Sobel, Laplacian]
for i in np.arange(6):
    plt.subplot(2, 3, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

## Scharr operator

Before discussing the Scharr operator, we must mention the Sobel operator introduced earlier: although the Sobel operator can extract image edges effectively, it performs poorly on weak edges.

This is because when the Sobel kernel is relatively small, its approximation of the derivative is relatively inaccurate. For example, with a 3×3 Sobel kernel, the inaccuracy is very noticeable when the gradient angle approaches the horizontal or vertical direction.

Hence the Scharr operator. The Scharr operator is an enhanced version of the Sobel operator's difference computation; the two operators are identical in principle and usage for detecting image edges.

The main idea of the Scharr operator is to enlarge the weight coefficients in the template to amplify the differences between pixel values.

The Scharr operator, also called the Scharr filter, likewise computes image differences in the x or y direction. In OpenCV it accompanies the Sobel operator. Its filter coefficients are as follows:
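The standard Scharr coefficients (shown as a figure in the original) are:

```latex
G_x = \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10 \\ -3 & 0 & 3 \end{bmatrix}, \qquad
G_y = \begin{bmatrix} -3 & -10 & -3 \\ 0 & 0 & 0 \\ 3 & 10 & 3 \end{bmatrix}
```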

The method prototype of Scharr operator in OpenCV is as follows:

```python
def Scharr(src, ddepth, dx, dy, dst=None, scale=None, delta=None, borderType=None)
```

- src: the input image
- ddepth: the desired depth of the output image; valid output depths depend on the input image
- dx: the order of the difference in the x direction, 1 or 0
- dy: the order of the difference in the y direction, 1 or 0

As you can see, Scharr() is very similar to Sobel() and identical in usage. Let's look at an example:

```python
import cv2 as cv
import matplotlib.pyplot as plt

img = cv.imread("maliao.jpg")
rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
gray_img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Scharr operator
x = cv.Scharr(gray_img, cv.CV_16S, 1, 0)  # X direction
y = cv.Scharr(gray_img, cv.CV_16S, 0, 1)  # Y direction
absX = cv.convertScaleAbs(x)
absY = cv.convertScaleAbs(y)
Scharr = cv.addWeighted(absX, 0.5, absY, 0.5, 0)

# display graphics
plt.rcParams['font.sans-serif'] = ['SimHei']
titles = ['original image', 'Scharr operator']
images = [rgb_img, Scharr]
for i in range(2):
    plt.subplot(1, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

## LOG Operator

The LOG (Laplacian of Gaussian) edge detection operator was proposed by David Courtnay Marr and Ellen Hildreth in 1980 and is also known as the Marr-Hildreth operator. It finds the optimal edge detection filter according to the image's signal-to-noise ratio. The algorithm first applies Gaussian filtering to the image and then takes its Laplacian (second derivative). Image boundaries are detected at the zero crossings of the second derivative; that is, the edges of the image or objects are obtained by detecting the zero crossings of the filtered result.

The LOG operator is effectively a combination of a Gaussian filter and a Laplacian filter: it first smooths out noise and then performs edge detection.

LOG operator is similar to the mathematical model in visual physiology, so it has been widely used in the field of image processing.

It has strong noise resistance, high boundary positioning accuracy, and good edge continuity, and it can effectively extract boundaries with weak contrast.

A common LOG operator is a 5×5 template:
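One commonly quoted 5×5 LoG template (one of several variants in the literature; note that its entries sum to zero) is:

```latex
\begin{bmatrix}
0 & 0 & -1 & 0 & 0 \\
0 & -1 & -2 & -1 & 0 \\
-1 & -2 & 16 & -2 & -1 \\
0 & -1 & -2 & -1 & 0 \\
0 & 0 & -1 & 0 & 0
\end{bmatrix}
```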

The curve relating a point's distance from the center to its weighting coefficient looks like the cross-section of a Mexican sombrero, so the LOG operator is also called the Mexican hat filter.

```python
import cv2 as cv
import matplotlib.pyplot as plt

# Read image
img = cv.imread("maliao.jpg")
rgb_img = cv.cvtColor(img, cv.COLOR_BGR2RGB)
gray_img = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# First, reduce noise with Gaussian filtering
gaussian = cv.GaussianBlur(gray_img, (3, 3), 0)

# Then use the Laplacian operator for edge detection
dst = cv.Laplacian(gaussian, cv.CV_16S, ksize=3)
LOG = cv.convertScaleAbs(dst)

# Used to display Chinese labels normally
plt.rcParams['font.sans-serif'] = ['SimHei']

# display graphics
titles = ['original image', 'LOG operator']
images = [rgb_img, LOG]
for i in range(2):
    plt.subplot(1, 2, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
```

## Summary

To sum up: edge detection algorithms are mainly based on the first and second derivatives of image intensity, but derivatives are very sensitive to noise. It is therefore usually necessary to filter out noise first, apply image enhancement or thresholding, and only then perform edge detection.