This article is from the WeChat official account [Machine Learning Alchemy]. Author's WeChat: cyx645016617

Contents of this article:

- four pooling layers;
- two normalization layers.

## 1 Pooling layers

As with convolution layers, each pooling layer comes in three variants: 1D, 2D, and 3D. Here we focus on the 2D versions used for image processing; the 1D and 3D versions follow by direct analogy.
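As a quick illustration of that analogy, here is a minimal 1D sketch (the input shape `(batch, steps, channels)` is an assumption for the example):

```python
import tensorflow as tf

# A batch of 4 sequences, 100 time steps, 8 channels.
x = tf.random.normal((4, 100, 8))
# MaxPooling1D halves the temporal dimension, just as
# MaxPooling2D halves both spatial dimensions.
y = tf.keras.layers.MaxPooling1D(pool_size=2)(x)
print(y.shape)  # (4, 50, 8)
```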

### 1.1 Max pooling layer

```python
tf.keras.layers.MaxPooling2D(
    pool_size=(2, 2), strides=None, padding="valid", data_format=None, **kwargs
)
```

By default, strides is None, which means the stride equals pool_size, so here the effective stride is 2. An example:

```python
import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
y = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))
print(y(x).shape)
# (4, 14, 14, 3)
```

If you change strides to 1:

```python
import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
y = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=1)
print(y(x).shape)
# (4, 27, 27, 3)
```

If you change padding to 'same':

```python
import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
y = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=1, padding='same')
print(y(x).shape)
# (4, 28, 28, 3)
```

The default value of padding is 'valid'. In practice, strides=2 with padding='valid' is the most common configuration.

### 1.2 Average pooling layer

This works the same way as the max pooling layer above, so we only show the API signature here.

```python
tf.keras.layers.AveragePooling2D(
    pool_size=(2, 2), strides=None, padding="valid", data_format=None, **kwargs
)
```
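A quick sketch to confirm the behaviour: average pooling produces the same output shapes as max pooling, but each output value is the mean of its window rather than the maximum.

```python
import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(x)
print(y.shape)  # (4, 14, 14, 3)

# Each output element is the mean of its 2x2 input window:
first_window_mean = tf.reduce_mean(x[0, :2, :2, 0])
print(bool(abs(y[0, 0, 0, 0] - first_window_mean) < 1e-5))  # True
```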

### 1.3 Global max pooling layer

```python
tf.keras.layers.GlobalMaxPooling2D(data_format=None, **kwargs)
```

This is equivalent to a max pooling layer whose pool_size equals the spatial size of the feature map. An example:

```python
import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
y = tf.keras.layers.GlobalMaxPooling2D()
print(y(x).shape)
# (4, 3)
```

As you can see, each channel outputs a single value. Because the input feature map is \(28 \times 28\), the global max pooling layer here is equivalent to using pool_size=28.
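We can verify this equivalence directly (the reshape just drops the 1×1 spatial dimensions left by the ordinary pooling layer):

```python
import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
g = tf.keras.layers.GlobalMaxPooling2D()(x)        # shape (4, 3)
m = tf.keras.layers.MaxPooling2D(pool_size=28)(x)  # shape (4, 1, 1, 3)
m = tf.reshape(m, (4, 3))
# Both layers take the maximum over the full 28x28 spatial extent:
print(bool(tf.reduce_all(tf.abs(g - m) < 1e-6)))  # True
```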

### 1.4 Global average pooling layer

It works analogously to the global max pooling layer above.

```python
tf.keras.layers.GlobalAveragePooling2D(data_format=None, **kwargs)
```
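As with global max pooling, each channel is reduced to a single value, here its spatial mean:

```python
import tensorflow as tf

x = tf.random.normal((4, 28, 28, 3))
y = tf.keras.layers.GlobalAveragePooling2D()(x)
print(y.shape)  # (4, 3)

# Equivalent to averaging over the two spatial axes:
manual = tf.reduce_mean(x, axis=[1, 2])
print(bool(tf.reduce_all(tf.abs(y - manual) < 1e-5)))  # True
```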

## 2 Normalization

Keras officially provides only two normalization layers: BatchNormalization and LayerNormalization. InstanceNormalization and GroupNormalization are not built in.

### 2.1 BN

```python
tf.keras.layers.BatchNormalization(
    axis=-1,
    momentum=0.99,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer="zeros",
    gamma_initializer="ones",
    moving_mean_initializer="zeros",
    moving_variance_initializer="ones",
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    renorm=False,
    renorm_clipping=None,
    renorm_momentum=0.99,
    fused=None,
    trainable=True,
    virtual_batch_size=None,
    adjustment=None,
    name=None,
    **kwargs
)
```

Let's explain the parameters in detail:

- axis: an integer indicating which dimension is the channel dimension. The default is -1, the last dimension. If data_format was set to channels_first earlier, you need to set axis=1.
- momentum: during training, the mean and variance are computed from each batch; during prediction or validation, the moving mean and moving variance accumulated during training are used instead. The update rule is moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean, and likewise for the variance.
- epsilon: a small constant added to the variance to prevent division by zero; generally not modified.
- center: if True, there is a trainable offset parameter beta. If False, the BN layer degenerates into normalization with mean 0 and standard deviation gamma. The default is True, and it is generally not modified.
- scale: similar to center; the default is True. If False, the gamma parameter is not used, and the BN layer degenerates into a normalization layer with mean beta and standard deviation 1.
- The remaining arguments are initializer and regularizer options, which are generally left at their defaults; they were explained in the previous article and will not be repeated here.
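To make the momentum parameter concrete, here is a minimal NumPy sketch of the moving-average bookkeeping that BatchNormalization performs during training. The variable names are illustrative, not Keras internals:

```python
import numpy as np

momentum = 0.99  # Keras default

# Illustrative running statistics for a layer with 3 channels.
moving_mean = np.zeros(3)
moving_var = np.ones(3)

# One training step: compute batch statistics, then update the
# running statistics with an exponential moving average.
batch = np.random.randn(32, 3)
batch_mean = batch.mean(axis=0)
batch_var = batch.var(axis=0)

moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean
moving_var = momentum * moving_var + (1 - momentum) * batch_var

# At inference time, BN normalizes with the running statistics:
eps = 1e-3
normalized = (batch - moving_mean) / np.sqrt(moving_var + eps)
print(normalized.shape)  # (32, 3)
```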

One thing to note is that the Keras API has no groups parameter like the one in the PyTorch API, so this layer cannot be turned into a GroupNormalization or InstanceNormalization layer. Those will be covered later using the TensorFlow Addons (tensorflow_addons) library.

### 2.2 LN

```python
tf.keras.layers.LayerNormalization(
    axis=-1,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer="zeros",
    gamma_initializer="ones",
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    trainable=True,
    name=None,
    **kwargs
)
```

The parameters are basically the same as those of BN. Here is an example:

```python
import tensorflow as tf
import numpy as np

x = tf.constant(np.arange(10).reshape(5, 2) * 10, dtype=tf.float32)
print(x)
y = tf.keras.layers.LayerNormalization(axis=1)
print(y(x))
```

The output is:

```
tf.Tensor(
[[ 0. 10.]
 [20. 30.]
 [40. 50.]
 [60. 70.]
 [80. 90.]], shape=(5, 2), dtype=float32)
tf.Tensor(
[[-0.99998  0.99998]
 [-0.99998  0.99998]
 [-0.99998  0.99998]
 [-0.99998  0.99998]
 [-0.99998  0.99998]], shape=(5, 2), dtype=float32)
```
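We can reproduce these numbers by hand: with axis=1, each row is normalized with its own mean and variance, and the default epsilon=0.001 folded into the denominator is why the values are ±0.99998 rather than exactly ±1:

```python
import numpy as np

x = np.arange(10).reshape(5, 2) * 10.0
mean = x.mean(axis=1, keepdims=True)  # per-row mean
var = x.var(axis=1, keepdims=True)    # per-row variance (25 for every row)
eps = 0.001                           # Keras default epsilon
y = (x - mean) / np.sqrt(var + eps)
print(y[0])  # approximately [-0.99998  0.99998]
```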

In a previous article, I introduced the detailed principles of the LN, BN, GN, and IN normalization layers. If you are not familiar with them, please see the related links at the end of this article.