## Initialization of various tensors

### Create a tensor of a specific type

```python
a = torch.FloatTensor(2, 3)
a = torch.DoubleTensor(2, 3)
...
```

### Set the default tensor type in PyTorch

```python
torch.set_default_tensor_type(torch.DoubleTensor)
```

### Change tensor type

```python
a.float()
```

### Common random initializations

```python
torch.randn_like(a)           # same shape as a, standard normal
torch.rand(3, 3)              # (3,3) matrix, uniform on [0, 1)
torch.randn(3, 3)             # (3,3) matrix, standard normal N(0, 1)
torch.randint(1, 10, [2, 2])  # (2,2) int matrix, values in [1, 10)
```

### Initialize according to different mean and variance

```python
torch.normal(mean=torch.full([10], 0.), std=torch.arange(1, 0, -0.1))
```
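Note that mean and std must contain the same number of elements, since each output draws from its own normal distribution. A minimal sketch (tensor sizes are illustrative):

```python
import torch

# one sample per (mean_i, std_i) pair; both tensors must have equal length
mean = torch.full([10], 0.)
std = torch.arange(1, 0, -0.1)  # 1.0, 0.9, ..., 0.1
samples = torch.normal(mean=mean, std=std)
print(samples.shape)  # torch.Size([10])
```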

### Initialize at intervals

```python
torch.linspace(0, 10, steps=3)  # 3 evenly spaced values from 0 to 10
torch.arange(1, 10, 5)          # from 1 up to 10 (exclusive), step 5
```

### Create identity matrix

```python
torch.eye(4, 4)
```

### Create a scrambled sequence

```python
torch.randperm(10)
```

### Returns the number of tensor elements

```python
torch.numel(torch.rand(2, 2))
```

## Dimension operation

### Matrix concatenation

```python
torch.cat((x, x), 0)    # joins along an existing dimension
torch.stack((x, x), 0)  # unlike cat, stack creates a new dimension first
```
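The difference is easiest to see in the resulting shapes (x here is an arbitrary example tensor):

```python
import torch

x = torch.rand(2, 3)

cat_res = torch.cat((x, x), 0)      # joins along existing dim 0 -> (4, 3)
stack_res = torch.stack((x, x), 0)  # adds a new dim first       -> (2, 2, 3)

print(cat_res.shape, stack_res.shape)
```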

### Matrix splitting

chunk splits by count: pass N to split the tensor into N pieces along the given dimension.

```python
torch.chunk(a, N, dim)
```

split has two uses. Passing a number gives pieces of that size along the dimension (so roughly total size / number pieces); passing a list splits into pieces whose sizes are the entries of the list.

```python
torch.split(a, [1, 2], dim)
```
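A sketch of the three modes side by side (example tensor assumed):

```python
import torch

a = torch.arange(12).reshape(3, 4)

chunks = torch.chunk(a, 3, dim=0)       # N=3 pieces of shape (1, 4)
by_size = torch.split(a, 2, dim=1)      # pieces of width 2 -> 2 pieces
p1, p2 = torch.split(a, [1, 2], dim=0)  # explicit sizes -> (1, 4) and (2, 4)

print(len(chunks), len(by_size), p1.shape, p2.shape)
```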

### Matrix selection

Select N consecutive rows or columns along a dimension: starting at `start`, take `length` entries.

```python
a.narrow(dim, start, length)
```

Select arbitrary rows or columns along dimension dim; the indices are given as a LongTensor, not a Python list.

```python
a.index_select(dim, index)
```

### Various selections

```python
a[:, 1:10, ::2, 1:10:2]
```

### Select after flattening

```python
torch.take(tensor, index)  # index is a LongTensor into the flattened view
```
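A minimal example (values chosen for illustration):

```python
import torch

a = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])

# take flattens a to [1, 2, 3, 4, 5, 6] first, then indexes that 1-D view
vals = torch.take(a, torch.tensor([0, 2, 5]))
print(vals)  # tensor([1, 3, 6])
```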

### Dimensional change

```python
a.view(1, 5)
a.reshape(1, 5)
```

### Dimension decrease and increase

unsqueeze(i) inserts a new dimension of size 1 at position i; negative positions count from the end, so 0 inserts at the front and -1 appends at the back. squeeze(i) removes dimension i if its size is 1 (with no argument, all size-1 dimensions are removed).

```python
a.unsqueeze(1)
a.squeeze(1)
```
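The shape changes, sketched on an example tensor:

```python
import torch

a = torch.rand(3, 4)

b = a.unsqueeze(0)   # insert at the front -> (1, 3, 4)
c = a.unsqueeze(-1)  # append at the back  -> (3, 4, 1)
d = b.squeeze(0)     # drop the size-1 dim -> (3, 4)

print(b.shape, c.shape, d.shape)
```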

### Dimensional expansion

```python
a.expand(*sizes)
```

expand can only stretch a dimension of size 1 to size N; dimensions of any other size cannot be expanded. Passing -1 leaves a dimension unchanged, and no data is copied.

```python
a.repeat(*sizes)
```

The other way is repeat, which expands by copying: each argument says how many times the corresponding dimension is tiled.
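expand vs repeat on a size-(1, 3) example:

```python
import torch

a = torch.rand(1, 3)

b = a.expand(4, 3)   # stretches the size-1 dim, no data copy
c = a.expand(4, -1)  # -1 keeps that dimension unchanged
d = a.repeat(4, 2)   # tiles by copying: (1*4, 3*2) = (4, 6)

print(b.shape, c.shape, d.shape)
```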

### Dimension exchange

```python
a.transpose(2, 3)      # swap exactly two dimensions
a.permute(0, 2, 1, 3)  # reorder all dimensions at once
```
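A shape-only sketch of the two swaps (the dimension choices are illustrative):

```python
import torch

a = torch.rand(2, 3, 4, 5)

b = a.transpose(2, 3)      # swap two dims     -> (2, 3, 5, 4)
c = a.permute(0, 2, 1, 3)  # reorder all dims  -> (2, 4, 3, 5)

print(b.shape, c.shape)
```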

## Mathematical operation

### Fundamentals of operation

Addition, subtraction and division can be written directly with operators. Multiplication needs extra care, because there are two different products:

- mul (or *) multiplies the corresponding elements of the matrices
- mm is standard matrix multiplication, for 2-D tensors only
- matmul is matrix multiplication for tensors of any dimension; @ is its operator overload
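The three products on a small example (values chosen for illustration):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[5., 6.], [7., 8.]])

elem = a * b    # element-wise, same as a.mul(b)
mat = a.mm(b)   # standard 2-D matrix product
also = a @ b    # matmul operator, same result here

print(elem)
print(mat)
```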

### Rounding

- floor() rounds down
- ceil() rounds up
- trunc() keeps the integer part
- frac() keeps the fractional part

### Numerical clipping

```python
a.clamp(min)       # values below min become min
a.clamp(min, max)  # values outside [min, max] are clipped to the bounds
```
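clamp on concrete values:

```python
import torch

a = torch.tensor([-2., 0.5, 3., 10.])

low_only = a.clamp(min=0)  # -> [0.0, 0.5, 3.0, 10.0]
both = a.clamp(0, 5)       # -> [0.0, 0.5, 3.0, 5.0]

print(low_only, both)
```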

### Product of elements

```python
a.prod()
```

### Linear algebraic correlation

- trace: trace of the matrix
- diag: main diagonal elements
- triu / tril: upper / lower triangular part
- t: transpose
- dot / cross: inner product and cross product

## Other

### NumPy / Tensor conversion

```python
np_data = np.arange(6).reshape((2, 3))
torch_data = torch.from_numpy(np_data)
tensor2array = torch_data.numpy()
```

### Type judgment

```python
isinstance(a, torch.FloatTensor)
```

### Broadcasting

Broadcasting matches shapes from the last dimension backwards. Two dimensions are compatible when they are equal in size or one of them is 1. A tensor with fewer dimensions than the other can still broadcast, as long as its dimensions match from back to front.

For example:

- (1,2,3,4) and (2,3,4) can be broadcast
- (1,2,3,4) and (1,1,1) can be broadcast
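Both example pairs can be checked directly:

```python
import torch

a = torch.rand(1, 2, 3, 4)
b = torch.rand(2, 3, 4)  # fewer dims, but matches from the right
c = torch.rand(1, 1, 1)  # size-1 dims stretch to match

print((a + b).shape)  # torch.Size([1, 2, 3, 4])
print((a + c).shape)  # torch.Size([1, 2, 3, 4])
```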

### topk

topk returns the largest k values and their indices along a given dimension; set largest=False to get the smallest k instead.
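A minimal sketch (values are illustrative):

```python
import torch

a = torch.tensor([3., 1., 4., 1., 5.])

values, indices = a.topk(2)               # the two largest
small_vals, _ = a.topk(2, largest=False)  # the two smallest

print(values, indices)  # tensor([5., 4.]) tensor([4, 2])
```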

### where condition selection

Select elements from matrix X or matrix Y depending on whether the condition holds:

```python
torch.where(condition > 0.5, X, Y)
```
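On concrete values (names are illustrative):

```python
import torch

cond = torch.tensor([0.2, 0.7, 0.9])
X = torch.ones(3)
Y = torch.zeros(3)

res = torch.where(cond > 0.5, X, Y)  # take X where true, Y where false
print(res)  # tensor([0., 1., 1.])
```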

### gather

The essence is a table lookup: the first parameter is the table, the second the dimension, the third the index tensor to look up.

Along the chosen dimension dim, each output element is read from input at the position given by the corresponding entry of index.

```python
torch.gather(input, dim, index, out=None)
```
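A worked example (values chosen for illustration):

```python
import torch

table = torch.tensor([[1, 2],
                      [3, 4]])
index = torch.tensor([[0, 0],
                      [1, 0]])

# along dim=1: out[i][j] = table[i][index[i][j]]
out = torch.gather(table, 1, index)
print(out)  # tensor([[1, 1], [4, 3]])
```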