# 4-1, Tensor Structure Operations

Tensor operations mainly include structural operations and mathematical operations.

Structural operations include tensor creation, index slicing, dimension transformation, and merging and splitting.

Mathematical operations mainly include scalar operations, vector operations, and matrix operations. In addition, we will introduce the broadcasting mechanism of tensor operations.

In this article, we introduce the structural operations of tensors.

### One, Create a Tensor

Many tensor creation methods are similar to numpy's array creation methods.
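A minimal sketch of the parallel (the names arr and t are just for illustration): the same creation idioms exist on both sides, and torch.from_numpy builds a tensor that shares memory with an existing array.

```python
import numpy as np
import torch

# the same creation idioms exist on both sides
print(np.zeros((2, 2)))
print(torch.zeros((2, 2)))

# from_numpy builds a tensor that shares memory with the source array
arr = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(arr)
arr[0] = 100.0
print(t)  # tensor([100., 2., 3.], dtype=torch.float64)
```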

```python
import numpy as np
import torch

a = torch.tensor([1,2,3],dtype = torch.float)
print(a)
```

```
tensor([1., 2., 3.])
```

```python
b = torch.arange(1,10,step = 2)
print(b)
```

```
tensor([1, 3, 5, 7, 9])
```

```python
c = torch.linspace(0.0,2*3.14,10)
print(c)
```

```
tensor([0.0000, 0.6978, 1.3956, 2.0933, 2.7911, 3.4889, 4.1867, 4.8844, 5.5822,
        6.2800])
```

```python
d = torch.zeros((3,3))
print(d)
```

```
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
```

```python
a = torch.ones((3,3),dtype = torch.int)
b = torch.zeros_like(a,dtype = torch.float)
print(a)
print(b)
```

```
tensor([[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]], dtype=torch.int32)
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
```

```python
b.fill_(5) #fill in place; torch.fill_(b,5) in older PyTorch versions
print(b)
```

```
tensor([[5., 5., 5.],
        [5., 5., 5.],
        [5., 5., 5.]])
```

```python
#Uniformly distributed random numbers
torch.manual_seed(0)
minval,maxval = 0,10
a = minval + (maxval-minval)*torch.rand([5])
print(a)
```

```
tensor([4.9626, 7.6822, 0.8848, 1.3203, 3.0742])
```

```python
#Normally distributed random numbers
b = torch.normal(mean = torch.zeros(3,3), std = torch.ones(3,3))
print(b)
```

```
tensor([[-1.3836,  0.2459, -0.1312],
        [-0.1785, -0.5959,  0.2739],
        [ 0.5679, -0.6731, -1.2095]])
```

```python
#Normally distributed random numbers with mean 2 and standard deviation 5
mean,std = 2,5
c = std*torch.randn((3,3))+mean
print(c)
```

```
tensor([[  8.7204,  13.9161,  -0.8323],
        [ -3.7681, -10.5115,   6.3778],
        [-11.3628,   1.8433,   4.4939]])
```

```python
#Random permutation of integers
d = torch.randperm(20)
print(d)
```

```
tensor([ 5, 15, 19, 10,  7, 17,  0,  4, 12, 16, 14, 13,  1,  3,  9,  6, 18,  2,
         8, 11])
```

```python
#Special matrices
I = torch.eye(3,3) #identity matrix
print(I)
t = torch.diag(torch.tensor([1,2,3])) #diagonal matrix
print(t)
```

```
tensor([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])
tensor([[1, 0, 0],
        [0, 2, 0],
        [0, 0, 3]])
```
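Random integers are usually drawn with torch.randint, a natural companion to the methods above; a minimal sketch (the bounds shown are arbitrary):

```python
# random integers drawn uniformly from [low, high)
e = torch.randint(low=0, high=10, size=(5,))
print(e)
```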

### Two, Index Slicing

The index slicing of tensors is almost the same as in numpy. Slices support default start/stop/step parameters as well as the ellipsis.

Some elements can be modified through indexing and slicing.

In addition, for irregular slice extraction, torch.index_select, torch.masked_select, and torch.take can be used.

If you want to obtain a new tensor by modifying some elements of a tensor, you can use torch.where, torch.masked_fill, and torch.index_fill.

```python
#Uniformly distributed random numbers
torch.manual_seed(0)
minval,maxval = 0,10
t = torch.floor(minval + (maxval-minval)*torch.rand([5,5])).int()
print(t)
```

```
tensor([[4, 7, 0, 1, 3],
        [6, 4, 8, 4, 6],
        [3, 4, 0, 1, 2],
        [5, 6, 8, 1, 2],
        [6, 9, 3, 8, 4]], dtype=torch.int32)
```
```python
#Row 0
print(t[0])
```

```
tensor([4, 7, 0, 1, 3], dtype=torch.int32)
```
```python
#Last row
print(t[-1])
```

```
tensor([6, 9, 3, 8, 4], dtype=torch.int32)
```
```python
#Row 1, column 3
print(t[1,3])
print(t[1][3])
```

```
tensor(4, dtype=torch.int32)
tensor(4, dtype=torch.int32)
```
```python
#Rows 1 to 3
print(t[1:4,:])
```

```
tensor([[6, 4, 8, 4, 6],
        [3, 4, 0, 1, 2],
        [5, 6, 8, 1, 2]], dtype=torch.int32)
```
```python
#Rows 1 to 3, columns 0 to 3, taking every second column
print(t[1:4,:4:2])
```

```
tensor([[6, 8],
        [3, 0],
        [5, 8]], dtype=torch.int32)
```
```python
#Indexing and slicing can be used to modify some elements
x = torch.tensor([[1,2],[3,4]],dtype = torch.float32,requires_grad=True)
x.data[1,:] = torch.tensor([0.0,0.0])
x
```

```
tensor([[1., 2.],
        [0., 0.]], requires_grad=True)
```
```python
a = torch.arange(27).view(3,3,3)
print(a)
```

```
tensor([[[ 0,  1,  2],
         [ 3,  4,  5],
         [ 6,  7,  8]],

        [[ 9, 10, 11],
         [12, 13, 14],
         [15, 16, 17]],

        [[18, 19, 20],
         [21, 22, 23],
         [24, 25, 26]]])
```
```python
#An ellipsis can stand for multiple colons
print(a[...,1])
```

```
tensor([[ 1,  4,  7],
        [10, 13, 16],
        [19, 22, 25]])
```
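One place where PyTorch slicing differs from numpy: a negative step is not supported, so reversing a tensor is done with torch.flip instead. A minimal sketch (v is just an illustrative tensor):

```python
v = torch.arange(5)
# v[::-1] raises an error: negative slice steps are not supported in PyTorch
print(torch.flip(v, dims=[0]))  # tensor([4, 3, 2, 1, 0])
```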

The above slicing methods are relatively regular. For irregular slice extraction, torch.index_select, torch.take, torch.gather, and torch.masked_select can be used.

Consider the example of a class transcript: there are 4 classes, 10 students in each class, and 7 courses for each student. This can be represented by a 4×10×7 tensor.

```python
minval=0
maxval=100
scores = torch.floor(minval + (maxval-minval)*torch.rand([4,10,7])).int()
print(scores)
```

```
tensor([[[55, 95, 3, 18, 37, 30, 93],
         [17, 26, 15, 3, 20, 92, 72],
         [74, 52, 24, 58, 3, 13, 24],
         [81, 79, 27, 48, 81, 99, 69],
         [56, 83, 20, 59, 11, 15, 24],
         [72, 70, 20, 65, 77, 43, 51],
         [61, 81, 98, 11, 31, 69, 91],
         [93, 94, 59, 6, 54, 18, 3],
         [94, 88, 0, 59, 41, 41, 27],
         [69, 20, 68, 75, 85, 68, 0]],

        [[17, 74, 60, 10, 21, 97, 83],
         [28, 37, 2, 49, 12, 11, 47],
         [57, 29, 79, 19, 95, 84, 7],
         [37, 52, 57, 61, 69, 52, 25],
         [73, 2, 20, 37, 25, 32, 9],
         [39, 60, 17, 47, 85, 44, 51],
         [45, 60, 81, 97, 81, 97, 46],
         [5, 26, 84, 49, 25, 11, 3],
         [7, 39, 77, 77, 1, 81, 10],
         [39, 29, 40, 40, 5, 6, 42]],

        [[50, 27, 68, 4, 46, 93, 29],
         [95, 68, 4, 81, 44, 27, 89],
         [9, 55, 39, 85, 63, 74, 67],
         [37, 39, 8, 77, 89, 84, 14],
         [52, 14, 22, 20, 67, 20, 48],
         [52, 82, 12, 15, 20, 84, 32],
         [92, 68, 56, 49, 40, 56, 38],
         [49, 56, 10, 23, 90, 9, 46],
         [99, 68, 51, 6, 74, 14, 35],
         [33, 42, 50, 91, 56, 94, 80]],

        [[18, 72, 14, 28, 64, 66, 87],
         [33, 50, 75, 1, 86, 8, 50],
         [41, 23, 56, 91, 35, 20, 31],
         [0, 72, 25, 16, 21, 78, 76],
         [88, 68, 33, 36, 64, 91, 63],
         [26, 26, 2, 60, 21, 5, 93],
         [17, 44, 64, 51, 16, 9, 89],
         [58, 91, 33, 64, 38, 47, 19],
         [66, 65, 48, 38, 19, 84, 12],
         [70, 33, 25, 58, 24, 61, 59]]], dtype=torch.int32)
```
```python
#Extract all the scores of students 0, 5, and 9 in each class
torch.index_select(scores,dim = 1,index = torch.tensor([0,5,9]))
```

```
tensor([[[55, 95, 3, 18, 37, 30, 93],
         [72, 70, 20, 65, 77, 43, 51],
         [69, 20, 68, 75, 85, 68, 0]],

        [[17, 74, 60, 10, 21, 97, 83],
         [39, 60, 17, 47, 85, 44, 51],
         [39, 29, 40, 40, 5, 6, 42]],

        [[50, 27, 68, 4, 46, 93, 29],
         [52, 82, 12, 15, 20, 84, 32],
         [33, 42, 50, 91, 56, 94, 80]],

        [[18, 72, 14, 28, 64, 66, 87],
         [26, 26, 2, 60, 21, 5, 93],
         [70, 33, 25, 58, 24, 61, 59]]], dtype=torch.int32)
```
```python
#Extract the scores of courses 1, 3, and 6 for students 0, 5, and 9 in each class
q = torch.index_select(torch.index_select(scores,dim = 1,index = torch.tensor([0,5,9]))
                       ,dim = 2,index = torch.tensor([1,3,6]))
print(q)
```

```
tensor([[[95, 18, 93],
         [70, 65, 51],
         [20, 75, 0]],

        [[74, 10, 83],
         [60, 47, 51],
         [29, 40, 42]],

        [[27, 4, 29],
         [82, 15, 32],
         [42, 91, 80]],

        [[72, 28, 87],
         [26, 60, 93],
         [33, 58, 59]]], dtype=torch.int32)
```
```python
#Extract the score of course 0 of student 0 in class 0, course 1 of student 4 in class 2,
#and course 6 of student 9 in class 3
#take flattens the input into one dimension; the output has the same shape as the index
s = torch.take(scores,torch.tensor([0*10*7+0,2*10*7+4*7+1,3*10*7+9*7+6]))
s
```

```
tensor([55, 14, 59], dtype=torch.int32)
```
```python
#Extract scores greater than or equal to 80 (boolean mask)
#The result is a 1-dimensional tensor
g = torch.masked_select(scores,scores>=80)
print(g)
```
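torch.gather, listed above but not demonstrated, selects elements along a dimension with an index tensor; a minimal sketch on a small tensor (m and idx are illustrative names). Note also that the boolean indexing scores[scores>=80] returns the same result as the masked_select call above.

```python
m = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
idx = torch.tensor([[0],
                    [2]])
# for dim=1: out[i][j] = m[i][idx[i][j]], i.e. pick one column per row
print(torch.gather(m, dim=1, index=idx))
# tensor([[1],
#         [6]])
```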

The above methods can only extract some of a tensor's element values; they cannot produce a new tensor by changing some of its element values.

If you want to obtain a new tensor by modifying some element values, you can use torch.where, torch.index_fill, and torch.masked_fill.

torch.where can be understood as a tensor version of if.

torch.index_fill selects elements with the same logic as torch.index_select.

torch.masked_fill selects elements with the same logic as torch.masked_select.

```python
#Assign 1 if the score is greater than 60, otherwise assign 0
ifpass = torch.where(scores>60,torch.tensor(1),torch.tensor(0))
print(ifpass)
```

```python
#Give full marks to all the scores of students 0, 5, and 9 in each class
torch.index_fill(scores,dim = 1,index = torch.tensor([0,5,9]),value = 100)
#Equivalent to scores.index_fill(dim = 1,index = torch.tensor([0,5,9]),value = 100)
```

```python
#Raise scores below 60 to 60
b = torch.masked_fill(scores,scores<60,60)
#Equivalent to b = scores.masked_fill(scores<60,60)
b
```
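For this particular masked_fill pattern, capping values at a lower bound, torch.clamp produces the same result; a minimal sketch reusing scores and b from above:

```python
# clamp(min=60) raises every value below 60 up to 60, like masked_fill(scores<60, 60)
b2 = torch.clamp(scores, min=60)
print(torch.equal(b, b2))  # True
```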

### Three, Dimension Transformation

The functions related to dimension transformation mainly include torch.reshape (or the view method of a tensor), torch.squeeze, torch.unsqueeze, and torch.transpose.

torch.reshape can change the shape of a tensor.

torch.squeeze can reduce the number of dimensions.

torch.unsqueeze can increase the number of dimensions.

torch.transpose can swap dimensions.

Calling the view method of a tensor sometimes fails (for example, on a non-contiguous tensor); the reshape method can be used instead.

```python
torch.manual_seed(0)
minval,maxval = 0,255
a = (minval + (maxval-minval)*torch.rand([1,3,3,2])).int()
print(a.shape)
print(a)
```

```
torch.Size([1, 3, 3, 2])
tensor([[[[126, 195],
          [ 22,  33],
          [ 78, 161]],

         [[124, 228],
          [116, 161],
          [ 88, 102]],

         [[  5,  43],
          [ 74, 132],
          [177, 204]]]], dtype=torch.int32)
```
```python
# Change to a tensor of shape (3,6)
b = a.view([3,6]) #torch.reshape(a,[3,6])
print(b.shape)
print(b)
```

```
torch.Size([3, 6])
tensor([[126, 195,  22,  33,  78, 161],
        [124, 228, 116, 161,  88, 102],
        [  5,  43,  74, 132, 177, 204]], dtype=torch.int32)
```
```python
# Change back to a tensor of shape [1,3,3,2]
c = torch.reshape(b,[1,3,3,2]) # b.view([1,3,3,2])
print(c)
```

```
tensor([[[[126, 195],
          [ 22,  33],
          [ 78, 161]],

         [[124, 228],
          [116, 161],
          [ 88, 102]],

         [[  5,  43],
          [ 74, 132],
          [177, 204]]]], dtype=torch.int32)
```
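To see when view fails while reshape still works, a minimal sketch (m and mt are illustrative names): transposing yields a non-contiguous tensor, and view requires contiguous memory.

```python
m = torch.randn(2, 3)
mt = m.t()                 # transpose produces a non-contiguous tensor
print(mt.is_contiguous())  # False
# mt.view(-1)              # would raise a RuntimeError here
print(mt.reshape(-1))      # reshape copies when necessary, so it succeeds
print(mt.contiguous().view(-1))  # making it contiguous first also works
```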

If a tensor has size 1 in some dimension, torch.squeeze can be used to eliminate that dimension.

torch.unsqueeze does the opposite of torch.squeeze.

```python
a = torch.tensor([[1.0,2.0]])
s = torch.squeeze(a)
print(a)
print(s)
print(a.shape)
print(s.shape)
```

```
tensor([[1., 2.]])
tensor([1., 2.])
torch.Size([1, 2])
torch.Size([2])
```
```python
#Insert a dimension of length 1 at dimension 0
d = torch.unsqueeze(s,axis = 0)
print(s)
print(d)

print(s.shape)
print(d.shape)
```

```
tensor([1., 2.])
tensor([[1., 2.]])
torch.Size([2])
torch.Size([1, 2])
```
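squeeze also accepts a dim argument, in which case it removes that dimension only if its size is 1; a minimal sketch:

```python
x = torch.zeros(1, 2, 1)
print(torch.squeeze(x).shape)         # torch.Size([2]): all size-1 dims removed
print(torch.squeeze(x, dim=0).shape)  # torch.Size([2, 1]): only dim 0 removed
print(torch.squeeze(x, dim=1).shape)  # torch.Size([1, 2, 1]): dim 1 has size 2, unchanged
```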

torch.transpose can swap the dimensions of a tensor; it is often used to transform the storage format of images.

For a two-dimensional matrix, the transpose method matrix.t() is usually called, which is equivalent to torch.transpose(matrix,0,1).

```python
minval=0
maxval=255
# Batch,Height,Width,Channel
data = torch.floor(minval + (maxval-minval)*torch.rand([100,256,256,4])).int()
print(data.shape)

# Convert to PyTorch's default picture format Batch,Channel,Height,Width
# Two exchanges are needed
data_t = torch.transpose(torch.transpose(data,1,2),1,3)
print(data_t.shape)
```

```
torch.Size([100, 256, 256, 4])
torch.Size([100, 4, 256, 256])
```
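The same conversion can be done in a single call with permute, which reorders all dimensions at once; a minimal sketch reusing data and data_t from above:

```python
# permute lists the desired order of the old dimensions
data_p = data.permute(0, 3, 1, 2)   # Batch,Channel,Height,Width
print(data_p.shape)                 # torch.Size([100, 4, 256, 256])
print(torch.equal(data_t, data_p))  # True
```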
```python
matrix = torch.tensor([[1,2,3],[4,5,6]])
print(matrix)
print(matrix.t()) #Equivalent to torch.transpose(matrix,0,1)
```

```
tensor([[1, 2, 3],
        [4, 5, 6]])
tensor([[1, 4],
        [2, 5],
        [3, 6]])
```

### Four, Merge and Split

You can merge multiple tensors with torch.cat and torch.stack, and split one tensor into multiple tensors with torch.split.

torch.cat and torch.stack differ slightly: torch.cat concatenates along an existing dimension and does not increase the number of dimensions, while torch.stack stacks along a new dimension and does increase the number of dimensions.

```python
a = torch.tensor([[1.0,2.0],[3.0,4.0]])
b = torch.tensor([[5.0,6.0],[7.0,8.0]])
c = torch.tensor([[9.0,10.0],[11.0,12.0]])

abc_cat = torch.cat([a,b,c],dim = 0)
print(abc_cat.shape)
print(abc_cat)
```

```
torch.Size([6, 2])
tensor([[ 1.,  2.],
        [ 3.,  4.],
        [ 5.,  6.],
        [ 7.,  8.],
        [ 9., 10.],
        [11., 12.]])
```
```python
abc_stack = torch.stack([a,b,c],axis = 0) #In torch the parameter names dim and axis can be used interchangeably
print(abc_stack.shape)
print(abc_stack)
```

```
torch.Size([3, 2, 2])
tensor([[[ 1.,  2.],
         [ 3.,  4.]],

        [[ 5.,  6.],
         [ 7.,  8.]],

        [[ 9., 10.],
         [11., 12.]]])
```
```python
torch.cat([a,b,c],axis = 1)
```

```
tensor([[ 1.,  2.,  5.,  6.,  9., 10.],
        [ 3.,  4.,  7.,  8., 11., 12.]])
```
```python
torch.stack([a,b,c],axis = 1)
```

```
tensor([[[ 1.,  2.],
         [ 5.,  6.],
         [ 9., 10.]],

        [[ 3.,  4.],
         [ 7.,  8.],
         [11., 12.]]])
```
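One way to see the relationship between the two: stacking along a new dimension is equivalent to unsqueezing each tensor at that dimension and then concatenating; a minimal sketch:

```python
manual_stack = torch.cat([a.unsqueeze(0), b.unsqueeze(0), c.unsqueeze(0)], dim=0)
print(torch.equal(manual_stack, torch.stack([a, b, c], dim=0)))  # True
```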

torch.split is the inverse operation of torch.cat. It can split a tensor evenly by specifying the size of each chunk, or unevenly by specifying a list of chunk sizes.

```python
print(abc_cat)
a,b,c = torch.split(abc_cat,split_size_or_sections = 2,dim = 0) #each chunk has 2 rows
print(a)
print(b)
print(c)
```

```
tensor([[ 1.,  2.],
        [ 3.,  4.],
        [ 5.,  6.],
        [ 7.,  8.],
        [ 9., 10.],
        [11., 12.]])
tensor([[1., 2.],
        [3., 4.]])
tensor([[5., 6.],
        [7., 8.]])
tensor([[ 9., 10.],
        [11., 12.]])
```

```python
print(abc_cat)
p,q,r = torch.split(abc_cat,split_size_or_sections = [4,1,1],dim = 0) #chunk sizes are [4,1,1]
print(p)
print(q)
print(r)
```

```
tensor([[ 1.,  2.],
        [ 3.,  4.],
        [ 5.,  6.],
        [ 7.,  8.],
        [ 9., 10.],
        [11., 12.]])
tensor([[1., 2.],
        [3., 4.],
        [5., 6.],
        [7., 8.]])
tensor([[ 9., 10.]])
tensor([[11., 12.]])
```
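If you prefer to specify the number of chunks rather than the size of each chunk, torch.chunk covers that case; a minimal sketch (a2, b2, c2 are illustrative names):

```python
# chunk splits into the given number of (near-)equal parts
a2, b2, c2 = torch.chunk(abc_cat, chunks=3, dim=0)
print(a2)
# tensor([[1., 2.],
#         [3., 4.]])
```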

If this book is helpful to you and you want to encourage the author, remember to add a star to this project and share it with your friends 😊!

If you want to communicate further with the author about the content of this book, please leave a message under the WeChat official account "Algorithm Food House" (算法美食屋). The author has limited time and energy and will respond as appropriate.

You can also reply with the keyword "Add group" in the backend of the official account to join the reader exchange group and discuss with other readers.
