- randint
randint(low, high, size): returns a tensor of the given size filled with random integers drawn uniformly from [low, high). Note that the high bound is exclusive.
torch.randint(3, 5, (3,))
>>>
tensor([4, 3, 4])
torch.randint(3, 10, (2, 2))
>>>
tensor([[4, 5],
        [6, 7]])
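Since the high bound is exclusive, torch.randint(0, n, size) is a convenient way to draw labels from exactly n classes (a small sketch; the seed is an assumption):
import torch
torch.manual_seed(0)                 # assumed seed, for reproducibility
labels = torch.randint(0, 5, (3,))   # each value is one of 0..4; 5 itself never appears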
- scatter_
scatter_(dim, index, value): in-place; writes value into the tensor at the positions along dim given by index. Handy for one-hot encoding labels.
y.unsqueeze(1)
>>>
tensor([[3],
        [1],
        [2]])
y_one_hot.scatter_(1, y.unsqueeze(1), 1)
>>>
tensor([[0., 0., 0., 1., 0.],   # <-- 3
        [0., 1., 0., 0., 0.],   # <-- 1
        [0., 0., 1., 0., 0.]])  # <-- 2
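Putting it together, a minimal self-contained sketch (assuming 3 samples and 5 classes, matching the output above):
import torch

y = torch.tensor([3, 1, 2])        # integer class labels
y_one_hot = torch.zeros(3, 5)      # 3 samples x 5 classes, all zeros
# dim=1: for each row, write 1 at the column index given by y
y_one_hot.scatter_(1, y.unsqueeze(1), 1)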
- cost (multi-class classification)
The four cost functions below all compute the same cross-entropy value.
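The snippets below assume a setup like the following (model, x, y_train, and the shapes are assumptions; the exact loss value depends on the random initialization):
import torch
import torch.nn.functional as F

x = torch.rand(3, 4)                  # 3 samples, 4 features (assumed shapes)
y_train = torch.tensor([3, 1, 2])     # integer class labels
y_one_hot = torch.zeros(3, 5).scatter_(1, y_train.unsqueeze(1), 1)
model = torch.nn.Linear(4, 5)         # produces logits for 5 classes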
- cost function 1 : log
Cross-entropy written out by hand: softmax, then log, masked by the one-hot labels, summed over classes, and averaged over samples.
(y_one_hot * -torch.log(F.softmax(model(x), dim=1))).sum(dim=1).mean()
>>>
tensor(1.7024, grad_fn=<MeanBackward0>)
- cost function 2 : log_softmax
F.log_softmax fuses the softmax and the log into a single, numerically more stable call.
(y_one_hot * - F.log_softmax(model(x), dim=1)).sum(dim=1).mean()
>>>
tensor(1.7024, grad_fn=<MeanBackward0>)
- cost function 3 : nll_loss
F.nll_loss (Negative Log Likelihood) takes log-probabilities and integer class labels directly, so the one-hot encoding is no longer needed.
F.nll_loss(F.log_softmax(model(x), dim=1), y_train)
>>>
tensor(1.7024, grad_fn=<NllLossBackward0>)
- cost function 4 : F.cross_entropy(model(x), y)
F.cross_entropy combines log_softmax and nll_loss in a single call, so it takes raw logits and integer labels.
F.cross_entropy(model(x), y_train)
>>>
tensor(1.7024, grad_fn=<NllLossBackward0>)
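As a quick check that all four formulations agree (a sketch using the assumed setup above; the printed loss will differ from 1.7024 because it depends on the random weights):
logits = model(x)
cost1 = (y_one_hot * -torch.log(F.softmax(logits, dim=1))).sum(dim=1).mean()
cost2 = (y_one_hot * -F.log_softmax(logits, dim=1)).sum(dim=1).mean()
cost3 = F.nll_loss(F.log_softmax(logits, dim=1), y_train)
cost4 = F.cross_entropy(logits, y_train)
print(torch.allclose(cost1, cost4), torch.allclose(cost2, cost3))  # True True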