- model.parameters()
It stores the model's learnable parameters W and b, and it can be passed directly to an optimizer.
Let's create a linear model with one input dimension and one output dimension.
import torch
import torch.nn as nn

model = nn.Linear(1, 1)
print(model)
>>>
Linear(in_features=1, out_features=1, bias=True)
Now let's check the weight and bias.
The first value is W and the second is b. Both are learnable (requires_grad=True).
print(list(model.parameters()))
>>>
[Parameter containing:
tensor([[0.5153]], requires_grad=True), Parameter containing:
tensor([-0.4414], requires_grad=True)]
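If it is unclear which tensor is which, named_parameters() lists each parameter together with its registered name (nn.Linear registers them as "weight" and "bias"). A quick check, as a sketch:
for name, param in model.named_parameters():
    print(name, param.shape, param.requires_grad)
>>>
weight torch.Size([1, 1]) True
bias torch.Size([1]) True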
Then, declare the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
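To see how the optimizer actually uses these parameters, here is a minimal training sketch. The data x, y, the epoch count, and the MSELoss criterion are illustrative choices, not part of the original example:
# made-up data roughly following y = 2x
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])
criterion = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()        # clear gradients from the previous step
    pred = model(x)              # forward pass: W*x + b
    loss = criterion(pred, y)    # mean squared error
    loss.backward()              # compute gradients for W and b
    optimizer.step()             # SGD updates W and b using those gradients
After training, printing list(model.parameters()) again should show W close to 2 and b close to 0.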