¡Hola, Mundo!


backward (1)

Linear Regression - requires_grad, zero_grad, backward, step

Step 1. Initialize the weight and bias with requires_grad=True (if False, autograd will not track them, which prevents the model weights from being updated during training).
Step 2. Set the optimizer and learning rate.
Step 3. Set the hypothesis (forward pass).
Step 4. Get the cost.
Step 5. Reset the optimizer's accumulated gradients: zero_grad().
Step 6. backward(): differentiate the cost function -> get the gradients of w and b via backpropagation.
Step 7. step(): apply the learning rate to the gradients -> update w and b. optimizer(di..
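The steps above can be sketched as a minimal PyTorch training loop. The toy data (y = 2x + 1), learning rate, and epoch count are illustrative choices, not from the original post:

```python
import torch

# Toy data following y = 2x + 1 (illustrative)
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[3.0], [5.0], [7.0]])

# Step 1. Initialize weight and bias; requires_grad=True lets autograd track them
W = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

# Step 2. Set the optimizer and learning rate
optimizer = torch.optim.SGD([W, b], lr=0.01)

for epoch in range(2000):
    # Step 3. Hypothesis (forward pass)
    hypothesis = x * W + b
    # Step 4. Cost (mean squared error)
    cost = torch.mean((hypothesis - y) ** 2)
    # Step 5. Reset accumulated gradients (they add up by default)
    optimizer.zero_grad()
    # Step 6. Backward: differentiate the cost w.r.t. W and b
    cost.backward()
    # Step 7. Step: update W and b using the gradients and learning rate
    optimizer.step()
```

After training, W and b should approach 2 and 1. Note that zero_grad() is needed each iteration because PyTorch accumulates gradients into .grad rather than overwriting them.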

Deep Learning/PyTorch 2022.08.08

