¡Hola, Mundo!

Attention

The idea behind the attention mechanism is that, at every decoding step, the decoder refers back to the entire input sequence from the encoder. It focuses on the encoder words that are most related to the word the decoder is about to predict. The softmax result tells the decoder how much each input word helps when it predicts the output word. In the figure, the size of each red rectangle represents how much that word helps the prediction: the larger the rectangle, the more helpful the word.
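A minimal sketch of this idea in NumPy, assuming dot-product scoring (the preview does not say which score function the post uses, and all names and shapes below are illustrative): the decoder state is compared against every encoder state, softmax turns the scores into weights, and the weighted sum gives the context vector the decoder uses for prediction.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 4))  # encoder hidden states h_1..h_5, dim 4
decoder_state = rng.normal(size=(4,))     # decoder state s_t at the current step

scores = encoder_states @ decoder_state   # dot-product score per input word, shape (5,)
weights = softmax(scores)                 # attention weights, non-negative, sum to 1
context = weights @ encoder_states        # weighted sum of encoder states, shape (4,)

print("attention weights:", weights.round(3))
print("context vector:", context.round(3))
```

The weights are what the red rectangles visualize: a large weight means the decoder is leaning heavily on that input word for the current prediction.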

Deep Learning 2023.03.31