Deep Learning 60

Forward Propagation, Batch Gradient, Stochastic Gradient Descent, SGD, Mini Batch Gradient Descent, Momentum, Adagrad, Rprop, RMSprop, Adam, Epoch, Batch size, Iteration

Forward Propagation Input layer --> hidden layer --> activation function --> output layer, in order. The input data is fed in the forward direction through the network. Each hidden layer accepts the input data, processes it with its activation function, and passes the result to the successive layer. In order to generate output, the input data should ..
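The flow described in the preview (input --> hidden layer --> activation --> output) can be sketched in a few lines of NumPy. This is a minimal illustration, not the post's own code; the sigmoid activation, layer sizes, and random weights are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    # activation function applied after each layer's weighted sum
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Feed input x forward through the network:
    each layer accepts the input, processes it with the
    activation function, and passes it to the next layer."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# illustrative shapes (assumed): 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]
y = forward(np.array([1.0, 0.5, -0.2]), weights, biases)
```

With sigmoid activations the final output lands in (0, 1), which is why this shape is often used for binary outputs.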

Deep Learning 2021.04.06

Perceptron, Step function, Single-Layer Perceptron, Multi-Layer Perceptron, DNN

Perceptron It is a linear classifier, an algorithm for supervised learning of binary classifiers. Input (multiple x) --> Output (one y). x: input, W: weight, y: output. Each x has its own weight; the larger w is, the more important x is. Step function ∑(W * x) >= threshold (θ) --> output (y): 1; ∑(W * x) < threshold (θ) --> output (y): 0. The threshold (θ) can be expressed as a bias b, such as Single-Layer Perceptron It can learn only linearly separable p..
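The step-function rule above (output 1 when ∑w·x reaches the threshold θ, else 0) fits in a few lines. A sketch, with AND-gate weights chosen as an assumed example of a linearly separable problem:

```python
import numpy as np

def perceptron(x, w, theta):
    # step function: fire 1 if the weighted sum reaches the threshold
    return 1 if np.dot(w, x) >= theta else 0

def perceptron_bias(x, w, b):
    # equivalent form with the threshold moved into a bias b = -theta:
    # y = 1 if w.x + b >= 0, else 0
    return 1 if np.dot(w, x) + b >= 0 else 0

# AND gate as a linearly separable example (weights are an assumption)
w, theta = np.array([0.5, 0.5]), 0.7
print([perceptron(np.array(p), w, theta)
       for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1]
```

Only the (1, 1) input gives ∑w·x = 1.0 >= 0.7, so only it fires; XOR has no such w and θ, which is the classic case a single-layer perceptron cannot learn.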

Deep Learning 2021.03.31

LSA, SVD, Orthogonal matrix, Transposed matrix, Identity matrix, Inverse matrix, Diagonal matrix, Truncated SVD

LSA Latent Semantic Analysis, a substitute for DTM and TF-IDF (2021.03.10 - [Deep Learning] - BoW, CountVectorizer, fit_transform, vocabulary_, DTM, TDM, TF-IDF, TfidfVectorizer, isnull, fillna, pd.Series), which do not consider the meaning of terms. It applies SVD to the DTM or TF-IDF matrix and reduces its dimensions, eliciting the latent meaning of words. 1. SVD Singular Value Decomposition, it refers to the decomp..
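The SVD-on-a-DTM step that the preview describes can be sketched with NumPy. The toy term-document matrix and the choice k = 2 are assumptions for illustration; truncated SVD keeps only the k largest singular values to get a low-rank latent-semantic approximation.

```python
import numpy as np

# toy document-term matrix (assumed): rows = terms, columns = documents
A = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])

# full SVD: A = U @ diag(s) @ Vt, with U and V orthogonal
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# truncated SVD: keep only the k largest singular values,
# reducing the dimensionality of the term/document representations
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
# U[:, :k] embeds terms, Vt[:k, :] embeds documents,
# both in a k-dimensional latent semantic space
```

The rank-k matrix A_k is the closest rank-k approximation of A in the least-squares sense, which is what lets LSA surface latent relationships that raw counts miss.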

Deep Learning 2021.03.11