Deep Learning

Keras-Preprocessing, One-hot encoding, Word Embedding, Modeling, Compile

Keras

1. Preprocessing

from tensorflow.keras.preprocessing.text import Tokenizer

t = Tokenizer()
fit_text = 'The earth is an awesome place live'
t.fit_on_texts([fit_text])   # build the vocabulary from the fit text

test_text = 'The earth is an great place live'
sequences = t.texts_to_sequences([test_text])[0]

sequences
>>> [1, 2, 3, 4, 6, 7]
t.word_index
>>> {'an': 4, 'awesome': 5, 'earth': 2, 'is': 3, 'live': 7, 'place': 6, 'the': 1}

Note that 'great' is not in the fitted vocabulary, so texts_to_sequences silently drops it; that is why index 5 ('awesome') never appears in the output sequence.
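The title also lists one-hot encoding, word embeddings, modeling, and compile. A minimal sketch of how those steps could continue from the tokenizer above; the +1 vocabulary convention is standard Keras, but the model shape and hyperparameters are illustrative assumptions, not taken from the original post:

from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense
from tensorflow.keras.utils import to_categorical

# One-hot encoding: index 0 is reserved for padding, hence the +1.
vocab_size = len(t.word_index) + 1
one_hot = to_categorical(sequences, num_classes=vocab_size)

# Pad to a fixed length so sequences can be batched.
padded = pad_sequences([sequences], maxlen=7, padding='post')

# Word embedding + modeling + compile: a tiny binary classifier.
model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=8, input_length=7),
    Flatten(),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])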

Dropout, Gradient Clipping, Weight Initialization, Xavier, He, Batch Normalization, Internal Covariate Shift, Layer Normalization

Dropout

- One way to avoid overfitting: during training, only a random subset of neurons is used rather than all of them.
- It has no learnable parameters and only one hyperparameter (the drop probability).
- It does not behave the same during training and testing. To understand this, consider a simple fully connected layer containing 10 neurons with a dropout probability of 0.5. During training, each of the 10 neurons is dropped (set to zero) independently with probability 0.5, so on average only 5 are active; at test time, all 10 neurons are used, and activations are scaled so the expected output matches training (see the sketch below).
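These properties are easy to verify directly. A minimal sketch in tf.keras (the all-ones input tensor is illustrative; tf.keras implements inverted dropout, which applies the compensating scaling at training time rather than at test time):

import tensorflow as tf

drop = tf.keras.layers.Dropout(rate=0.5)  # the single hyperparameter: drop probability
x = tf.ones((1, 10))                      # one sample, 10 activations

# Training mode: each activation is zeroed with probability 0.5 and the
# survivors are scaled by 1 / (1 - rate) = 2.0, keeping the expected sum.
print(drop(x, training=True).numpy())

# Test mode: dropout is the identity; all 10 neurons stay active.
print(drop(x, training=False).numpy())

# No learnable parameters.
print(drop.count_params())  # 0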
