- validation_data
Shows the loss and accuracy on held-out data during training so you can see whether training is going well; the model is not trained on the validation data. Once the validation loss turns from decreasing to increasing, it is a signal of overfitting.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(3, input_dim=4, activation='softmax'))  # 4 features in, 3 classes out
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=30, batch_size=32,
                    validation_data=(X_test, y_test))
>>>
Epoch 4/30
4/4 [==============================] - 0s 13ms/step - loss: 1.3460 - accuracy: 0.6750 - val_loss: 1.6881 - val_accuracy: 0.5667
...
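The "val_loss turns from decreasing to increasing" signal can be checked programmatically from `history.history`. A minimal sketch, assuming a hypothetical history dict shaped like the one Keras returns:

```python
# Hypothetical history.history-style dict: training loss keeps falling,
# but val_loss turns upward after epoch 4 -> overfitting starts there.
history = {
    "loss":     [1.35, 0.98, 0.74, 0.58, 0.47, 0.40],
    "val_loss": [1.40, 1.05, 0.88, 0.85, 0.91, 1.02],
}

def first_overfit_epoch(val_loss):
    """Return the 1-based epoch after which val_loss starts rising, or None."""
    for i in range(1, len(val_loss)):
        if val_loss[i] > val_loss[i - 1]:
            return i  # epoch i is the last one before the upturn
    return None

print(first_overfit_epoch(history["val_loss"]))  # -> 4
```

In practice Keras's `EarlyStopping` callback automates exactly this check via its `monitor='val_loss'` and `patience` arguments.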
- to_categorical
Converts a class vector (integers) to a one-hot encoded matrix.
- num_classes : Total number of classes. If None, this would be inferred as max(y) + 1.
sub_text = "점심 먹으러 갈래 메뉴는 햄버거 최고야"
encoded = [2, 5, 1, 6, 3, 7]  # integer-encoded tokens of sub_text
one_hot = to_categorical(encoded)
print(one_hot)
>>>
[[0. 0. 1. 0. 0. 0. 0. 0.] # one-hot vector for index 2
[0. 0. 0. 0. 0. 1. 0. 0.] # one-hot vector for index 5
[0. 1. 0. 0. 0. 0. 0. 0.] # one-hot vector for index 1
[0. 0. 0. 0. 0. 0. 1. 0.] # one-hot vector for index 6
[0. 0. 0. 1. 0. 0. 0. 0.] # one-hot vector for index 3
[0. 0. 0. 0. 0. 0. 0. 1.]] # one-hot vector for index 7
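The `max(y) + 1` inference and the `num_classes` argument are easy to see in a pure-NumPy sketch of what `to_categorical` computes (a minimal reimplementation for illustration, not the Keras source):

```python
import numpy as np

def one_hot_sketch(y, num_classes=None):
    """NumPy sketch of to_categorical: one one-hot row per integer label."""
    y = np.asarray(y)
    if num_classes is None:
        num_classes = y.max() + 1  # inferred, as the Keras docs describe
    return np.eye(num_classes)[y]

encoded = [2, 5, 1, 6, 3, 7]
print(one_hot_sketch(encoded).shape)       # (6, 8): 8 classes inferred from max(y)+1
print(one_hot_sketch(encoded, 10).shape)   # (6, 10): extra columns stay all-zero
```

Passing an explicit `num_classes` is useful when a batch happens not to contain the highest label, so every batch still gets vectors of the same width.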
- evaluate
Returns the loss and metric values for the model on the given test data, in the order [loss, metric1, metric2, ...].
model.evaluate(X_test, y_test, batch_size=12, verbose=0)
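What those returned numbers mean can be reproduced by hand. A toy sketch computing the same categorical cross-entropy loss and accuracy that `evaluate` would report for the model above (toy labels and softmax outputs invented for illustration):

```python
import numpy as np

y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # one-hot test labels
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.3, 0.3, 0.4]])                   # softmax outputs

# categorical_crossentropy: mean over samples of -sum(y_true * log(y_pred))
loss = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
# accuracy: fraction of samples where the predicted class matches the label
acc = np.mean(y_pred.argmax(axis=1) == y_true.argmax(axis=1))

print([round(loss, 4), acc])  # -> [0.5501, 1.0]
```

Note that even with perfect accuracy the loss is nonzero, since the predictions are not fully confident.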