Keras

1. Preprocessing

from tensorflow.keras.preprocessing.text import Tokenizer

t = Tokenizer()
fit_text = 'The earth is an awesome place live'
t.fit_on_texts([fit_text])          # build the vocabulary from the training text

test_text = 'The earth is an great place live'
sequences = t.texts_to_sequences([test_text])[0]   # encode the test text with that vocabulary

sequences
>>> [1, 2, 3, 4, 6, 7]

t.word_index
>>> {'an': 4, 'awesome': 5, 'earth': 2, 'is': 3, 'live': 7, 'place': 6, 'the': 1}

Note that the encoded sequence contains only six indices: the word 'great' never appeared in the text the tokenizer was fitted on, so texts_to_sequences silently drops it (which is also why index 5, 'awesome', is absent from the result).

Tokenizer.fi..
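If you would rather keep a placeholder for unknown words instead of dropping them, Tokenizer accepts an oov_token argument. A minimal sketch, assuming the placeholder string '<OOV>' (any string works; Keras reserves index 1 for it):

from tensorflow.keras.preprocessing.text import Tokenizer

# '<OOV>' is an arbitrary placeholder choice for out-of-vocabulary words
t = Tokenizer(oov_token='<OOV>')
t.fit_on_texts(['The earth is an awesome place live'])

# 'great' was never seen during fitting, so it now maps to the OOV index
# rather than disappearing from the sequence
print(t.texts_to_sequences(['The earth is an great place live'])[0])
>>> [2, 3, 4, 5, 1, 7, 8]

Because the OOV token occupies index 1, every other word's index shifts up by one compared to the earlier output.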