My neural network's predict() is giving me an error: IndexError: list index out of range
I'm doing a simple ham/spam text classification. My Keras NN trains and evaluates correctly; however, when I try to predict on new text in the following format, I get an "IndexError: list index out of range" error:
model.predict(cleaning_funcs('my bus departs in five minutes'))
In case it helps, I also used the following:
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(x_train)
x_train = tokenizer.texts_to_sequences(x_train)
x_test = tokenizer.texts_to_sequences(x_test)
vocab_size = len(tokenizer.word_index) + 1
print(x_train[2])
from keras.preprocessing.sequence import pad_sequences
maxlen = 100
x_train = pad_sequences(x_train, padding='post', maxlen=maxlen)
x_test = pad_sequences(x_test, padding='post', maxlen=maxlen)
I assume your cleaning_funcs doesn't return an array, and the predict function expects an array. Try:
model.predict([cleaning_funcs('my bus departs in five minutes')])
More information: https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict
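A minimal sketch of what a batch-shaped input could look like, assuming cleaning_funcs returns a cleaned string (an assumption) and that the fitted tokenizer and maxlen from the question's training code are still in scope:
from keras.preprocessing.sequence import pad_sequences

# Assumption: cleaning_funcs returns a single cleaned string
new_text = cleaning_funcs('my bus departs in five minutes')

# texts_to_sequences expects a list of texts, so wrap the single string in a list
seq = tokenizer.texts_to_sequences([new_text])

# pad to the same length the model was trained on -> array of shape (1, maxlen)
padded = pad_sequences(seq, padding='post', maxlen=maxlen)

prediction = model.predict(padded)
print(prediction)
This way predict receives a 2-D array with an explicit batch dimension, matching the shape of x_train after padding.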