How to feed an LSTM model in Keras (Python)?

I have read about LSTMs, and I understand that the algorithm takes the value of the previous word and takes it into account in the parameters of the next word.

Now I am trying to apply my first LSTM algorithm.

I have this code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential()
model.add(LSTM(units=6, input_shape=(X_train_count.shape[0], X_train_count.shape[1]), return_sequences=True))
model.add(LSTM(units=6, return_sequences=True))
model.add(LSTM(units=6, return_sequences=True))
model.add(LSTM(units=ytrain.shape[1], return_sequences=True, name='output'))
model.compile(loss='cosine_proximity', optimizer='sgd', metrics=['accuracy'])



model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])
model.summary()

cp=ModelCheckpoint('model_cnn.hdf5',monitor='val_acc',verbose=1,save_best_only=True)


history = model.fit(X_train_count, ytrain,
                    epochs=20,
                    verbose=False,
                    validation_data=(X_test_count, yval),
                    batch_size=10,
                    callbacks=[cp])

1- Since my dataset is built on TF-IDF, I don't see how the LSTM can know the sequence of words?

2- I get this error:

ValueError: Input 0 of layer sequential_8 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 18644]

The problem seems to be with X_train_count.

The shape of the input that an LSTM accepts is always tricky.

If your X_train_count is not in 3D form, reshape it with the line below:

X_train_count = X_train_count.reshape(X_train_count.shape[0], X_train_count.shape[1], 1)

In the LSTM layer, input_shape should be (timesteps, data_dim).
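As a quick sanity check of that reshape (a minimal sketch using a dummy NumPy array with made-up dimensions standing in for the real X_train_count):

```python
import numpy as np

# Dummy 2D matrix standing in for the TF-IDF output: 5 samples x 18644 features
X_train_count = np.zeros((5, 18644))
print(X_train_count.ndim)   # 2 -> this is what triggers "expected ndim=3, found ndim=2"

# Add a trailing feature axis so each TF-IDF value becomes one timestep of size 1
X_train_count = X_train_count.reshape(X_train_count.shape[0], X_train_count.shape[1], 1)
print(X_train_count.shape)  # (5, 18644, 1) -> ndim=3, accepted by the LSTM layer
```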

Below is an example illustrating this:

from sklearn.feature_extraction.text import TfidfVectorizer
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split

X = ["first example","one more","good morning"]
Y = ["first example","one more","good morning"]

vectorizer = TfidfVectorizer().fit(X)

tfidf_vector_X = vectorizer.transform(X).toarray()   # shape: (samples, features)
tfidf_vector_Y = vectorizer.transform(Y).toarray()
tfidf_vector_X = tfidf_vector_X[:, :, None]          # add trailing axis -> (samples, features, 1)
tfidf_vector_Y = tfidf_vector_Y[:, :, None]

X_train, X_test, y_train, y_test = train_test_split(tfidf_vector_X, tfidf_vector_Y, test_size = 0.2, random_state = 1)

from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()
model.add(LSTM(units=6, input_shape = X_train.shape[1:], return_sequences = True))
model.add(LSTM(units=6, return_sequences=True))
model.add(LSTM(units=6, return_sequences=True))
model.add(LSTM(units=1, return_sequences=True, name='output'))
model.compile(loss='cosine_proximity', optimizer='sgd', metrics = ['accuracy'])

Model summary:

Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_9 (LSTM)                (None, 6, 6)              192       
_________________________________________________________________
lstm_10 (LSTM)               (None, 6, 6)              312       
_________________________________________________________________
lstm_11 (LSTM)               (None, 6, 6)              312       
_________________________________________________________________
output (LSTM)                (None, 6, 1)              32        
=================================================================
Total params: 848
Trainable params: 848
Non-trainable params: 0
_________________________________________________________________
None  

Here the shape of X_train is (2, 6, 1).

To add to the solution, I would suggest using dense vectors instead of the sparse vectors produced by the Tf-Idf representation, by using a pre-trained model such as Word2Vec or GloVe as the weights of an embedding layer; this tends to give better performance and results.
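To sketch what that buys you: an embedding layer is essentially a lookup of dense row vectors by word index. The tiny 4-word matrix below is a made-up stand-in for a real pre-trained Word2Vec/GloVe matrix (which would have hundreds of columns):

```python
import numpy as np

# Hypothetical pre-trained embedding matrix: one dense row per vocabulary word
embedding_matrix = np.array([
    [0.1, 0.2, 0.3],   # index 0: "first"
    [0.4, 0.5, 0.6],   # index 1: "example"
    [0.7, 0.8, 0.9],   # index 2: "good"
    [1.0, 1.1, 1.2],   # index 3: "morning"
])

sentence = [2, 3]                       # "good morning" encoded as word indices
dense_seq = embedding_matrix[sentence]  # what an Embedding layer would output
print(dense_seq.shape)                  # (2, 3): 2 timesteps x 3 features
```

Unlike the mostly-zero TF-IDF rows, these dense rows carry learned similarity between words, which the LSTM can exploit; in Keras you would pass such a matrix via the `weights` argument of an `Embedding` layer.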