Keras LSTM: Error when checking model input dimension

I am new to Keras and am trying to implement an LSTM model. As a test I declared the model below, but it fails because of a mismatch in the input dimensions. I found similar questions on this site, but I could not spot my own mistake.

ValueError: 
Error when checking model input: 
expected lstm_input_4 to have 3 dimensions, but got array with shape (300, 100)

My environment

Code

from keras.layers import Input, Dense
from keras.models import Sequential
from keras.layers import LSTM
from keras.optimizers import RMSprop, Adadelta
from keras.layers.wrappers import TimeDistributed
import numpy as np

in_size = 100
out_size = 10
nb_hidden = 8

model = Sequential()
model.add(LSTM(nb_hidden, 
               name='lstm',
               activation='tanh',
               return_sequences=True,
               input_shape=(None, in_size)))
model.add(TimeDistributed(Dense(out_size, activation='softmax')))

adadelta = Adadelta(clipnorm=1.)
model.compile(optimizer=adadelta,
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# create dummy data
data_size = 300
train = np.zeros((data_size, in_size), dtype=np.float32)    # shape (300, 100) -- only 2-D
labels = np.zeros((data_size, out_size), dtype=np.float32)  # shape (300, 10)
model.fit(train, labels)

Edit 1 (still failing, after Marcin Możejko's comment)

Thank you, Marcin Możejko. But now I get an error like the one below. I updated the dummy data so the code can be checked. What is wrong with it?

ValueError: Error when checking model target: expected timedistributed_36 to have 3 dimensions, but got array with shape (208, 1)

# Build sliding-window samples: each window of `loop_back` rows of X
# is paired with the Y value that immediately follows it.
def create_dataset(X, Y, loop_back=1):
    dataX, dataY = [], []
    for i in range(len(X) - loop_back-1):
        a = X[i:(i+loop_back), :]
        dataX.append(a)
        dataY.append(Y[i+loop_back, :])
    return np.array(dataX), np.array(dataY)

# in_size = 100
feature_size = 1
out_size = 1
nb_hidden = 8

data_size = 300
dataset = np.zeros((data_size, feature_size), dtype=np.float32)
dataset_labels = np.zeros((data_size, 1), dtype=np.float32)

train_size = int(data_size * 0.7)
trainX = dataset[0:train_size, :]
trainY = dataset_labels[0:train_size, :]
testX = dataset[train_size:, :]
testY = dataset_labels[train_size:, 0]
trainX, trainY = create_dataset(trainX, trainY)
print(trainX.shape, trainY.shape)  # (208, 1, 1) (208, 1)

model = Sequential()
model.add(LSTM(nb_hidden, 
               name='lstm',
               activation='tanh',
               return_sequences=True,
               input_shape=(1, feature_size)))

model.add(TimeDistributed(Dense(out_size, activation='softmax')))
adadelta = Adadelta(clipnorm=1.)
model.compile(optimizer=adadelta,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(trainX, trainY, nb_epoch=10, batch_size=1)

This is a really classic problem with LSTMs in Keras. An LSTM input sequence should be 2-D, with shape (sequence_length, nb_of_features). The additional third dimension comes from the examples dimension, so the table fed to the model has shape (nb_of_examples, sequence_length, nb_of_features). This is where your problem lies. Remember that a 1-D sequence should be presented as a 2-D array with shape (sequence_length, 1). The input shape of your LSTM should therefore be:

model.add(LSTM(nb_hidden,
               name='lstm',
               activation='tanh',
               return_sequences=True,
               input_shape=(in_size, 1)))

And remember to reshape your input data into the appropriate format.
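
For example, here is a minimal sketch of that reshape, reusing the dummy data and model from the question. The 3-D target shape is an assumption on my part: with return_sequences=True plus TimeDistributed the model also expects one label vector per timestep, which the paragraph above does not cover.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.layers.wrappers import TimeDistributed

in_size = 100   # now used as the sequence length, with 1 feature per timestep
out_size = 10
nb_hidden = 8
data_size = 300

model = Sequential()
model.add(LSTM(nb_hidden,
               activation='tanh',
               return_sequences=True,
               input_shape=(in_size, 1)))  # (sequence_length, nb_of_features)
model.add(TimeDistributed(Dense(out_size, activation='softmax')))
model.compile(optimizer='adadelta', loss='categorical_crossentropy')

# Reshape each 1-D sequence of length 100 into a 2-D (100, 1) array,
# giving the whole batch the expected 3-D shape (300, 100, 1).
train = np.zeros((data_size, in_size), dtype=np.float32)
train = train.reshape((data_size, in_size, 1))

# Assumption: with return_sequences=True + TimeDistributed, the targets
# must be 3-D as well -- one one-hot vector of length out_size per timestep.
labels = np.zeros((data_size, in_size, out_size), dtype=np.float32)

model.fit(train, labels)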