Building an RNN with TensorFlow. How do I preprocess my dataset correctly to match the RNN's input and output shape?
I'm working on a project that detects drum hits in audio. I've already preprocessed my training data and tried to put together a SimpleRNN network in TensorFlow, but I can't get the two to work together.
At each time step, my input is a 1-D tensor of shape (84), and the output should be a tensor of shape (3).
My code currently looks like this:
train_epochs = 10
batch_num = 10
learning_Rate = 0.001
''' I also tried using tf.dataset but couldn't get it to work
train_dataset = dataset.batch(batch_num, drop_remainder=True)
test_dataset = dataset.take(10000).batch(batch_num, drop_remainder=True)
print(train_dataset.element_spec)
'''
x_data = x_data[:70000]
y_data = y_data[:70000]
x_data.resize((70000, 84))
y_data.resize((70000, 3))
print(x_data.shape, y_data.shape)
model = keras.Sequential()
model.add(keras.Input(shape=(None, 84)))
model.add(layers.SimpleRNN(200, activation='relu', dropout=0.2))
model.add(layers.Dense(3, activation='sigmoid'))
model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=learning_Rate),
    loss=keras.losses.BinaryCrossentropy(),
    # metrics: F measure
    metrics=['acc', f1_m, precision_m, recall_m]
)
model.summary()
history = model.fit(
    x_data, y_data,
    epochs=train_epochs,
    batch_size=batch_num,
    # We pass some validation data for
    # monitoring validation loss and metrics
    # at the end of each epoch
    validation_data=(x_data, y_data)
)
print("Evaluate on test data")
results = model.evaluate(test_dataset)
print("test loss, test acc:", results)
When I run it, I get the error:
ValueError: Input 0 of layer sequential_35 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (10, 84)
If I change x_data and y_data to shapes (7000, 10, 84) and (7000, 10, 3), the error message becomes:
ValueError: logits and labels must have the same shape ((10, 3) vs (10, 10, 3))
How do I fix this? I'm new to deep learning, so any advice on how to approach this project would be much appreciated.
The input to a SimpleRNN must be 3-D: (batch, timesteps, features). One option is to treat each 84-value frame as a sequence of 84 single-feature timesteps:
x_data.resize((70000, 84, 1))
The input layer then has to match, i.e. keras.Input(shape=(84, 1)) instead of shape=(None, 84).
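Alternatively, since the labels here are per-frame, consecutive frames can be grouped into fixed-length windows. Below is a minimal sketch of the two usual ways to match input and label shapes, using random stand-in data (smaller than the real 70000-frame dataset, and without the custom F-measure metrics) purely for illustration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in data (hypothetical): 700 frames of 84 features with 3 labels each.
n_frames, n_feat, n_out = 700, 84, 3
x_data = np.random.rand(n_frames, n_feat).astype("float32")
y_data = np.random.randint(0, 2, (n_frames, n_out)).astype("float32")

# Group consecutive frames into windows so each sample is a
# (timesteps, features) sequence, as the RNN expects.
win = 10
n_win = n_frames // win
x_win = x_data[: n_win * win].reshape(n_win, win, n_feat)   # (70, 10, 84)
y_win = y_data[: n_win * win].reshape(n_win, win, n_out)    # (70, 10, 3)

# Option A: one label per window. Reduce the labels to one per window
# (here, the last frame's label); the RNN emits a single prediction.
y_last = y_win[:, -1, :]                                    # (70, 3)
model_a = keras.Sequential([
    keras.Input(shape=(win, n_feat)),
    layers.SimpleRNN(200, activation="relu", dropout=0.2),
    layers.Dense(n_out, activation="sigmoid"),
])
model_a.compile(optimizer="rmsprop", loss="binary_crossentropy")
model_a.fit(x_win, y_last, epochs=1, batch_size=10, verbose=0)
print(model_a.predict(x_win, verbose=0).shape)              # (70, 3)

# Option B: one label per frame. Keep the (70, 10, 3) labels and set
# return_sequences=True so the RNN predicts at every timestep, which
# resolves the "logits and labels must have the same shape" error.
model_b = keras.Sequential([
    keras.Input(shape=(win, n_feat)),
    layers.SimpleRNN(200, activation="relu", dropout=0.2,
                     return_sequences=True),
    layers.Dense(n_out, activation="sigmoid"),
])
model_b.compile(optimizer="rmsprop", loss="binary_crossentropy")
model_b.fit(x_win, y_win, epochs=1, batch_size=10, verbose=0)
print(model_b.predict(x_win, verbose=0).shape)              # (70, 10, 3)
```

Option B corresponds to the (7000, 10, 84) / (7000, 10, 3) reshape already tried in the question; the missing piece there was `return_sequences=True` on the SimpleRNN layer.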