Placeholder definition for nd-array input in TensorFlow
I am trying to build an LSTM RNN based on this guide:
http://monik.in/a-noobs-guide-to-implementing-rnn-lstm-using-tensorflow/
My input is an ndarray of size 89102*39 (89102 rows, 39 features). The data has 3 labels: 0, 1, 2.
I seem to have a problem with the placeholder definition, but I'm not sure what it is.
My code is:
data = tf.placeholder(tf.float32, [None, 1000, 39])
target = tf.placeholder(tf.float32, [None, 3])
cell = tf.nn.rnn_cell.LSTMCell(self.num_hidden)
val, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)
val = tf.transpose(val, [1, 0, 2])
last = tf.gather(val, int(val.get_shape()[0]) - 1)
weight = tf.Variable(tf.truncated_normal([self.num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)
mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
batch_size = 1000
no_of_batches = int(len(train_input) / batch_size)
epoch = 5000
for i in range(epoch):
    ptr = 0
    for j in range(no_of_batches):
        inp, out = train_input[ptr:ptr + batch_size], train_output[ptr:ptr + batch_size]
        ptr += batch_size
        sess.run(minimize, {data: inp, target: out})
    print("Epoch - ", str(i))
I get the following error:
File , line 133, in execute_graph
    sess.run(minimize, {data: inp, target: out})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 975, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1000, 39) for Tensor 'Placeholder:0', which has shape '(1000, 89102, 39)'
Any idea what could be causing the problem?
As shown here, the dynamic_rnn function takes a batch of inputs of shape
[batch_size, truncated_backprop_length, input_size]
In the link you provided, the placeholder has shape
data = tf.placeholder(tf.float32, [None, 20,1])
This means they chose truncated_backprop_length=20 and input_size=1.
Their data is the following 3D array:
[
array([[0],[0],[1],[0],[0],[1],[0],[1],[1],[0],[0],[0],[1],[1],[1],[1],[1],[1],[0],[0]]),
array([[1],[1],[0],[0],[0],[0],[1],[1],[1],[1],[1],[0],[0],[1],[0],[0],[0],[1],[0],[1]]),
.....
]
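For illustration only, data with that shape could be produced with NumPy along these lines (this is just a sketch of the shape, not the guide's actual data-generation code, and the variable names are made up):
import numpy as np

num_examples = 10000   # arbitrary number of sequences
seq_len = 20           # truncated_backprop_length
input_size = 1

# random binary sequences shaped [num_examples, seq_len, input_size]
guide_like_input = np.random.randint(0, 2, size=(num_examples, seq_len, input_size))
print(guide_like_input.shape)  # (10000, 20, 1)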
From your code, train_input appears to be a 2D array rather than a 3D array, so you need to convert it into a 3D array. To do that, you first have to decide which values to use for truncated_backprop_length and input_size. After that, you can define data accordingly.
For example, if you want truncated_backprop_length and input_size to be 39 and 1 respectively, you can do
import numpy as np
train_input = np.reshape(train_input, (len(train_input), 39, 1))
data = tf.placeholder(tf.float32, [None, 39, 1])
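Alternatively, if you would rather treat each row as a single timestep with 39 features (truncated_backprop_length=1, input_size=39), the reshape and placeholder would look like this instead (a sketch following the same idea; choose whichever interpretation matches your data):
import numpy as np

# one timestep per example, 39 features per timestep
train_input = np.reshape(train_input, (len(train_input), 1, 39))
data = tf.placeholder(tf.float32, [None, 1, 39])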
I changed your code based on the discussion above and ran it on some random data I generated. It runs without throwing an error. Please see the code below:
import tensorflow as tf
import numpy as np

num_hidden = 5

# random stand-in data: 89102 examples, reshaped to 39 timesteps with 1 feature each
train_input = np.random.rand(89102, 39)
train_input = np.reshape(train_input, (len(train_input), 39, 1))
train_output = np.random.rand(89102, 3)

data = tf.placeholder(tf.float32, [None, 39, 1])
target = tf.placeholder(tf.float32, [None, 3])

cell = tf.nn.rnn_cell.LSTMCell(num_hidden)
val, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)

# take the output of the last timestep
val = tf.transpose(val, [1, 0, 2])
last = tf.gather(val, int(val.get_shape()[0]) - 1)

weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)

cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)

mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))

init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)

batch_size = 1000
no_of_batches = int(len(train_input) / batch_size)
epoch = 5000
for i in range(epoch):
    ptr = 0
    for j in range(no_of_batches):
        inp, out = train_input[ptr:ptr + batch_size], train_output[ptr:ptr + batch_size]
        ptr += batch_size
        sess.run(minimize, {data: inp, target: out})
    print("Epoch - ", str(i))