TensorFlow autoencoder cannot learn in training
In my code, I am practicing with the tf.train.batch function. At the sess.run([optimizer])
line it never returns anything; it just freezes. Can you find my mistake?
import tensorflow as tf

# x_train is the training data (a NumPy array)
tensors = tf.convert_to_tensor(x_train, dtype=tf.float32)
tensors = tf.reshape(tensors, shape=x_train.shape)
batch = tf.train.batch([tensors], batch_size=BATCH_SIZE, enqueue_many=True)
# Weights and biases to hidden layer
Wh = tf.Variable(tf.random_normal([COLUMN-2, UNITS_OF_HIDDEN_LAYER], mean=0.0, stddev=0.05))
bh = tf.Variable(tf.zeros([UNITS_OF_HIDDEN_LAYER]))
h = tf.nn.tanh(tf.matmul(batch, Wh) + bh)
# Weights and biases to output layer
Wo = tf.transpose(Wh) # tied weights
bo = tf.Variable(tf.zeros([COLUMN-2]))
y = tf.nn.tanh(tf.matmul(h, Wo) + bo)
# Objective functions
mean_sqr = tf.reduce_mean(tf.pow(batch - y, 2))
optimizer = tf.train.AdamOptimizer(LEARNING_RATE).minimize(mean_sqr)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for j in range(TRAINING_EPOCHS):
    sess.run([optimizer])
    print("optimizer: ")
tf.train.batch is backed by a queue, so you need to start the queue runners in the session with tf.train.start_queue_runners; otherwise sess.run blocks forever waiting for the queue to be filled. You can read about this in TensorFlow's Threading and Queues guide.
Make the following changes:
with tf.Session() as sess:
    sess.run(init)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    try:
        # Training loop
        for j in range(TRAINING_EPOCHS):
            if coord.should_stop():
                break
            sess.run([optimizer])
            print("optimizer: ")
    except Exception as e:
        # When done, ask the threads to stop.
        coord.request_stop(e)
    finally:
        coord.request_stop()
        # Wait for threads to finish.
        coord.join(threads)
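For reference, here is a minimal, self-contained sketch of the same queue-runner pattern on dummy data (TF 1.x API; the data shape, stand-in loss, and hyperparameter values below are placeholders for illustration, not from the original code):

import numpy as np
import tensorflow as tf

# Dummy data and hyperparameters (placeholders for illustration only)
BATCH_SIZE = 4
x_train = np.random.rand(100, 8).astype(np.float32)

tensors = tf.convert_to_tensor(x_train, dtype=tf.float32)
batch = tf.train.batch([tensors], batch_size=BATCH_SIZE, enqueue_many=True)

# A trivial stand-in model so there is something to optimize
w = tf.Variable(tf.ones([8]))
loss = tf.reduce_mean(tf.square(batch * w))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    # Without this call, sess.run on anything that depends on batch blocks forever
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        for step in range(10):
            _, l = sess.run([train_op, loss])
            print("step %d, loss %.4f" % (step, l))
    finally:
        coord.request_stop()
        coord.join(threads)

Running this prints a loss value every step instead of hanging, which confirms the freeze in the original code came from the unstarted queue, not from the model itself.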