python tensorflow 2.0 build a simple LSTM network without using Keras

I am trying to build a TensorFlow LSTM network without using the Keras API. The model is very simple:

  1. Input: sequences of 4 word indices
  2. Embed each input into a 100-dim word vector
  3. Pass the sequence through an LSTM layer
  4. A dense layer that outputs a 4-word sequence

The loss function is sequence loss.
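The code below refers to a config object, an embedding_matrix, and a vocab_size that are not shown in the question. For concreteness, they might look roughly like this (all names and values here are assumptions, not part of the original code):

import numpy as np

class Config:          # hypothetical stand-in for the missing config object
    batch_size = 32
    num_steps = 4      # 4-word input sequences
    hidden_size = 128
    embed_size = 100   # 100-dim word vectors

config = Config()
vocab_size = 10000     # assumed vocabulary size
embedding_matrix = np.random.rand(vocab_size, config.embed_size).astype(np.float32)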

I have the following code:

# input
input_placeholder = tf.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Input')
labels_placeholder = tf.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Target')

# embedding
embedding = tf.get_variable('Embedding', initializer=embedding_matrix, trainable=False)
inputs = tf.nn.embedding_lookup(embedding, input_placeholder)
inputs = [tf.squeeze(x, axis=1) for x in tf.split(inputs, config.num_steps, axis=1)]

# LSTM
initial_state = tf.zeros([config.batch_size, config.hidden_size])
lstm_cell = tf.nn.rnn_cell.LSTMCell(config.hidden_size)
output, _ = tf.keras.layers.RNN(lstm_cell, inputs, dtype=tf.float32, unroll=True)

# loss op
all_ones = tf.ones([config.batch_size, config.num_steps])
cross_entropy = tfa.seq2seq.sequence_loss(output, labels_placeholder, all_ones, vocab_size)
tf.add_to_collection('total_loss', cross_entropy)
loss = tf.add_n(tf.get_collection('total_loss'))

# projection (dense)
proj_U = tf.get_variable('Matrix', [config.hidden_size, vocab_size])
proj_b = tf.get_variable('Bias', [vocab_size])
outputs = [tf.matmul(o, proj_U) + proj_b for o in output]

The problem I am running into is the LSTM part:

# tensorflow 1.x
output, _ = tf.contrib.rnn.static_rnn(
        lstm_cell, inputs, dtype = tf.float32, 
        sequence_length = [config.num_steps]*config.batch_size)

I am having trouble converting this to TensorFlow 2. With the code above, I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 outputs, _ = tf.keras.layers.RNN(lstm_cell, inputs, dtype=tf.float32, unroll=True)

TypeError: cannot unpack non-iterable RNN object
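The error happens because tf.keras.layers.RNN is a layer class: calling it with these arguments constructs an RNN layer object, which cannot be unpacked into (output, state). If you wanted to stay on the Keras path, you would build the layer first and then call it on a single [batch, time, features] tensor, roughly like this (a sketch using tf.keras.layers.LSTMCell, not the compat route taken below):

# Sketch of the Keras route (assumes the inputs list built above)
cell = tf.keras.layers.LSTMCell(config.hidden_size)
rnn_layer = tf.keras.layers.RNN(cell, return_sequences=True,
                                return_state=True, unroll=True)
stacked = tf.stack(inputs, axis=1)             # list of [batch, embed] -> [batch, num_steps, embed]
output, final_h, final_c = rnn_layer(stacked)  # output: [batch, num_steps, hidden]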

The following code should work for TensorFlow 2.X.

import tensorflow as tf
import tensorflow_addons as tfa  # provides tfa.seq2seq.sequence_loss

# Placeholders only work in graph mode, so disable eager execution first.
tf.compat.v1.disable_eager_execution()

# input
input_placeholder = tf.compat.v1.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Input')
labels_placeholder = tf.compat.v1.placeholder(tf.int32, shape=[config.batch_size, config.num_steps], name='Target')

# embedding
embedding = tf.compat.v1.get_variable('Embedding', initializer=embedding_matrix, trainable=False)
inputs = tf.nn.embedding_lookup(params=embedding, ids=input_placeholder)
inputs = [tf.squeeze(x, axis=1) for x in tf.split(inputs, config.num_steps, axis=1)]

# LSTM: tf.compat.v1.nn.static_rnn accepts the per-time-step list built
# above and returns (outputs, state); tf.keras.layers.RNN does not.
# static_rnn creates a zero initial state by default, so no explicit
# initial_state tensor is needed.
lstm_cell = tf.compat.v1.nn.rnn_cell.LSTMCell(config.hidden_size)
output, _ = tf.compat.v1.nn.static_rnn(
        lstm_cell, inputs, dtype=tf.float32,
        sequence_length=[config.num_steps] * config.batch_size)

# projection (dense): map LSTM outputs to vocab-size logits
proj_U = tf.compat.v1.get_variable('Matrix', [config.hidden_size, vocab_size])
proj_b = tf.compat.v1.get_variable('Bias', [vocab_size])
outputs = [tf.matmul(o, proj_U) + proj_b for o in output]
logits = tf.stack(outputs, axis=1)  # [batch_size, num_steps, vocab_size]

# loss op: sequence_loss expects (logits, targets, weights), so the
# projection must come first and vocab_size is not passed as an argument
all_ones = tf.ones([config.batch_size, config.num_steps])
cross_entropy = tfa.seq2seq.sequence_loss(logits, labels_placeholder, all_ones)
tf.compat.v1.add_to_collection('total_loss', cross_entropy)
loss = tf.add_n(tf.compat.v1.get_collection('total_loss'))

The key change is the LSTM section: the TensorFlow 1.x tf.contrib.rnn.static_rnn call maps directly to tf.compat.v1.nn.static_rnn in TensorFlow 2.x, which is what the code above uses. tf.keras.layers.RNN, by contrast, is a layer class and does not return (output, state) when constructed.
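A minimal sketch of running the resulting graph, where batch_inputs and batch_labels are hypothetical [batch_size, num_steps] arrays of word indices:

# Hypothetical usage: batch_inputs/batch_labels are not defined in the answer
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    loss_val = sess.run(loss, feed_dict={input_placeholder: batch_inputs,
                                         labels_placeholder: batch_labels})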