CNTK Sequence model error: Different minibatch layouts detected
I am trying to use CNTK to train a model that takes two input sequences and outputs a two-dimensional scalar label. I define the model like this:
def create_seq_model(num_tokens):
    with C.default_options(init=C.glorot_uniform()):
        i1 = sequence.input(shape=num_tokens, is_sparse=True, name='i1')
        i2 = sequence.input(shape=num_tokens, is_sparse=True, name='i2')
        s1 = Sequential([Embedding(300), Fold(GRU(64))])(i1)
        s2 = Sequential([Embedding(300), Fold(GRU(64))])(i2)
        combined = splice(s1, s2)
        model = Sequential([Dense(64, activation=sigmoid),
                            Dropout(0.1, seed=42),
                            Dense(2, activation=softmax)])
        return model(combined)
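The criterion used by the training function below is not shown here; it follows the standard pattern from the CNTK tutorial I started from, roughly like this (a sketch, not my exact code):

# Assumed helper (not part of my original snippet): loss and metric over a
# label placeholder, which train() later replaces with a real input.
def create_criterion_function(model):
    labels = C.placeholder(name='labels')
    ce = C.cross_entropy_with_softmax(model, labels)
    errs = C.classification_error(model, labels)
    return C.combine([ce, errs])  # (features, labels) -> (loss, metric)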
I have already converted my data to CTF format. When I try to train it with the following snippet (very slightly modified from the example here), I get an error:
def train(reader, model, max_epochs=16):
    criterion = create_criterion_function(model)
    criterion.replace_placeholders({criterion.placeholders[0]: C.input(2, name='labels')})

    epoch_size = 500000
    minibatch_size = 128

    lr_per_sample = [0.003]*4 + [0.0015]*24 + [0.0003]
    lr_per_minibatch = [x * minibatch_size for x in lr_per_sample]
    lr_schedule = learning_rate_schedule(lr_per_minibatch, UnitType.minibatch, epoch_size)

    momentum_as_time_constant = momentum_as_time_constant_schedule(700)
    learner = fsadagrad(criterion.parameters,
                        lr=lr_schedule, momentum=momentum_as_time_constant,
                        gradient_clipping_threshold_per_sample=15,
                        gradient_clipping_with_truncation=True)

    progress_printer = ProgressPrinter(freq=1000, first=10, tag='Training', num_epochs=max_epochs)
    trainer = Trainer(model, criterion, learner, progress_printer)

    log_number_of_parameters(model)

    t = 0
    for epoch in range(max_epochs):
        epoch_end = (epoch+1) * epoch_size
        while t < epoch_end:
            data = reader.next_minibatch(minibatch_size, input_map={
                criterion.arguments[0]: reader.streams.i1,
                criterion.arguments[1]: reader.streams.i2,
                criterion.arguments[2]: reader.streams.labels
            })
            trainer.train_minibatch(data)
            t += data[criterion.arguments[1]].num_samples
        trainer.summarize_training_progress()
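For reference, the reader I pass to train() is not shown above; it is the usual CTFDeserializer-based MinibatchSource exposing the three streams referenced in the input_map, roughly along these lines (a sketch; the CTF sample lines are illustrative, not my actual data):

# Assumed reader construction (not in the snippet above). Each CTF line pairs
# the streams by sequence id, e.g.:
#   0 |i1 12:1 |i2 7:1 |labels 1 0
#   0 |i1 523:1 |i2 96:1
from cntk.io import MinibatchSource, CTFDeserializer, StreamDef, StreamDefs, INFINITELY_REPEAT

def create_reader(path, num_tokens, is_training=True):
    return MinibatchSource(
        CTFDeserializer(path, StreamDefs(
            i1     = StreamDef(field='i1',     shape=num_tokens, is_sparse=True),
            i2     = StreamDef(field='i2',     shape=num_tokens, is_sparse=True),
            labels = StreamDef(field='labels', shape=2,          is_sparse=False))),
        randomize=is_training,
        max_sweeps=INFINITELY_REPEAT if is_training else 1)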
The error is this:
Different minibatch layouts detected (difference in sequence lengths or count or start flags) in data specified for the Function's arguments 'Input('i2', [#, *], [132033])' vs. 'Input('i1', [#, *], [132033])', though these arguments have the same dynamic axes '[*, #]'
I have noticed that if I select examples where the two input sequences have the same length, the training function works. Unfortunately, that covers only a very small portion of the data. What is the correct mechanism for handling sequences of different lengths? Do I need to pad the inputs (similar to Keras's pad_sequences())?
The two sequences i1 and i2 are unexpectedly being treated as if they had the same length. That happens because the sequence_axis parameter of sequence.input(...) defaults to default_dynamic_axis(). One way to solve this is to tell CNTK that the two sequences can have different lengths by giving each of them its own unique sequence axis:
i1_axis = C.Axis.new_unique_dynamic_axis('1')
i2_axis = C.Axis.new_unique_dynamic_axis('2')
i1 = sequence.input(shape=num_tokens, is_sparse=True, sequence_axis=i1_axis, name='i1')
i2 = sequence.input(shape=num_tokens, is_sparse=True, sequence_axis=i2_axis, name='i2')
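Putting this back into the model from the question, a minimal sketch looks as follows (the imports are my assumption; everything else mirrors the original definition, with only the input declarations changed):

import cntk as C
from cntk import sequence, splice, sigmoid, softmax
from cntk.layers import Sequential, Embedding, Fold, GRU, Dense, Dropout

def create_seq_model(num_tokens):
    with C.default_options(init=C.glorot_uniform()):
        # Give each input its own dynamic axis so a minibatch may contain
        # i1/i2 pairs of different lengths.
        i1_axis = C.Axis.new_unique_dynamic_axis('1')
        i2_axis = C.Axis.new_unique_dynamic_axis('2')
        i1 = sequence.input(shape=num_tokens, is_sparse=True, sequence_axis=i1_axis, name='i1')
        i2 = sequence.input(shape=num_tokens, is_sparse=True, sequence_axis=i2_axis, name='i2')
        # Fold() reduces each branch over its own axis to a fixed-size vector,
        # so the two branches can still be spliced even though the axes differ.
        s1 = Sequential([Embedding(300), Fold(GRU(64))])(i1)
        s2 = Sequential([Embedding(300), Fold(GRU(64))])(i2)
        combined = splice(s1, s2)
        model = Sequential([Dense(64, activation=sigmoid),
                            Dropout(0.1, seed=42),
                            Dense(2, activation=softmax)])
        return model(combined)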