Problems implementing CRNN with CNTK

I'm fairly new to machine learning and, as a learning exercise, I'm trying to implement a convolutional recurrent neural network (CRNN) in CNTK to recognize variable-length text in images. The basic idea is to take the output of a CNN, produce a sequence from it, feed that to an RNN, and then use CTC as the loss function. I followed the 'CNTK 208: Training Acoustic Model with Connectionist Temporal Classification (CTC) Criteria' tutorial, which shows the basics of using CTC. Unfortunately, during training my network converges to outputting only blank labels and nothing else, because for some reason that minimizes the loss.

I'm feeding my network images of size (1, 32, 96), generated on the fly and showing a few random letters. As labels I give it a one-hot encoded sequence of letters, with the blank that CTC requires at index 0 (everything is numpy arrays, since I use custom data loading). I found that to make the forward_backward() function work, I need to make sure both of its inputs use the same dynamic axis with the same length. I achieve this by making my label strings the same length as the network output, and by using to_sequence_like() in the code below (I don't know how to do this better; a side effect of using to_sequence_like() here is that I need to pass dummy label data when evaluating the model).

alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"
input_dim_model = (1, 32, 96)    # images are 96 x 32 with 1 channel of color (gray)
num_output_classes = len(alphabet) + 1
lstm_hidden = 256

def bidirectionalLSTM(features, nHidden, nOut):
    #run LSTMs over the sequence in both directions, splice, then project
    a = C.layers.Recurrence(C.layers.LSTM(nHidden))(features)
    b = C.layers.Recurrence(C.layers.LSTM(nHidden), go_backwards=True)(features)
    c = C.splice(a, b)
    r = C.layers.Dense(nOut)(c)
    return r

def create_model_rnn(features):
    h = features
    h = bidirectionalLSTM(h, lstm_hidden, lstm_hidden)
    h = bidirectionalLSTM(h, lstm_hidden, num_output_classes)
    return h

def create_model_cnn(features):
    with C.layers.default_options(init=C.glorot_uniform(), activation=C.relu):
        h = features

        h = C.layers.Convolution2D(filter_shape=(3,3), 
                                    num_filters=64, 
                                    strides=(1,1), 
                                    pad=True, name='conv_0')(h)

        #more layers...

        h = C.layers.BatchNormalization(name="batchnorm_6")(h)

        return h

x = C.input_variable(input_dim_model, name="x")
label = C.sequence.input((num_output_classes), name="y")

def create_model(features):
    #Composite(x: Tensor[1,32,96]) -> Tensor[512,1,23]
    a = create_model_cnn(features) 
    a = C.reshape(a, (512, 23))
    #Composite(x: Tensor[1,32,96]) -> Tensor[23,512]
    a = C.swapaxes(a, 0, 1) 

    #is there a better way to convert to sequence and still be compatible with forward_backwards() ?
    #Composite(x: Tensor[1,32,96], y: Sequence[Tensor[37]]) -> Sequence[Tensor[512]]
    a = C.to_sequence_like(a, label) 

    #Composite(x: Tensor[1,32,96], y: Sequence[Tensor[37]]) -> Sequence[Tensor[37]]
    a = create_model_rnn(a) 
    return a

#Composite(x: Tensor[1,32,96], y: Sequence[Tensor[37]]) -> Sequence[Tensor[37]]
z = create_model(x)

#LabelsToGraph(y: Sequence[Tensor[37]]) -> Sequence[Tensor[37]]
graph = C.labels_to_graph(label)

#Composite(y: Sequence[Tensor[37]], x: Tensor[1,32,96]) -> np.float32
criteria = C.forward_backward(graph, z, blankTokenId=0)

err = C.edit_distance_error(z, label, squashInputs=True, tokensToIgnore=[0])
lr = C.learning_rate_schedule(0.01, C.UnitType.sample)
learner = C.adadelta(z.parameters, lr)

progress_printer = C.logging.progress_print.ProgressPrinter(50, first=10, tag='Training')
trainer = C.Trainer(z, (criteria, err), learner, progress_writers=[progress_printer])

#some more custom code ...
#below is how I'm feeding the data

while True:
    x1, y1 = custom_datareader.next_minibatch()
    #x1 is a list of numpy arrays containing training images
    #y1 is a list of numpy arrays with one hot encoded labels

    trainer.train_minibatch({x: x1, label: y1})
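
As an aside, because of to_sequence_like() I have to pass dummy label data when evaluating the model. Here is a minimal sketch of how that evaluation plus a greedy CTC decode could look (greedy_ctc_decode is an illustrative helper, not CNTK API; it collapses repeated tokens and drops the blank at index 0):

import numpy as np

def greedy_ctc_decode(posteriors, alphabet):
    #most likely class per frame, then collapse repeats and drop blanks
    best = np.argmax(posteriors, axis=1)
    out, prev = [], -1
    for t in best:
        if t != prev and t != 0:
            out.append(alphabet[t - 1]) #class 0 is the blank, so shift by one
        prev = t
    return ''.join(out)

#the dummy labels only satisfy the dynamic axis introduced by to_sequence_like()
dummy = np.zeros((23, num_output_classes), dtype=np.float32)
posteriors = z.eval({x: [img], label: [dummy]})[0] #img is a (1, 32, 96) numpy array
print(greedy_ctc_decode(posteriors, alphabet))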

The network converges quickly, although not to what I want (on the left is the network output, on the right the label I fed it):

Minibatch[  11-  50]: loss = 3.506087 * 58880, metric = 176.23% * 58880;
lllll--55leym---------- => lllll--55leym----------, gt: aaaaaaaaaaaaaaaaaaaayox
-------bbccaqqqyyyryy-q => -------bbccaqqqyyyryy-q, gt: AAAAAAAAAAAAAAAAAAAJPTA
tt22yye------yqqqtll--- => tt22yye------yqqqtll---, gt: tttttttttttttttttttyliy
ceeeeeeee----eqqqqqqe-q => ceeeeeeee----eqqqqqqe-q, gt: sssssssssssssssssssskht
--tc22222al55a5qqqaa--q => --tc22222al55a5qqqaa--q, gt: cccccccccccccccccccaooa
yyyyyyiqaaacy---------- => yyyyyyiqaaacy----------, gt: cccccccccccccccccccxyty
mcccyya----------y---qq => mcccyya----------y---qq, gt: ppppppppppppppppppptjnj
ylncyyyy--------yy--t-y => ylncyyyy--------yy--t-y, gt: sssssssssssssssssssyusl
tt555555ccc------------ => tt555555ccc------------, gt: jjjjjjjjjjjjjjjjjjjmyss
-------eeeaadaaa------5 => -------eeeaadaaa------5, gt: fffffffffffffffffffciya
eennnnemmtmmy--------qy => eennnnemmtmmy--------qy, gt: tttttttttttttttttttajdn
-rcqqqqaaaacccccycc8--q => -rcqqqqaaaacccccycc8--q, gt: aaaaaaaaaaaaaaaaaaaixvw
------33e-bfaaaaa------ => ------33e-bfaaaaa------, gt: uuuuuuuuuuuuuuuuuuupfyq
r----5t5y5aaaaa-------- => r----5t5y5aaaaa--------, gt: fffffffffffffffffffapap
deeeccccc2qqqm888zl---t => deeeccccc2qqqm888zl---t, gt: hhhhhhhhhhhhhhhhhhhlvjx
 Minibatch[  51- 100]: loss = 1.616731 * 73600, metric = 100.82% * 73600;
----------------------- => -----------------------, gt: kkkkkkkkkkkkkkkkkkkakyw
----------------------- => -----------------------, gt: ooooooooooooooooooopwtm
----------------------- => -----------------------, gt: jjjjjjjjjjjjjjjjjjjqpny
----------------------- => -----------------------, gt: iiiiiiiiiiiiiiiiiiidspr
----------------------- => -----------------------, gt: fffffffffffffffffffatyp
----------------------- => -----------------------, gt: vvvvvvvvvvvvvvvvvvvmccf
----------------------- => -----------------------, gt: dddddddddddddddddddsfyo
----------------------- => -----------------------, gt: yyyyyyyyyyyyyyyyyyylaph
----------------------- => -----------------------, gt: kkkkkkkkkkkkkkkkkkkacay
----------------------- => -----------------------, gt: uuuuuuuuuuuuuuuuuuujuqs
----------------------- => -----------------------, gt: sssssssssssssssssssovjp
----------------------- => -----------------------, gt: vvvvvvvvvvvvvvvvvvvibma
----------------------- => -----------------------, gt: vvvvvvvvvvvvvvvvvvvaajt
----------------------- => -----------------------, gt: tttttttttttttttttttdhfo
----------------------- => -----------------------, gt: yyyyyyyyyyyyyyyyyyycmbh
 Minibatch[ 101- 150]: loss = 0.026177 * 73600, metric = 100.00% * 73600;
----------------------- => -----------------------, gt: iiiiiiiiiiiiiiiiiiiavoo
----------------------- => -----------------------, gt: lllllllllllllllllllaara
----------------------- => -----------------------, gt: pppppppppppppppppppmufu
----------------------- => -----------------------, gt: sssssssssssssssssssaacd
----------------------- => -----------------------, gt: uuuuuuuuuuuuuuuuuuujulx
----------------------- => -----------------------, gt: vvvvvvvvvvvvvvvvvvvoaqy
----------------------- => -----------------------, gt: dddddddddddddddddddvjmr
----------------------- => -----------------------, gt: oooooooooooooooooooxlvl
----------------------- => -----------------------, gt: dddddddddddddddddddqqlo
----------------------- => -----------------------, gt: wwwwwwwwwwwwwwwwwwwwrvx
----------------------- => -----------------------, gt: pppppppppppppppppppxuxi
----------------------- => -----------------------, gt: bbbbbbbbbbbbbbbbbbbkbqv
----------------------- => -----------------------, gt: ppppppppppppppppppplpha
----------------------- => -----------------------, gt: dddddddddddddddddddilol
----------------------- => -----------------------, gt: dddddddddddddddddddqnwf

My question is how to get the network to learn to output the correct captions. I'd like to add that I successfully trained a model using the same technique, but built in PyTorch, so it's unlikely that the images or the labels are the problem. Also, is there a better way to convert the output of the convolutional layers into a sequence with a dynamic axis so that I can still use it with the forward_backward() function?

CNTK learners use aggregated gradients by default, to accommodate distributed training with variable minibatch sizes. However, aggregated gradients do not work well with adagrad-style learners such as adadelta. Please try use_mean_gradient=True:

learner = C.adadelta(z.parameters, lr, use_mean_gradient=True)

There are many difficulties in training a CRNN model in CNTK (the right way to format the labels is tricky, the whole LabelsToGraph conversion, the lack of a transcription error metric, etc.). Here is an implementation of the model that works correctly:

https://github.com/BenjaminTrapani/SceneTextOCR/tree/master

It relies on a fork of CNTK that fixes an image reader bug, provides transcription error functionality, and improves the performance of the text format reader. It also provides an application that generates text-format labels from the mjsynth dataset. For reference, here is how to format the labels:

513528 |textLabel 7:2
513528 |textLabel 26:1
513528 |textLabel 0:2
513528 |textLabel 26:1
513528 |textLabel 20:2
513528 |textLabel 26:1
513528 |textLabel 11:2
513528 |textLabel 26:1
513528 |textLabel 8:2
513528 |textLabel 26:1
513528 |textLabel 4:2
513528 |textLabel 26:1
513528 |textLabel 17:2
513528 |textLabel 26:1
513528 |textLabel 18:2
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1
513528 |textLabel 26:1

513528 is the sequence ID; it should match the sequence ID of the image data for the same sample. textLabel is used to create the stream for the minibatch source. You create the stream in C++ as follows:

StreamConfiguration textLabelConfig(L"textLabel", numClasses, true, L"textLabel");
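
For reference, a rough Python equivalent using CNTK's CTFDeserializer (a sketch; the file name is a placeholder, and num_classes = 27 is assumed to match numClasses above, 26 letters plus the blank):

import cntk as C

num_classes = 27 #26 letters plus the blank at index 26
stream_defs = C.io.StreamDefs(
    textLabel=C.io.StreamDef(field='textLabel', shape=num_classes, is_sparse=True))
source = C.io.MinibatchSource(C.io.CTFDeserializer('labels.ctf', stream_defs))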

26 is the index of the blank character for CTC decoding. The other values before the ':' are the character codes of the label. The 1 one-hot encodes each vector in the sequence. There is a run of trailing blank characters to ensure the sequence is as long as the maximum supported sequence length, since as of this writing the CTC loss function implementation does not support variable-length sequences.
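
For illustration, a minimal Python sketch that emits labels in this format for one word (write_text_label and its parameters are hypothetical; it interleaves each character code (value 2) with a blank (value 1) and pads with trailing blanks to a fixed maximum sequence length):

def write_text_label(f, seq_id, word, max_len=32, blank=26):
    lines = []
    for ch in word:
        lines.append("%d |textLabel %d:2" % (seq_id, ord(ch) - ord('a')))
        lines.append("%d |textLabel %d:1" % (seq_id, blank))
    #pad with blanks so every sequence has the maximum supported length
    while len(lines) < max_len:
        lines.append("%d |textLabel %d:1" % (seq_id, blank))
    f.write("\n".join(lines) + "\n")

With seq_id=513528 and word='hauliers', this reproduces the example above.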