InvalidArgumentError: indices[127,7] = 43 is not in [0, 43) in Keras R

This problem is related to: InvalidArgumentError (see above for traceback): indices[1] = 10 is not in [0, 10). I need this for R, so I need a different solution than the one given in the link above.

maxlen <- 40
chars <- c("'",  "-",  " ",  "!",  "\"", "(",  ")",  ",",  ".",  ":",  ";",  "?",  "[",  "]",  "_",  "=",  "0", "a",  "b",  "c",  "d",  "e", "f",  "g",  "h",  "i",  "j",  "k",  "l",  "m",  "n",  "o",  "p",  "q",  "r",  "s",  "t",  "u",  "v",  "w",  "x",  "y",  "z")



tokenizer <- text_tokenizer(char_level = TRUE, filters = NULL)

tokenizer %>% fit_text_tokenizer(chars)
unlist(tokenizer$word_index)

The output is:

 '  -     !  "  (  )  ,  .  :  ;  ?  [  ]  _  =  0  a  b  c  d  e  f  g  h  i  j  k  l  m  n  o  p  q  r  s  t  u  v  w  x  y  z 
 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 

How can I change the indexing in text_tokenizer so that it starts at 0 instead of 1?

The error I get after running fit() is:

InvalidArgumentError: indices[127,7] = 43 is not in [0, 43)
     [[Node: embedding_3/embedding_lookup = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@training_1/RMSprop/Assign_1"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_3/embeddings/read, embedding_3/Cast, training_1/RMSprop/gradients/embedding_3/embedding_lookup_grad/concat/axis)]]

But I believe changing the indexing will solve my problem.

Index 0 is usually reserved for padding, so starting your actual character indices at 0 is not a good idea either. Instead, you should go into the Embedding layer and add 1 to the input size, as suggested by the documentation:

input_dim: int > 0. Size of the vocabulary, i.e. maximum integer index + 1.

In your case, that would be 43 + 1 = 44.

You need to initialize the Embedding layer with your vocabulary size. For example:

model.add(Embedding(875, 64))

In this case, 875 is my vocabulary size.
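Since the question is about R, the same fix in the R keras package uses `layer_embedding()`. A minimal sketch, assuming the 43-character vocabulary from the question (so `input_dim = 43 + 1 = 44`) and a hypothetical output dimension of 64:

```r
library(keras)

maxlen <- 40  # sequence length from the question

model <- keras_model_sequential() %>%
  # input_dim must be max integer index + 1: the tokenizer assigns
  # indices 1..43, so input_dim = 44 keeps index 43 in range [0, 44)
  layer_embedding(input_dim = 44, output_dim = 64,
                  input_length = maxlen)
```

With `input_dim = 44`, the lookup for index 43 no longer falls outside the embedding table, which resolves the `indices[127,7] = 43 is not in [0, 43)` error without changing the tokenizer's 1-based indexing.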