Change in the Dimension (shape) because of np.hstack on tf.keras.preprocessing.text.Tokenizer.texts_to_sequences

I have applied np.hstack to the output of tensorflow.keras.preprocessing.text.Tokenizer.texts_to_sequences for both the training labels and the validation (test) labels.

Surprisingly and mysteriously, the size of the output for the training labels is different after I apply np.hstack than before. However, for the validation labels there is no change in shape before and after applying np.hstack to the output of tensorflow.keras.preprocessing.text.Tokenizer.texts_to_sequences.

Here is a Google Colab link where the issue can easily be reproduced.

The complete code to reproduce the issue is given below (in case the link does not work):

!pip install tensorflow==2.1

# For Preprocessing the Text => To Tokenize the Text
from tensorflow.keras.preprocessing.text import Tokenizer
# If the Two Articles are of different length, pad_sequences will make the length equal
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Package for performing Numerical Operations
import numpy as np

Unique_Labels_List = ['India', 'USA', 'Australia', 'Germany', 'Bhutan', 'Nepal', 'New Zealand', 'Israel', 'Canada', 'France', 'Ireland', 'Poland', 'Egypt', 'Greece', 'China', 'Spain', 'Mexico']


Train_Labels = Unique_Labels_List[0:14]
#print('Train Labels = {}'.format(Train_Labels))

Val_Labels =  Unique_Labels_List[14:]
#print('Val_Labels = {}'.format(Val_Labels))

No_Of_Train_Items = [248, 200, 200, 218, 248, 248, 249, 247, 220, 200, 200, 211, 224, 209]
No_Val_Items = [212, 200, 219]

T_L = []
for Each_Label, Item in zip(Train_Labels, No_Of_Train_Items):
    T_L.append([Each_Label] * Item)

T_L = [item for sublist in T_L for item in sublist]

V_L = []
for Each_Label, Item in zip(Val_Labels, No_Val_Items):
    V_L.append([Each_Label] * Item)

V_L = [item for sublist in V_L for item in sublist]


print(len(T_L))

print(len(V_L))

label_tokenizer = Tokenizer()

label_tokenizer.fit_on_texts(Unique_Labels_List)

# The labels must be NumPy arrays, so convert the sequences for both the training and
# validation labels

training_label_list = label_tokenizer.texts_to_sequences(T_L)

validation_label_list = label_tokenizer.texts_to_sequences(V_L)

training_label_seq = np.hstack(training_label_list)

validation_label_seq = np.hstack(validation_label_list)

print('Actual number of train labels before np.hstack is {}'.format(len(training_label_list)))
print('Changed number of train labels after np.hstack is {}'.format(len(training_label_seq)))

print('-------------------------------------------------------------------------------------------------------')

print('Actual number of validation labels before np.hstack is {}'.format(len(validation_label_list)))
print('Unchanged number of validation labels after np.hstack is {}'.format(len(validation_label_seq)))

Thanks in advance.

This happens because some of the lists in training_label_list contain more than one value. You can verify this with sorted(training_label_list, key=lambda x: len(x), reverse=True).
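A quick way to confirm this, assuming the variables from the reproduction code above are in scope (the Counter-based check is an illustrative addition, not part of the original answer):

from collections import Counter

# The suggested check: multi-token sequences sort to the front.
print(sorted(training_label_list, key=lambda x: len(x), reverse=True)[:3])
# e.g. [[7, 8], [7, 8], [7, 8]] -- 'New Zealand' is split into two word tokens

# Counting sequence lengths makes the mismatch explicit: given the item counts above,
# this should show 2873 one-token labels and 249 two-token ('New Zealand') labels,
# so np.hstack yields 3122 + 249 = 3371 values instead of 3122.
print(Counter(len(seq) for seq in training_label_list))
print(len(training_label_list), len(np.hstack(training_label_list)))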

This is because label_tokenizer treats New Zealand in the following way:

>>> label_tokenizer.index_word
{1: 'india',
 2: 'usa',
 3: 'australia',
 4: 'germany',
 5: 'bhutan',
 6: 'nepal',
 7: 'new',
 8: 'zealand',
 9: 'israel',
 10: 'canada',
 11: 'france',
 12: 'ireland',
 13: 'poland',
 14: 'egypt',
 15: 'greece',
 16: 'china',
 17: 'spain',
 18: 'mexico'}

Check indices 7 and 8.
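One possible workaround (not part of the original answer) is to skip the word-level Tokenizer for the labels and map each label string to a single integer yourself, so a multi-word label such as 'New Zealand' stays one item. A minimal sketch, reusing the variables defined above (label_to_id is a hypothetical name):

# Build a manual label -> id mapping so each label is exactly one integer.
label_to_id = {label: idx + 1 for idx, label in enumerate(Unique_Labels_List)}

training_label_seq = np.array([label_to_id[label] for label in T_L])
validation_label_seq = np.array([label_to_id[label] for label in V_L])

# The lengths now match the number of label strings.
print(len(T_L), training_label_seq.shape)
print(len(V_L), validation_label_seq.shape)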