Spacy.io Entity Linker "not enough values to unpack (expected 2, got 0)"
I have been trying to use the Wikipedia Entity Linker published by spaCy here.
When running "wikidata_train_entity_linker.py", I get the following error in the 3rd epoch.
I need help understanding why this error occurs. I googled it, and the only mention of a similar problem did not include a solution.
2020-09-03 17:54:31,725 - INFO - entity_linker_evaluation - Counts: {'EVENT': 2409, 'GPE': 16137, 'NORP': 2601, 'ORG': 12739, 'PERSON': 23443}
2020-09-03 17:54:31,725 - INFO - entity_linker_evaluation - Random: F-score = 0.331 | Recall = 0.199 | Precision = 0.983 | F-score by label = {'EVENT': 0.9166104742638795, 'GPE': 0.5135877024430415, 'NORP': 0.2743334404111789, 'ORG': 0.2596817157297999, 'PERSON': 0.11490371085112372}
2020-09-03 17:54:31,725 - INFO - entity_linker_evaluation - Prior: F-score = 0.331 | Recall = 0.199 | Precision = 0.983 | F-score by label = {'EVENT': 0.9166104742638795, 'GPE': 0.5135877024430415, 'NORP': 0.2743334404111789, 'ORG': 0.2596817157297999, 'PERSON': 0.11490371085112372}
2020-09-03 17:54:31,725 - INFO - entity_linker_evaluation - Oracle: F-score = 0.332 | Recall = 0.199 | Precision = 1.0 | F-score by label = {'EVENT': 0.91681654676259, 'GPE': 0.5161379310344828, 'NORP': 0.2820343461030383, 'ORG': 0.2596994535519126, 'PERSON': 0.11490833065294308}
Traceback (most recent call last):
File "wikidata_train_entity_linker.py", line 226, in <module>
plac.call(main)
File "/Users/eliranboraks/opt/anaconda3/envs/spacy/lib/python3.6/site-packages/plac_core.py", line 328, in call
cmd, result = parser.consume(arglist)
File "/Users/eliranboraks/opt/anaconda3/envs/spacy/lib/python3.6/site-packages/plac_core.py", line 207, in consume
return cmd, self.func(*(args + varargs + extraopts), **kwargs)
File "wikidata_train_entity_linker.py", line 172, in main
docs, golds = zip(*train_batch)
ValueError: not enough values to unpack (expected 2, got 0)
The command I used is: python3 wikidata_train_entity_linker.py -o output_lt_2m_model -l "FAC,LOC,PRODUCT,WORK_OF_ART,LAW,LANGUAGE,DATE,TIME,PERCENT,MONEY,QUANTITY,ORDINAL,CARDINAL" -t 500000 -d 50000 output_lt_2m
KB directory created successfully
2020-09-03 12:13:02,283 - INFO - train_descriptions - Trained entity descriptions on 2155 (non-unique) descriptions across 5 epochs
2020-09-03 12:13:02,283 - INFO - train_descriptions - Final loss: 0.8585907478066995
2020-09-03 12:13:02,283 - INFO - kb_creator - Getting entity embeddings
2020-09-03 12:13:02,535 - INFO - train_descriptions - Encoded: 431 entities
2020-09-03 12:13:02,535 - INFO - kb_creator - Adding 431 entities
2020-09-03 12:13:02,544 - INFO - kb_creator - Adding aliases from Wikipedia and Wikidata
2020-09-03 12:13:02,544 - INFO - kb_creator - Adding WP aliases
2020-09-03 12:13:02,651 - INFO - __main__ - kb entities: 431
2020-09-03 12:13:02,651 - INFO - __main__ - kb aliases: 326
2020-09-03 12:13:05,640 - INFO - __main__ - Done!
Environment:
macOS Catalina
Python 3.6
spaCy 2.3.2
Platform: Darwin-19.5.0-x86_64-i386-64bit
This error means that for a certain training batch, the algorithm could not find suitable gold links to train on. I'm afraid you'll have to dig into the code and data to see exactly what is happening. It looks like your KB is relatively small. You can only have gold links if the NER in the pipeline hits an entry from that KB. If that doesn't happen, the EL algorithm has no data to work with and throws this (unfortunately rather ugly) error.
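The traceback itself can be reproduced in isolation: when a batch contains no (doc, gold) pairs, `zip(*batch)` produces nothing, and unpacking it into two names fails. A minimal sketch (plain Python, no spaCy needed):

```python
# A non-empty batch of (doc, gold) pairs unpacks fine.
batch = [("doc-a", "gold-a"), ("doc-b", "gold-b")]
docs, golds = zip(*batch)
print(docs)   # ('doc-a', 'doc-b')

# An empty batch: zip(*[]) is zip() with no arguments, which yields
# zero items, so there is nothing to unpack into two variables.
empty_batch = []
try:
    docs, golds = zip(*empty_batch)
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 0)
```

This is why the error appears mid-training: one particular batch ended up with no usable gold links.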
You could try moving this line
docs, golds = zip(*train_batch)
to just below it, inside the try block. The error should then be logged, but hopefully training can continue. That will show you whether the problem is limited to that one training batch, or more general.
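As a self-contained sketch of that suggestion (the variable names and the update function are stand-ins, not the script's actual code): by unpacking inside the try block, an empty batch is logged and skipped instead of crashing the run.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def update_model(docs, golds):
    # Stand-in for the real nlp.update(docs, golds, ...) call.
    logger.info("trained on %d docs", len(docs))

batches = [
    [("doc-a", "gold-a"), ("doc-b", "gold-b")],
    [],  # a batch with no gold links -> would crash outside the try
]

for train_batch in batches:
    try:
        # Moved inside the try block, as suggested above.
        docs, golds = zip(*train_batch)
        update_model(docs, golds)
    except ValueError as e:
        logger.error("skipping batch: %s", e)
```

If only the occasional batch is logged as skipped, the problem is isolated; if every batch fails, the KB/NER mismatch described above is systemic.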