Python nltk incorrect sentence tokenization with custom abbreviations

I am using the nltk tokenize library to split English sentences. Many sentences contain abbreviations such as e.g. or eg., so I updated the tokenizer with these custom abbreviations. However, I found strange tokenization behavior with one sentence:

import nltk

nltk.download("punkt")
sentence_tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")

extra_abbreviations = ['e.g', 'eg']
sentence_tokenizer._params.abbrev_types.update(extra_abbreviations)

line = 'Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. Karma, Tape)'

for s in sentence_tokenizer.tokenize(line):
    print(s)

# OUTPUT
# Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g.
# Karma, Tape)

As you can see, the tokenizer does not split at the first abbreviation (correct), but it does split at the second one (incorrect).

Strangely, if I change the word Karma to anything else, it works fine.

import nltk

nltk.download("punkt")
sentence_tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")

extra_abbreviations = ['e.g', 'eg']
sentence_tokenizer._params.abbrev_types.update(extra_abbreviations)

line = 'Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. SomethingElse, Tape)'

for s in sentence_tokenizer.tokenize(line):
    print(s)

# OUTPUT
# Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. SomethingElse, Tape)

Any idea why this happens?

You can see why punkt makes its break decisions by using the debug_decisions method.

>>> for d in sentence_tokenizer.debug_decisions(line):
...     print(nltk.tokenize.punkt.format_debug_decision(d))
... 
Text: '(e.g. React,' (at offset 47)
Sentence break? None (default decision)
Collocation? False
'e.g.':
    known abbreviation: True
    is initial: False
'react':
    known sentence starter: False
    orthographic heuristic suggests is a sentence starter? unknown
    orthographic contexts in training: {'MID-UC', 'MID-LC'}

Text: '(e.g. Karma,' (at offset 80)
Sentence break? True (abbreviation + orthographic heuristic)
Collocation? False
'e.g.':
    known abbreviation: True
    is initial: False
'karma':
    known sentence starter: False
    orthographic heuristic suggests is a sentence starter? True
    orthographic contexts in training: {'MID-LC'}

This tells us that in the corpus used for training, both 'react' and 'React' appeared in the middle of sentences, so punkt does not break before 'React' in your line. However, only the lowercase form of 'karma' occurred, so punkt considers it a likely sentence starter.
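Those orthographic contexts ('MID-UC', 'MID-LC', etc.) are stored per word as a bitmask of module-level flags in nltk.tokenize.punkt. A small sketch of decoding such a mask (the decode_ortho helper and the flag-name mapping are mine, not part of nltk's public API):

```python
from nltk.tokenize import punkt

# Map readable context names to punkt's internal bit flags.
# BEG/MID/UNK = position in sentence; UC/LC = upper/lowercase first letter.
FLAGS = {
    "BEG-UC": punkt._ORTHO_BEG_UC,
    "MID-UC": punkt._ORTHO_MID_UC,
    "UNK-UC": punkt._ORTHO_UNK_UC,
    "BEG-LC": punkt._ORTHO_BEG_LC,
    "MID-LC": punkt._ORTHO_MID_LC,
    "UNK-LC": punkt._ORTHO_UNK_LC,
}

def decode_ortho(mask):
    """Hypothetical helper: turn an ortho_context bitmask into flag names."""
    return {name for name, bit in FLAGS.items() if mask & bit}

# ortho_context maps a lowercased token type to its accumulated bitmask.
params = punkt.PunktParameters()
params.add_ortho_context("karma", punkt._ORTHO_MID_LC)
print(decode_ortho(params.ortho_context["karma"]))  # {'MID-LC'}
```

This matches what debug_decisions printed for 'karma' above: its training contexts were only {'MID-LC'}, with no uppercase mid-sentence evidence.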

Note that this is consistent with the library's documentation:

However, Punkt is designed to learn parameters (a list of abbreviations, etc.) unsupervised from a corpus similar to the target domain. The pre-packaged models may therefore be unsuitable: use PunktSentenceTokenizer(text) to learn parameters from the given text.

PunktTrainer learns parameters such as a list of abbreviations (without supervision) from portions of text. Using a PunktTrainer directly allows for incremental training and modification of the hyper-parameters used to decide what is considered an abbreviation, etc.

So a quick hack for this particular case is to tweak the private _params further, telling punkt that 'Karma' may also appear mid-sentence in uppercase:

>>> sentence_tokenizer._params.ortho_context['karma'] |= nltk.tokenize.punkt._ORTHO_MID_UC
>>> sentence_tokenizer.tokenize(line)
['Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. Karma, Tape)']

Perhaps instead you should add extra training data from CVs that contain all these library names:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer
trainer = PunktTrainer()
# tweak trainer params here if helpful
trainer.train(my_corpus_of_concatted_tech_cvs)
sentence_tokenizer = PunktSentenceTokenizer(trainer.get_params())
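A runnable sketch of the same approach, using a tiny in-line sample as a stand-in for the real CV corpus (the sample text and the INCLUDE_ALL_COLLOCS tweak are illustrative choices, not requirements):

```python
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer

# Stand-in for a real corpus of concatenated tech CVs: text where
# 'Karma' also appears mid-sentence, giving punkt the evidence it lacked.
sample = (
    "We use Karma for testing. Experience with Karma is required. "
    "Our stack includes Karma and Tape for unit tests."
)

trainer = PunktTrainer()
trainer.INCLUDE_ALL_COLLOCS = True  # one of the tunable hyper-parameters
trainer.train(sample)

# PunktSentenceTokenizer accepts the learned parameters directly.
tok = PunktSentenceTokenizer(trainer.get_params())
sentences = tok.tokenize("We test with Karma. Karma runs in the browser.")
for s in sentences:
    print(s)
```

With enough domain text, the learned ortho_context for 'karma' will include mid-sentence uppercase occurrences, so no manual bitmask surgery is needed.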