Is it possible to change the token split rules for a spaCy tokenizer?

By default, the (German) spaCy tokenizer does not split on slashes, underscores, or asterisks, which is exactly what I need (so "der/die" yields a single token).

It does split on parentheses, however, so "dies(und)das" gets split into 5 tokens. Is there a (simple) way to tell the default tokenizer not to split on parentheses that have a letter on each side and no surrounding spaces?
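For illustration, a minimal check of the default behaviour (a sketch, assuming spaCy 2.x with the German model installed under the shortcut 'de'):

import spacy

nlp = spacy.load('de')
print([t.text for t in nlp("der/die")])       # ['der/die'] - no split on the slash
print([t.text for t in nlp("dies(und)das")])  # ['dies', '(', 'und', ')', 'das'] - 5 tokens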

And where exactly is the splitting on parentheses defined in the tokenizer?

The split on parentheses is defined in this line, which splits on parentheses between two letters:

https://github.com/explosion/spaCy/blob/23ec07debdd568f09c7c83b10564850f9fa67ad4/spacy/lang/de/punctuation.py#L18
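For reference, the pattern on that line looks roughly like this (the exact character class may differ slightly at other commits; treat it as a sketch):

r"(?<=[{a}])([{q}\)\]\(\[])(?=[{a}])".format(a=ALPHA, q=_quotes)

It matches a quote, bracket, or parenthesis that has a letter (ALPHA) directly on both sides, which is why "dies(und)das" gets split.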

There is no simple way to remove an infix pattern, but you can define a custom tokenizer that does what you want. One way is to copy the infix definition from spacy/lang/de/punctuation.py and modify it:

import re
import spacy
from spacy.tokenizer import Tokenizer
from spacy.lang.char_classes import ALPHA, ALPHA_LOWER, ALPHA_UPPER, CONCAT_QUOTES, LIST_ELLIPSES, LIST_ICONS
from spacy.lang.de.punctuation import _quotes
from spacy.util import compile_prefix_regex, compile_infix_regex, compile_suffix_regex

def custom_tokenizer(nlp):
    # Infix patterns copied from spacy/lang/de/punctuation.py, with the
    # parentheses removed from the bracket/quote pattern below
    infixes = (
        LIST_ELLIPSES
        + LIST_ICONS
        + [
            r"(?<=[{al}])\.(?=[{au}])".format(al=ALPHA_LOWER, au=ALPHA_UPPER),
            r"(?<=[{a}])[,!?](?=[{a}])".format(a=ALPHA),
            r'(?<=[{a}])[:<>=](?=[{a}])'.format(a=ALPHA),
            r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
            r"(?<=[{a}])([{q}\]\[])(?=[{a}])".format(a=ALPHA, q=_quotes),
            r"(?<=[{a}])--(?=[{a}])".format(a=ALPHA),
            r"(?<=[0-9])-(?=[0-9])",
        ]
    )

    infix_re = compile_infix_regex(infixes)

    # Reuse the default prefix, suffix, and token_match rules as well as the
    # tokenizer exceptions; only the infix patterns are replaced
    return Tokenizer(nlp.vocab, prefix_search=nlp.tokenizer.prefix_search,
                                suffix_search=nlp.tokenizer.suffix_search,
                                infix_finditer=infix_re.finditer,
                                token_match=nlp.tokenizer.token_match,
                                rules=nlp.Defaults.tokenizer_exceptions)


nlp = spacy.load('de')
nlp.tokenizer = custom_tokenizer(nlp)
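
A quick sanity check with the custom tokenizer in place (output shown in the comment):

print([t.text for t in nlp("dies(und)das")])
# ['dies(und)das']

Parentheses surrounded by whitespace are still split off as before, because the prefix and suffix rules are taken over unchanged from the default tokenizer.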