CountVectorizer ignoring 'I'

Why does CountVectorizer in sklearn ignore the pronoun "I"?

ngram_vectorizer = CountVectorizer(analyzer = "word", ngram_range = (2,2), min_df = 1)
ngram_vectorizer.fit_transform(['HE GAVE IT TO I'])
<1x3 sparse matrix of type '<class 'numpy.int64'>'
    with 3 stored elements in Compressed Sparse Row format>
ngram_vectorizer.get_feature_names()
['gave it', 'he gave', 'it to']

The default tokenizer only considers words of 2 or more characters.

You can change this behavior by passing an appropriate token_pattern to your CountVectorizer.

The default pattern is (see the signature in the docs):

token_pattern=r"(?u)\b\w\w+\b"
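
To see why "I" gets dropped, you can try the same pattern directly with Python's re module (a minimal sketch; re.findall here only stands in for the vectorizer's tokenization step, which additionally lowercases the text):

import re

text = 'HE GAVE IT TO I'
# Default pattern: \w\w+ requires at least two word characters, so 'I' is skipped
print(re.findall(r"(?u)\b\w\w+\b", text))   # ['HE', 'GAVE', 'IT', 'TO']
# Relaxed pattern: \w+ also matches single-character tokens such as 'I'
print(re.findall(r"(?u)\b\w+\b", text))     # ['HE', 'GAVE', 'IT', 'TO', 'I']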

You can get a CountVectorizer that does not drop single-character words by changing this default, for example:

from sklearn.feature_extraction.text import CountVectorizer

# Use a raw string so \b stays a word boundary (in a plain string "\b" is a backspace),
# and \w+ so single-character tokens like 'I' are kept
ngram_vectorizer = CountVectorizer(analyzer="word", ngram_range=(2, 2),
                                   token_pattern=r"(?u)\b\w+\b", min_df=1)
ngram_vectorizer.fit_transform(['HE GAVE IT TO I'])
print(ngram_vectorizer.get_feature_names())

which gives:

['gave it', 'he gave', 'it to', 'to i']
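
Note that get_feature_names() has been removed in newer scikit-learn releases (1.2 and later); if you are on a recent version, the equivalent call is get_feature_names_out(), which returns a NumPy array instead of a list:

print(ngram_vectorizer.get_feature_names_out())
# ['gave it' 'he gave' 'it to' 'to i']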