Get bigrams and trigrams in word2vec Gensim

I am currently using unigrams in my word2vec model, as below.

def review_to_sentences(review, tokenizer, remove_stopwords=False):
    # Returns a list of sentences, where each sentence is a list of words
    #
    # Use the NLTK tokenizer to split the paragraph into sentences
    raw_sentences = tokenizer.tokenize(review.strip())

    sentences = []
    for raw_sentence in raw_sentences:
        # If a sentence is empty, skip it
        if len(raw_sentence) > 0:
            # Otherwise, call review_to_wordlist to get a list of words
            sentences.append(review_to_wordlist(raw_sentence, remove_stopwords))

    # Return the list of sentences (each sentence is a list of words,
    # so this returns a list of lists)
    return sentences

However, this way I miss important bigrams and trigrams in my dataset.

E.g.,
"team work" -> I am currently getting it as "team", "work"
"New York" -> I am currently getting it as "New", "York"

So I want to capture the important bigrams, trigrams etc. in my dataset and feed them into my word2vec model.

I am new to word2vec and struggling with how to do this. Please help me.

First of all you should use gensim's `Phrases` class to get the bigrams; it works as pointed out in the documentation:
>>> phrases = Phrases(sentence_stream)  # sentence_stream: an iterable of token lists
>>> bigram = Phraser(phrases)
>>> sent = [u'the', u'mayor', u'of', u'new', u'york', u'was', u'there']
>>> print(bigram[sent])
[u'the', u'mayor', u'of', u'new_york', u'was', u'there']
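Under the hood, `Phrases` decides which adjacent pairs to join with a count-based score and merges the pairs that clear a threshold. Here is a minimal pure-Python sketch of that idea (the function names are illustrative, not gensim's API; the scorer follows the shape of gensim's default, `(count(a,b) - min_count) * N / (count(a) * count(b))`):

```python
from collections import Counter

def find_bigrams(sentences, min_count=1, threshold=1.0):
    # Count unigrams and adjacent word pairs across all sentences
    unigram_counts = Counter()
    bigram_counts = Counter()
    for sent in sentences:
        unigram_counts.update(sent)
        bigram_counts.update(zip(sent, sent[1:]))

    vocab_size = len(unigram_counts)
    accepted = set()
    for (a, b), n_ab in bigram_counts.items():
        # Score shaped like gensim's default scorer:
        # (count(a,b) - min_count) * N / (count(a) * count(b))
        score = (n_ab - min_count) * vocab_size / (unigram_counts[a] * unigram_counts[b])
        if score > threshold:
            accepted.add((a, b))
    return accepted

def merge_bigrams(sent, accepted, delimiter='_'):
    # Greedy left-to-right merge of accepted pairs, like phraser[sent]
    out, i = [], 0
    while i < len(sent):
        if i + 1 < len(sent) and (sent[i], sent[i + 1]) in accepted:
            out.append(sent[i] + delimiter + sent[i + 1])
            i += 2
        else:
            out.append(sent[i])
            i += 1
    return out
```

Note how a pair seen only once scores zero when `min_count=1`, so only pairs that co-occur repeatedly (relative to their individual frequencies) get merged.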

To get trigrams and so on, you should use the bigram model that you already have and apply `Phrases` to it again, and so on. Example:

trigram_model = Phrases(bigram_sentences)

There is also a good notebook and a video that explain how to use it .... the notebook, the video

The most important part of it is how to use it on real-life sentences, which is as follows:

# to create the bigrams
bigram_model = Phrases(unigram_sentences)

# apply the trained model to a sentence
for unigram_sentence in unigram_sentences:
    bigram_sentence = u' '.join(bigram_model[unigram_sentence])

# get a trigram model out of the bigram sentences
trigram_model = Phrases(bigram_sentences)

Hope this helps you, but next time give us more information on what you are using etc.

P.S: Now that you edited it: you are not doing anything to get bigrams, just splitting the text. You have to use `Phrases` in order to get words like New York as bigrams.

from gensim.models import Phrases

from gensim.models.phrases import Phraser

documents = ["the mayor of new york was there",
             "machine learning can be useful sometimes",
             "new york mayor was present"]

sentence_stream = [doc.split(" ") for doc in documents]
print(sentence_stream)

bigram = Phrases(sentence_stream, min_count=1, threshold=2, delimiter=b' ')

bigram_phraser = Phraser(bigram)


print(bigram_phraser)

for sent in sentence_stream:
    tokens_ = bigram_phraser[sent]

    print(tokens_)

`Phrases` and `Phraser` are what you should be looking for.

bigram = gensim.models.Phrases(data_words, min_count=1, threshold=10) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[data_words], threshold=100) 

Once you are done adding the vocab, you can use `Phraser` for faster access and efficient memory usage. Not mandatory but useful.

bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)

Thanks,