How to train a Word2Vec model on a Wikipedia page using gensim?
After reading this article, I started training my own model. The problem is that the author never makes clear what the sentences passed to Word2Vec should look like.
I downloaded the text of a Wikipedia page, and since it is written as prose I made a list of sentences out of it:
sentences = [word for word in wikipage.content.split('.')]
So, for example, sentences[0] looks like:
'Machine learning is the subfield of computer science that gives computers the ability to learn without being explicitly programmed'
Then I tried to train a model with this list:
model = Word2Vec(sentences, min_count=2, size=50, window=10, workers=4)
But the model's vocabulary consists of single characters! For example, the output of model.wv.vocab.keys() is:
dict_keys([',', 'q', 'D', 'B', 'p', 't', 'o', '(', ')', '0', 'V', ':', 'j', 's', 'R', '{', 'g', '-', 'y', 'c', '9', 'I', '}', '1', 'M', ';', '`', '\n', 'i', 'r', 'a', 'm', '–', 'v', 'N', 'h', '/', 'P', 'F', '8', '"', '’', 'W', 'T', 'u', 'U', '?', ' ', 'n', '2', '=', 'w', 'C', 'O', '6', '&', 'd', '4', 'S', 'J', 'E', 'b', 'L', '$', 'l', 'e', 'H', '≈', 'f', 'A', "'", 'x', '\\', 'K', 'G', '3', '%', 'k', 'z'])
What am I doing wrong? Thanks in advance!
The input to the Word2Vec model object should be an iterable of tokenized sentences, i.e. each sentence is a list of word tokens. Because you passed a list of plain strings, gensim iterated over each string character by character, which is why your vocabulary ended up being single characters. Tokenize the text first, for example with the tokenization functions from nltk (you may need to run nltk.download('punkt') once to fetch the tokenizer data):
>>> import wikipedia
>>> from nltk import sent_tokenize, word_tokenize
>>> page = wikipedia.page('machine learning')
>>> sentences = [word_tokenize(sent) for sent in sent_tokenize(page.content)]
>>> sentences[0]
['Machine', 'learning', 'is', 'the', 'subfield', 'of', 'computer', 'science', 'that', 'gives', 'computers', 'the', 'ability', 'to', 'learn', 'without', 'being', 'explicitly', 'programmed', '.']
and feed that in:
>>> from gensim.models import Word2Vec
>>> model = Word2Vec(sentences, min_count=2, size=50, window=10, workers=4)
>>> list(model.wv.vocab.keys())[:10]
['sparsely', '(', 'methods', 'their', 'typically', 'information', 'assessment', 'False', 'often', 'problems']
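As a quick sanity check you can query the trained vectors, for example with most_similar (a minimal sketch; the neighbours returned depend on the page content and random initialisation, and the query word must have survived the min_count filter):
>>> model.wv.most_similar('learning', topn=5)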
In general, though, a generator of sentences, where each sentence is itself a generator of words, also works, e.g.:
>>> from gensim.utils import tokenize
>>> paragraphs = map(tokenize, page.content.split('\n')) # paragraphs
>>> model = Word2Vec(paragraphs, min_count=2, size=50, window=10, workers=4)
>>> list(model.wv.vocab.keys())[:10]
['sparsely', 'methods', 'their', 'typically', 'information', 'assessment', 'False', 'often', 'problems', 'symptoms']
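Either way, once the model is trained you can look up individual vectors and save the model for reuse (the filename here is just an illustrative choice):
>>> model.wv['learning']                 # a 50-dimensional vector, since size=50
>>> model.save('wiki_w2v.model')
>>> model = Word2Vec.load('wiki_w2v.model')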