How to classify text pairs using scikit-learn?
I have read many different blog posts on this topic but have not been able to find a clear solution. My situation is:
- I have a list of text pairs, each labeled 1 or -1.
- For each pair (t1, t2), I want the feature vector to be the concatenation f(t1, t2) = tfidf(t1) "concat" tfidf(t2).
Any suggestions on how to do this? I have the following code, but it raises an error:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion
from sklearn.svm import LinearSVC

count_vect = TfidfVectorizer(analyzer=u'char', ngram_range=ngram_range)
X0_train_counts = count_vect.fit_transform([x[0] for x in training_documents])
X1_train_counts = count_vect.fit_transform([x[1] for x in training_documents])
# This line raises the TypeError below: FeatureUnion is being handed the
# already-transformed matrices instead of transformer objects.
combined_features = FeatureUnion([("x0", X0_train_counts), ("x1", X1_train_counts)])
clf = LinearSVC().fit(combined_features, training_target)
average_training_accuracy += clf.score(combined_features, training_target)
Here is the error I get:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
scoreEdgesUsingClassifier(None, pos, neg, 1,ngram_range=(2,5), max_size=1000000, test_size=100000)
scoreEdgesUsingClassifier(unc, pos, neg, number_of_iterations, ngram_range, max_size, test_size)
X0_train_counts = count_vect.fit_transform([x[0] for x in training_documents])
X1_train_counts = count_vect.fit_transform([x[1] for x in training_documents])
combined_features = FeatureUnion([("x0", X0_train_counts), ("x1", X1_train_counts)])
print "Done transforming, now training classifier"
lib/python2.7/site-packages/sklearn/pipeline.pyc in __init__(self, transformer_list, n_jobs, transformer_weights)
616 self.n_jobs = n_jobs
617 self.transformer_weights = transformer_weights
--> 618 self._validate_transformers()
619
620 def get_params(self, deep=True):
lib/python2.7/site-packages/sklearn/pipeline.pyc in _validate_transformers(self)
660 raise TypeError("All estimators should implement fit and "
661 "transform. '%s' (type %s) doesn't" %
--> 662 (t, type(t)))
663
664 def _iter(self):
TypeError: All estimators should implement fit and transform. ' (0, 49025) 0.0575144797079
(254741, 38401) 0.184394443164
(254741, 201747) 0.186080393768
(254741, 179231) 0.195062580945
(254741, 156925) 0.211367771299
(254741, 90026) 0.202458920022' (type <class 'scipy.sparse.csr.csr_matrix'>) doesn't
Update
The fix is as follows:
from scipy.sparse import hstack

count_vect = TfidfVectorizer(analyzer=u'char', ngram_range=ngram_range)
# Fit a single vectorizer on both halves of each pair so they share one vocabulary.
training_docs_combined = ([x[0] for x in training_documents]
                          + [x[1] for x in training_documents])
X_train_counts = count_vect.fit_transform(training_docs_combined)
# The first half of the rows are the t1 vectors, the second half the t2
# vectors; hstack pastes them back together row by row, pair by pair.
half = len(training_docs_combined) // 2
concat_features = hstack((X_train_counts[:half], X_train_counts[half:]))
clf = LinearSVC().fit(concat_features, training_target)
average_training_accuracy += clf.score(concat_features, training_target)
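For completeness, scoring held-out pairs with the vectorizer fitted above might look roughly like this (a sketch; test_documents and test_target are assumed names that do not appear in the original code):

# Sketch: evaluate on unseen pairs (`test_documents`/`test_target` are
# assumed names). Note transform(), not fit_transform(), so the test
# texts are mapped into the vocabulary learned from the training data.
test_docs_combined = ([x[0] for x in test_documents]
                      + [x[1] for x in test_documents])
X_test_counts = count_vect.transform(test_docs_combined)
half = len(test_docs_combined) // 2
test_features = hstack((X_test_counts[:half], X_test_counts[half:]))
print(clf.score(test_features, test_target))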
FeatureUnion from scikit-learn takes estimators as input, not data arrays. You can either concatenate the resulting X0_train_counts and X1_train_counts matrices directly with scipy.sparse.hstack, or create two separate instances of TfidfVectorizer, combine them in a FeatureUnion, and call its fit_transform method.
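A minimal sketch of the second approach, assuming the data is available as a list of (t1, t2) tuples named pairs with matching labels; the pick helper and these variable names are illustrative, not from the original post:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC

def pick(i):
    # Select the i-th text of each pair; validate=False keeps the list of
    # strings from being coerced into a 2-D numeric array.
    return FunctionTransformer(lambda pairs: [p[i] for p in pairs],
                               validate=False)

# Each branch selects one side of the pair and vectorizes it with its own
# TfidfVectorizer; FeatureUnion hstacks the two tf-idf blocks per row.
union = FeatureUnion([
    ("t1", Pipeline([("sel", pick(0)),
                     ("tfidf", TfidfVectorizer(analyzer="char",
                                               ngram_range=(2, 5)))])),
    ("t2", Pipeline([("sel", pick(1)),
                     ("tfidf", TfidfVectorizer(analyzer="char",
                                               ngram_range=(2, 5)))])),
])

X = union.fit_transform(pairs)   # tfidf(t1) concatenated with tfidf(t2)
clf = LinearSVC().fit(X, labels)

Unlike the hstack fix above, this version fits a separate vocabulary for each side of the pair, and the fitted union can be reused on new pairs via union.transform.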