sklearn.feature_selection.chi2 returns list of NaN values
I have the following dataset (I am only uploading a 4-row sample; the real one has 15,000 rows):
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
from nltk.corpus import stopwords
from sklearn.feature_selection import chi2
quotes=["Sip N Shop Come thru right now Marjais PopularNobodies MMR Marjais SipNShop",
"I do not know about you but My family and I will not take the Covid19 vaccine anytime soon",
"MSignorile Immunizations should be mandatory Period In Oklahoma they will not let kids go to school without them It is dangerous otherwise",
"President Obama spoke in favor of vaccination for children Fox will start telling its viewers to choose against vaccination in 321"]
labels=[0,1,2,0]
dummy = pd.DataFrame({"quote": quotes, "label":labels})
I want to apply the well-known chi-squared test to reduce the number of irrelevant words for each class (0, 1, 2), where 0: neutral, 1: positive, 2: negative.
Below is my approach (similar to the one implemented here).
In short, I create a list of zeros whose length equals the length of the corpus; the zeros stand for the first label, y = 0. For the second label (1 = positive) I create a list of ones, and likewise for the third label (2 = negative).
After applying this three times (once per target label), I will have three lists, each holding the words most relevant to its label. This final list will become the new vocabulary of my TF-IDF vectorizer.
def tweeter_tokenizer(tweet):
    return tweet.split(' ')

# NLTK's English stop word list, passed to the vectorizer below
english_stopwords = stopwords.words('english')

vectorizer = TfidfVectorizer(tokenizer=tweeter_tokenizer, ngram_range=(1,2), stop_words=english_stopwords)
vectorizer.fit(dummy["quote"])
X_train = vectorizer.transform(dummy["quote"])
y_train = dummy["label"]
feature_names = vectorizer.get_feature_names_out()

y_neutral = np.array([0]*X_train.shape[0])  # one constant label (0) per document
pValue = 0.90
chi_neutral, p_neutral = chi2(X_train, y_neutral)
chi_neutral
The chi_neutral object is:
array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,
nan, nan])
Finally, I want to create a dataframe whose length equals the number of unique tokens (feature_names) for each label, keeping only the words with a score > pValue. The dataframe shows how many of the corpus's tokens are dependent on class 0 (neutral). The same approach will then be applied to the other labels (1: positive, 2: negative).
y_df = np.array([0]*X_train.shape[1])
tokens_neutral_dependent = pd.DataFrame({
    "tweet_token": feature_names,
    "chi2_score": 1 - p_neutral,
    "neutral_label": y_df  # length = length of feature_names
})
tokens_neutral_dependent = tokens_neutral_dependent.sort_values(["neutral_label","chi2_score"], ascending=[True,False])
tokens_neutral_dependent = tokens_neutral_dependent[tokens_neutral_dependent["chi2_score"] > pValue]
tokens_neutral_dependent.shape
I don't think it really makes sense to compute the chi-squared statistic without the classes attached. The call chi2(X_train, y_neutral) is asking "assuming the class and the feature are independent, what are the odds of getting this distribution?" But every sample you pass in belongs to the same class.
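As a quick sanity check (a minimal sketch of my own, not from the original code), you can confirm that chi2 returns NaN whenever every sample carries the same label, while two distinct labels give finite statistics:
import numpy as np
from sklearn.feature_selection import chi2

X_demo = np.array([[1.0, 0.0],
                   [0.0, 2.0],
                   [1.0, 1.0]])

# Constant labels: only one class is present, so every statistic is NaN.
print(chi2(X_demo, np.zeros(3, dtype=int)))

# Two distinct labels: finite chi-squared statistics and p-values.
print(chi2(X_demo, np.array([0, 1, 0])))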
I would suggest instead:
chi_neutral, p_neutral = chi2(X_train, y_train)
If you are interested in the chi-squared statistic between specific classes, you can first filter the dataset down to two classes and then run the chi-squared test. But that step is not required.
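To connect this back to the per-class word lists you are after, here is a minimal sketch (my own adaptation, not part of the original answer): instead of a constant y_neutral, use a one-vs-rest indicator such as the hypothetical y_is_neutral below, so chi2 always sees two classes:
# Hypothetical one-vs-rest label: 1 for neutral tweets, 0 for everything else,
# so the chi-squared test still compares two classes and avoids the NaN result.
y_is_neutral = (y_train == 0).astype(int)
chi_neutral, p_neutral = chi2(X_train, y_is_neutral)

tokens_neutral_dependent = pd.DataFrame({
    "tweet_token": feature_names,
    "chi2_score": chi_neutral,
    "p_value": p_neutral
})

# Keep tokens whose p-value suggests dependence on the neutral class;
# p < 0.10 mirrors the 1 - p > 0.90 threshold used in the question.
tokens_neutral_dependent = tokens_neutral_dependent[tokens_neutral_dependent["p_value"] < 0.10]
tokens_neutral_dependent = tokens_neutral_dependent.sort_values("chi2_score", ascending=False)
print(tokens_neutral_dependent.head())
The same pattern applies to the positive and negative classes by changing the value compared against in y_train == 0.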