How to perform clustering on text contained in an excel file?
I am trying to create clusters from the text contained in an excel file, but I get the error "AttributeError: 'int' object has no attribute 'lower'".
Sample.xlsx is a file containing data like the following:
I created a list called corpus, where each row holds a unique piece of text, and I run into this problem while vectorizing the corpus.
```python
import pandas as pd
import numpy as np
data=pd.read_excel('sample.xlsx')
idea=data.iloc[:,0:1] #Selecting the first column that has text.
#Converting the column of data from excel sheet into a list of documents, where each document corresponds to a group of sentences.
corpus=[]
for index,row in idea.iterrows():
    corpus.append(row['_index_text_data'])
# CountVectorizer, then TF-IDF transformer
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)  # ERROR AFTER EXECUTING THIS LINE
#vectorizer.get_feature_names()
#print(X.toarray())
from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer(smooth_idf=False)
tfidf = transformer.fit_transform(X)
print(tfidf.shape)
from sklearn.cluster import KMeans
num_clusters = 5 #Change it according to your data.
km = KMeans(n_clusters=num_clusters)
km.fit(tfidf)
clusters = km.labels_.tolist()
idea = {'Idea': corpus, 'Cluster': clusters}  # Create a dict pairing each doc with its cluster number.
frame=pd.DataFrame(idea,index=[clusters], columns=['Idea','Cluster']) # Converting it into a dataframe.
print("\n")
print(frame) #Print the doc with the labeled cluster number.
print("\n")
print(frame['Cluster'].value_counts())  # Print the count of docs belonging to each cluster.
```
Expected result:
Error: "AttributeError: 'int' object has no attribute 'lower'"
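For context, a minimal sketch that reproduces the same error: CountVectorizer lowercases each document by default, so a cell that pandas reads as an int breaks at that step (the `docs` list below is made up for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer

# One "document" is an int, as happens when an Excel cell holds a bare number.
docs = ["first idea", 42, "another idea"]

vectorizer = CountVectorizer()
vectorizer.fit_transform(docs)  # raises AttributeError: 'int' object has no attribute 'lower'
```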
If anyone is looking for the answer to this question: simply convert the whole corpus to text by adding `corpus = [str(item) for item in corpus]` right after the for loop in the code above.
New code:
```python
corpus = []
for index, row in idea.iterrows():
    corpus.append(row['_index_text_data'])
corpus = [str(item) for item in corpus]
```
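A more pandas-idiomatic way to get the same result (a sketch, assuming the text sits in the first column of sample.xlsx) is to cast the column to str before building the list, which removes the need for the loop:

```python
import pandas as pd

data = pd.read_excel('sample.xlsx')
# astype(str) turns any numeric cells into strings, so CountVectorizer can call .lower() on them.
corpus = data.iloc[:, 0].astype(str).tolist()
```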