Efficient way of accessing data grouped by KMeans clusters
I am trying to plot a circle around each centroid, with a radius extending to the farthest point belonging to that cluster. Right now, the circles I plot have radii that extend to the point in the entire training set that is farthest from the cluster center.
Here is my code:
def KMeansModel(n):
    pca = PCA(n_components=2)
    reduced_train_data = pca.fit_transform(train_data)
    KM = KMeans(n_clusters=n)
    KM.fit(reduced_train_data)
    plt.plot(reduced_train_data[:, 0], reduced_train_data[:, 1], 'k.', markersize=2)
    centroids = KM.cluster_centers_
    # Plot the centroids as a red X
    plt.scatter(centroids[:, 0], centroids[:, 1],
                marker='x', color='r')
    for i in centroids:
        print(np.max(metrics.pairwise_distances(i, reduced_train_data)))
        plt.gca().add_artist(plt.Circle(i, np.max(metrics.pairwise_distances(i, reduced_train_data)), fill=False))
    plt.show()
out = [KMeansModel(n) for n in np.arange(1,16,1)]
When you do

metrics.pairwise_distances(i, reduced_train_data)

you compute the distances to all training points, not only to the points of the relevant cluster. To find the positions of the training points that belong to cluster ind, you can do

np.where(KM.labels_ == ind)[0]
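For instance, with a toy labels array (the values below are purely illustrative, not from the post), np.where returns the positions of the points assigned to a given cluster:

```python
import numpy as np

# Hypothetical cluster assignments for 6 points
labels = np.array([0, 1, 0, 2, 1, 0])

# Positions of the points that belong to cluster 0
class_inds = np.where(labels == 0)[0]
print(class_inds)  # → [0 2 5]
```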
So, inside the for loop

for i in centroids:

you need to access only the training points of that specific cluster. This will do the job:
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn import metrics
import matplotlib.pyplot as plt
import numpy as np

def KMeansModel(n):
    pca = PCA(n_components=2)
    reduced_train_data = pca.fit_transform(train_data)
    KM = KMeans(n_clusters=n)
    KM.fit(reduced_train_data)
    plt.plot(reduced_train_data[:, 0], reduced_train_data[:, 1], 'k.', markersize=2)
    centroids = KM.cluster_centers_
    # Plot the centroids as a red X
    plt.scatter(centroids[:, 0], centroids[:, 1],
                marker='x', color='r')
    for ind, i in enumerate(centroids):
        # Indices of the training points assigned to cluster ind
        class_inds = np.where(KM.labels_ == ind)[0]
        # reshape(1, -1): pairwise_distances expects 2-D input
        max_dist = np.max(metrics.pairwise_distances(i.reshape(1, -1), reduced_train_data[class_inds]))
        print(max_dist)
        plt.gca().add_artist(plt.Circle(i, max_dist, fill=False))
    plt.show()
out = [KMeansModel(n) for n in np.arange(1,16,1)]
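Since train_data is not defined in the snippets above, here is a minimal self-contained sketch of the per-cluster radius computation, using synthetic data from make_blobs in place of the PCA-reduced training data (the plotting calls are omitted):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn import metrics
from sklearn.datasets import make_blobs

# Synthetic 2-D data standing in for reduced_train_data
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

KM = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

radii = []
for ind, c in enumerate(KM.cluster_centers_):
    # Only the points assigned to cluster ind
    class_inds = np.where(KM.labels_ == ind)[0]
    # Radius: distance from the centroid to the farthest point of *this* cluster
    max_dist = np.max(metrics.pairwise_distances(c.reshape(1, -1), X[class_inds]))
    radii.append(max_dist)
```

Each value in radii can then be used directly as the circle radius for its centroid, and by construction it is never larger than the distance to the globally farthest point.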
Here is one of the figures I get with this code: