KMeans clustering - Value error: n_samples=1 should be >= n_cluster

I am experimenting with three time-series datasets that have different characteristics; the format of my data is as follows.

    0.086206438,10
    0.086425551,12
    0.089227066,20
    0.089262508,24
    0.089744425,30
    0.090036815,40
    0.090054172,28
    0.090377569,28
    0.090514071,28
    0.090762872,28
    0.090912691,27

The first column is the timestamp. For reproducibility reasons, I am sharing the data here. Starting from the second column, I want to read the current row and compare it with the value of the previous row. If it is greater, I keep comparing. If the current value is smaller than the previous row's value, I want to divide the current value (the smaller one) by the previous value (the larger one). Accordingly, here is the code:

import numpy as np
import matplotlib.pyplot as plt

protocols = {}

types = {"data1": "data1.csv", "data2": "data2.csv", "data3": "data3.csv"}

for protname, fname in types.items():
    col_time,col_window = np.loadtxt(fname,delimiter=',').T
    trailing_window = col_window[:-1] # "past" values at a given index
    leading_window  = col_window[1:]  # "current" values at a given index
    decreasing_inds = np.where(leading_window < trailing_window)[0]
    quotient = leading_window[decreasing_inds]/trailing_window[decreasing_inds]
    quotient_times = col_time[decreasing_inds]

    protocols[protname] = {
        "col_time": col_time,
        "col_window": col_window,
        "quotient_times": quotient_times,
        "quotient": quotient,
    }

    plt.figure(); plt.clf()
    plt.plot(quotient_times,quotient, ".", label=protname, color="blue")
    plt.ylim(0, 1.0001)
    plt.title(protname)
    plt.xlabel("time")
    plt.ylabel("quotient")
    plt.legend()
    plt.show()

This produces the following three plots, one for each of the datasets I shared.

Based on the code given above, and as can be seen from the points in the plots, data1 is very consistent with its values around 1, data2 has two quotients (with values concentrated around roughly 0.5 or 0.8), and data3's values are concentrated around two values (roughly 0.5 or 0.7). That way, given a new data point (with quotient and quotient_times), I want to know which cluster it belongs to by building each dataset stacking these two transformed features, quotient and quotient_times. I am trying KMeans clustering as follows:

from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0)
k_means.fit(quotient)

But this gives me an error: ValueError: n_samples=1 should be >= n_clusters=3. How can we fix this error?

Update: sampled quotient data = array([ 0.7 , 0.7 , 0.4973262 , 0.7008547 , 0.71287129, 0.704 , 0.49723757, 0.49723757, 0.70676692, 0.5 , 0.5 , 0.70754717, 0.5 , 0.49723757, 0.70322581, 0.5 , 0.49723757, 0.49723757, 0.5 , 0.49723757])

Please try the code below. A brief description of what I did:

First, I built the dataset sample = np.vstack((quotient_times, quotient)).T and standardized it, so that clustering becomes easier. Next, I applied DBSCAN with multiple hyperparameters (eps and min_samples) until I found the one that separated the points best. Finally, I plotted the data with their respective labels; since you are working with two-dimensional data, it is easy to visualize how good the clustering is.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

types = {"data1": "data1.csv", "data2": "data2.csv", "data3": "data3.csv"}

dataset = np.empty((0, 2))

for protname, fname in types.items():
    col_time,col_window = np.loadtxt(fname,delimiter=',').T

    trailing_window = col_window[:-1] # "past" values at a given index
    leading_window  = col_window[1:]  # "current" values at a given index
    decreasing_inds = np.where(leading_window < trailing_window)[0]
    quotient = leading_window[decreasing_inds]/trailing_window[decreasing_inds]
    quotient_times = col_time[decreasing_inds]

    sample = np.vstack((quotient_times, quotient)).T
    dataset = np.append(dataset, sample, axis=0)

scaler = StandardScaler()
dataset = scaler.fit_transform(dataset)

dbscan = DBSCAN(eps=0.6, min_samples=1)
dbscan.fit(dataset)

# one cluster label per point, used to colour the scatter plot
colors = dbscan.labels_

plt.figure()
plt.title('Dataset 1,2,3')
plt.xlabel("time")
plt.ylabel("quotient")
plt.scatter(dataset[:, 0], dataset[:, 1], c=colors)
plt.show()
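
Since your original goal was to tell which cluster a new point belongs to: DBSCAN has no predict() method, so a common workaround is to give a new point the label of its nearest core sample. Below is a minimal sketch assuming the fitted dbscan and scaler from the snippet above; the helper assign_cluster and the example point are hypothetical, not taken from your data.

def assign_cluster(new_point, dbscan, scaler):
    # scale the raw (time, quotient) pair exactly like the training data
    scaled = scaler.transform(np.asarray(new_point, dtype=float).reshape(1, -1))
    # distance from the new point to every core sample found by DBSCAN
    dists = np.linalg.norm(dbscan.components_ - scaled, axis=1)
    nearest = np.argmin(dists)
    # return the cluster label of the closest core sample
    return dbscan.labels_[dbscan.core_sample_indices_[nearest]]

# hypothetical new observation (time, quotient)
print(assign_cluster([0.09, 0.7], dbscan, scaler))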

As it stands, your quotient variable is now a single sample; here I get a different error message, probably due to a different Python/scikit-learn version, but the essence is the same:

import numpy as np
quotient = np.array([ 0.7 , 0.7 , 0.4973262 , 0.7008547 , 0.71287129, 0.704 , 0.49723757, 0.49723757, 0.70676692, 0.5 , 0.5 , 0.70754717, 0.5 , 0.49723757, 0.70322581, 0.5 , 0.49723757, 0.49723757, 0.5 , 0.49723757])
quotient.shape
# (20,)

from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0)
k_means.fit(quotient)

This produces the following error:

ValueError: Expected 2D array, got 1D array instead:
array=[0.7        0.7        0.4973262  0.7008547  0.71287129 0.704
 0.49723757 0.49723757 0.70676692 0.5        0.5        0.70754717
 0.5        0.49723757 0.70322581 0.5        0.49723757 0.49723757
 0.5        0.49723757].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.

Despite the different wording, it is no different from yours; essentially it says that your data look like a single sample.

Following the first suggestion, i.e. taking into account that quotient contains a single feature (column), solves the problem:

k_means.fit(quotient.reshape(-1,1))
# result
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
    n_clusters=3, n_init=10, n_jobs=None, precompute_distances='auto',
    random_state=0, tol=0.0001, verbose=0)
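
From here you can inspect the fitted model and, as per your original goal, assign new quotient values to clusters by reshaping them the same way. A quick sketch (the new_quotients values below are made up for illustration):

print(k_means.labels_)            # cluster index assigned to each of the 20 samples
print(k_means.cluster_centers_)   # one centre per cluster (a single feature each)

new_quotients = np.array([0.5, 0.7, 0.95]).reshape(-1, 1)  # hypothetical new values
print(k_means.predict(new_quotients))                       # cluster index for each new value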