KNN without using Sklearn

I am implementing KNN without using any libraries. The problem is that my labels are numeric values:

label = [1.5171, 1.7999, 2.4493, 2.8622, 2.9961, 3.6356, 3.7742, 5.8069, 7.1357 etc..]

Each label has its own value. I want to predict the label of new data, but if every label is a distinct value, how am I supposed to pick the winning label with a majority vote like this?

prediction = max(set(label_neighbors), key=label_neighbors.count)
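(A toy illustration of the issue with that line, using made-up neighbour labels: when labels are continuous, every label usually occurs exactly once among the neighbours, so the count-based "vote" is essentially an arbitrary tie-break.)

# toy example with made-up values: labels of the 3 nearest neighbours
label_neighbors = [1.5171, 2.4493, 2.8622]
prediction = max(set(label_neighbors), key=label_neighbors.count)
print(prediction)  # each label occurs once, so the winner is just an arbitrary tie-break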

I guess you want to learn the mechanics of KNN, right? See the example code below; it should do what you're after.

import numpy as np
import scipy.spatial
from collections import Counter

# loading the Iris-Flower dataset from Sklearn
from sklearn import datasets
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = 42, test_size = 0.2)

class KNN:
    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        self.X_train = X
        self.y_train = y

    def distance(self, X1, X2):
        # Euclidean distance between two feature vectors
        return scipy.spatial.distance.euclidean(X1, X2)

    def predict(self, X_test):
        final_output = []
        for i in range(len(X_test)):
            # distance from the i-th test point to every training point
            d = []
            votes = []
            for j in range(len(self.X_train)):
                dist = self.distance(self.X_train[j], X_test[i])
                d.append([dist, j])
            # keep only the k closest training points
            d.sort()
            d = d[0:self.k]
            # majority vote over the labels of the k nearest neighbours
            for _, j in d:
                votes.append(self.y_train[j])
            ans = Counter(votes).most_common(1)[0][0]
            final_output.append(ans)

        return final_output

    def score(self, X_test, y_test):
        # fraction of test points whose predicted label matches the true label
        predictions = np.array(self.predict(X_test))
        return (predictions == y_test).sum() / len(y_test)

clf = KNN(3)
clf.fit(X_train, y_train)
prediction = clf.predict(X_test)
for i in prediction:
    print(i, end= ' ')

print(prediction == y_test)

print(clf.score(X_test, y_test))


# Result:
# 1.0

Well, look at that! We got 100%! Not bad, not bad at all!!
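One note on your original question: the Iris example above is a classification task, where the labels are class ids and a majority vote makes sense. Your labels are continuous values, so instead of voting you would typically average the labels of the k nearest neighbours, which is usually called KNN regression. A minimal sketch of that change (my own adaptation, not part of the referenced article):

import numpy as np
import scipy.spatial

class KNNRegressor:
    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        self.X_train = np.asarray(X)
        self.y_train = np.asarray(y)

    def predict(self, X_test):
        predictions = []
        for x in np.asarray(X_test):
            # distance from x to every training point
            dists = [scipy.spatial.distance.euclidean(x, x_tr) for x_tr in self.X_train]
            # indices of the k closest training points
            nearest = np.argsort(dists)[:self.k]
            # average the neighbours' continuous labels instead of counting votes
            predictions.append(self.y_train[nearest].mean())
        return predictions

With k = 1 this simply copies the closest training point's label; a larger k smooths the prediction over more neighbours.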

Reference:

https://medium.com/analytics-vidhya/implementing-k-nearest-neighbours-knn-without-using-scikit-learn-3905b4decc3c