Why are my weights after training in this strange format?

I am currently trying to learn logistic regression, and I'm stuck on drawing a line from the weights after training. I was expecting an array of 3 values, but when I print the weights to check them, I get this (with different values every time, but in the same format):

[array([[ 0.42433906], [-0.67847246]], dtype=float32) array([-0.06681705], dtype=float32)]

My question is: why are the weights in this format of 2 arrays, rather than a single array of length 3? And how do I interpret these weights so that I can plot the separating line?

Here is my code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import L1L2
import random
import numpy as np

# return the array data of shape (m, 2) and the array labels of shape (m, 1)
def get_random_data(w, b, mu, sigma, m): # slope, y-intercept, mean of the data, standard deviation, size of arrays
  data = np.empty((m, 2))
  labels = np.empty((m, 1))

  # fill the arrays with random data
  for i in range(m):
    c = (random.random() > 0.5) # 0 with probability 1/2 and 1 with probability 1/2
    n = random.normalvariate(mu, sigma) # noise using normal distribution
    x_1 = random.random() # uniform distribution on [0, 1)
    x_2 = w * x_1 + b + (-1)**c * n

    labels[i] = c
    data[i][0] = x_1
    data[i][1] = x_2


  # the train set is the first 80% of our data, and the test set is the following 20%
  train_length = int(round(m * 0.8))

  train_data = np.empty((train_length, 2))
  train_labels = np.empty((train_length, 1))
  test_data = np.empty((m - train_length, 2))
  test_labels = np.empty((m - train_length, 1))

  for i in range(train_length):
    train_data[i] = data[i]
    train_labels[i] = labels[i]

  for i in range(train_length, m):
    test_data[i - train_length] = data[i]
    test_labels[i - train_length] = labels[i]

  return (train_data, train_labels), (test_data, test_labels)

(train_data, train_labels), (test_data, test_labels) = get_random_data(2,3,100,100,200)

model = Sequential()
model.add(Dense(train_labels.shape[1],
                activation='sigmoid',
                kernel_regularizer=L1L2(l1=0.0, l2=0.1),
                input_dim=(train_data.shape[1]))) 
model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(train_data, train_labels, epochs=100, validation_data=(test_data,test_labels))

weights = np.asarray(model.get_weights())
print("the weights are " , weights)

The first array holds the weights of the coefficients (the kernel), and the second array holds the bias.
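
You can unpack the two arrays yourself. Here is a minimal sketch, assuming model is the trained model from your code, so the kernel has shape (2, 1) and the bias has shape (1,):

kernel, bias = model.get_weights()   # kernel: shape (2, 1), bias: shape (1,)

# flatten into the 3 numbers you were expecting: [w1, w2, b]
w1, w2 = kernel.flatten()
b = bias[0]
print(w1, w2, b)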

So you have the following equation:

h(x) = 0.42433906*x1 - 0.67847246*x2 - 0.06681705

Logistic regression takes this equation and applies the sigmoid function to squash the result into the range 0-1.
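
To see this concretely, you can reproduce a prediction by hand. A small sketch, assuming kernel and bias are the arrays unpacked above and np is NumPy as imported in your script:

def sigmoid(z):
  return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.3, 0.7]])      # one sample with features x1, x2
h = x @ kernel + bias           # the linear part h(x), shape (1, 1)
prob = sigmoid(h)               # squashed into (0, 1); should match model.predict(x) up to float precision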

So if you want to draw the separating line, you can do it from the returned weights as explained above.
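
The decision boundary is where h(x) = 0 (since sigmoid(0) = 0.5), so you can solve w1*x1 + w2*x2 + b = 0 for x2 and plot that line. A minimal sketch, assuming matplotlib is available and w1, w2, b are the values unpacked above:

import matplotlib.pyplot as plt

x1 = np.linspace(0, 1, 100)            # x_1 was drawn from [0, 1) in get_random_data
x2 = -(w1 * x1 + b) / w2               # solve w1*x1 + w2*x2 + b = 0 for x2

plt.scatter(train_data[:, 0], train_data[:, 1], c=train_labels.ravel())
plt.plot(x1, x2)
plt.show()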