How to predict the result of the sigmoid function in machine learning
I'm taking the Machine Learning course on Coursera and I'm a little confused about the sigmoid function.
I implemented the sigmoid function like this:
g = 1 ./ (1 + exp(-z));   # element-wise logistic function; exp() avoids relying on the built-in constant e
and wrote a function to predict the result, which looks like:
p = sigmoid(X*theta) >= 0.5
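Sanity-checking the implementation with a few test values of my own, it seems to behave as expected:

z = [-5 0 5];
g = 1 ./ (1 + exp(-z))   # roughly: 0.0067  0.5000  0.9933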
The problem says:
"For a student with an Exam 1 score
of 45 and an Exam 2 score of 85, you should expect to see an admission
probability of 0.776"
But I'm not sure how to plug those two x values into the function I created.
If theta is 0.218, how do exam scores of 45 and 85 give a probability of 0.776? Can someone explain?
Thanks
The probability is given by the sigmoid function:
p = sigmoid(X*theta)
# Since there are two inputs, the model will have 2 weights and a bias.
p = sigmoid(45*w1 + 85*w2 + b)
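To address the specific confusion: in that exercise theta is a 3-element vector (a bias plus one weight per exam score), not the single number 0.218. Prepend a 1 to the feature row for the intercept; the parameters that fminunc finds in the exercise are approximately the values below (the exact digits depend on the optimizer run), and they reproduce the 0.776:

# Approximate fitted parameters from the exercise (your values may differ slightly)
theta = [-25.1613; 0.2062; 0.2015];   # [bias; Exam 1 weight; Exam 2 weight]
x = [1; 45; 85];                      # 1 for the intercept term, then the two scores
p = 1 / (1 + exp(-(x' * theta)))      # ≈ 0.776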
# The expected output for this student is
y = 0.776
# Squared-error loss
loss = (p - y)^2
# Find the weights by minimizing the loss function using gradient descent.
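If you want to carry out that minimization yourself instead of using fminunc (which is what the exercise itself does), a minimal gradient-descent sketch in Octave could look like the following. Note that it minimizes the cross-entropy cost the course uses rather than the squared loss above, and the function and variable names are my own:

# X: m x 3 design matrix (a column of ones plus the two exam-score columns)
# y: m x 1 vector of 0/1 admission labels
# alpha: learning rate; iters: number of descent steps
function theta = gradient_descent(X, y, alpha, iters)
  m = size(X, 1);
  theta = zeros(size(X, 2), 1);
  for i = 1:iters
    h = 1 ./ (1 + exp(-(X * theta)));  # predicted admission probabilities
    grad = (X' * (h - y)) / m;         # gradient of the cross-entropy cost
    theta = theta - alpha * grad;      # descent step
  endfor
endfunction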