Perceptron in Erlang not learning after training

I don't believe my question is a duplicate of this one, since my implementation already includes a bias.

I have tried to implement a perceptron in Erlang and to train it to recognize points relative to a linear slope. The problem is that it never trains properly: after 50 epochs its guesses are still only about 50% correct.

The starting weights are supplied in a list [X_weight, Y_weight, Bias_weight], and the training set in another list of tuples {X, Y, Desired_guess}, where X and Y are integers and Desired_guess is -1 if the coordinate lies below the line, or 1 if it lies above.
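The feedforward/2 function is not shown in the question; a minimal sketch of what it presumably does (dot product of inputs and weights followed by a sign activation) might look like this:

```erlang
% Hypothetical sketch of feedforward/2, which the question assumes but does
% not show: dot product of inputs and weights, then a sign activation that
% matches the -1/1 labels used in the training set.
feedforward(Inputs, Weights) ->
    Sum = lists:sum(lists:zipwith(fun(I, W) -> I * W end, Inputs, Weights)),
    if
        Sum >= 0 -> 1;
        true     -> -1
    end.
```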

First, the calculation of the new weights:

% Exported starting clause.
% Inputs: the list of input values for one perceptron ([X,Y,Bias]), a list of
% weights corresponding to the inputs ([X_weight, Y_weight, Bias_weight]),
% the learning constant, and the error (Desired - Guess).

train_perceptron([InputsH|InputsT], [WeightsH|WeightsT], Learning_constant, Error) ->
    train_perceptron(InputsT, WeightsT, Learning_constant, Error, 
        [WeightsH + (Learning_constant * Error) * InputsH]).

% Non-exported clause, called from train_perceptron/4; it accumulates the
% list of newly adjusted weights.
% When the tails of the input lists are empty, the head is the last value
% and therefore the bias (its input is always 1).
train_perceptron([_Bias], [WeightsH], Learning_constant, Error, Adjusted_weights) ->
    train_perceptron([], [], Learning_constant, Error,
        Adjusted_weights ++ [WeightsH + Learning_constant * Error]);

%Normal case: calculate the new weight and add it to Adjusted_weights
train_perceptron([InputsH|InputsT], [WeightsH|WeightsT], Learning_constant, Error, Adjusted_weights) ->
    train_perceptron(InputsT, WeightsT, Learning_constant, Error,
        Adjusted_weights ++ [WeightsH + (Learning_constant * Error) * InputsH]);

%Base case: the lists are empty, nothing more to do. Return the Adjusted_weights
train_perceptron([], [],_, _, Adjusted_weights) ->
    Adjusted_weights.
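As a worked example (all values chosen arbitrarily for illustration, assuming the module is named `perceptron`), a single update with learning constant 0.01 and error 2 would be:

```erlang
% Illustrative call: inputs [X, Y, Bias] = [3, 4, 1], starting weights
% [0.5, -0.2, 0.1], learning constant 0.01, error 2 (Desired = 1, Guess = -1).
Adjusted = perceptron:train_perceptron([3, 4, 1], [0.5, -0.2, 0.1], 0.01, 2),
% Each weight moves by Learning_constant * Error * Input, so the result is
% approximately [0.56, -0.12, 0.12].
```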

This is the function that calls train_perceptron:

line_trainer(Weights, [], _) ->
    Weights;
line_trainer(Weights, [{X,Y,Desired}|TST], Learning_constant) ->
    Bias = 1,
    Error = Desired - feedforward([X,Y,Bias], Weights),
    Adjusted_weights = train_perceptron([X,Y,Bias], Weights, Learning_constant, Error),
    line_trainer(Adjusted_weights, TST, Learning_constant).
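line_trainer/3 runs a single pass over the training set. A hypothetical wrapper (not part of the original code) to run several epochs over the same set might look like:

```erlang
% Hypothetical helper: repeat line_trainer/3 for a given number of epochs,
% threading the adjusted weights through each pass over the training set.
train_epochs(Weights, _Training_set, _Learning_constant, 0) ->
    Weights;
train_epochs(Weights, Training_set, Learning_constant, Epochs) ->
    New_weights = line_trainer(Weights, Training_set, Learning_constant),
    train_epochs(New_weights, Training_set, Learning_constant, Epochs - 1).
```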

One possible answer would be for someone to supply a training set for this kind of function, three starting weights, and the output for each epoch. That would help me debug this myself.

Edit: This did work after all. The training set I supplied was simply too small. With a larger training set and around 20 epochs, the global error converges to 0.
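A larger training set can be generated programmatically; a sketch (assuming an arbitrary line Y = 2X + 3 and the -1/1 labelling described above) might be:

```erlang
% Hypothetical generator for a larger training set: N random integer points
% in [-99, 100], labelled -1 if below the line Y = 2*X + 3 and 1 otherwise.
generate_training_set(N) ->
    [begin
         X = rand:uniform(200) - 100,
         Y = rand:uniform(200) - 100,
         Desired = case Y < 2 * X + 3 of
                       true  -> -1;
                       false -> 1
                   end,
         {X, Y, Desired}
     end || _ <- lists:seq(1, N)].
```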