Neural-network nonlinear time series NARX model in Python

I am trying to create a neural-network nonlinear time series NARX model. My inputs to the algorithm are:

1. a 2D matrix (x, y)

2. another 2D matrix (x, y)

The target is the actual, exact values in a third 2D matrix (x, y).

First, I researched this network and modeled it in MATLAB, and I got a good result, which I show below. Now I want to implement this NARX model in Python.

I searched for an algorithm for the NARX model but did not find what I wanted, so could anyone:

1. point me to any reference (website, book, or video series),

2. or show me how to search effectively for this specific task,

3. or give me the steps to reproduce the **MATLAB NARX source code and functions** in Python?

Here is the MATLAB code:

    % Solve an Autoregression Problem with External Input with a NARX Neural Network
    % Script generated by NTSTOOL
    % Created Wed Nov 09 20:28:50 EET 2016
    %
    % This script assumes these variables are defined:
    %
    %   input  - input time series.
    %   output - feedback time series.

    inputSeries = tonndata(input,true,false);
    targetSeries = tonndata(output,true,false);

    % Create a Nonlinear Autoregressive Network with External Input
    inputDelays = 1:2;
    feedbackDelays = 1:2;
    hiddenLayerSize = 10;
    net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);

    % Choose Input and Feedback Pre/Post-Processing Functions
    % Settings for feedback input are automatically applied to feedback output
    % For a list of all processing functions type: help nnprocess
    % Customize input parameters at: net.inputs{i}.processParam
    % Customize output parameters at: net.outputs{i}.processParam
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
    net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};

    % Prepare the Data for Training and Simulation
    % The function PREPARETS prepares time series data for a particular network,
    % shifting time by the minimum amount to fill input states and layer states.
    % Using PREPARETS allows you to keep your original time series data unchanged,
    % while easily customizing it for networks with differing numbers of delays,
    % with open-loop or closed-loop feedback modes.
    [inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);

    % Setup Division of Data for Training, Validation, Testing
    % The function DIVIDERAND randomly assigns target values to training,
    % validation and test sets during training.
    % For a list of all data division functions type: help nndivide
    net.divideFcn = 'dividerand';  % Divide data randomly
    % The property DIVIDEMODE set to 'value' means that every target value is
    % assigned individually to the training, validation or test set.
    % For a list of data division modes type: help nntype_data_division_mode
    net.divideMode = 'value';  % Divide up every value
    net.divideParam.trainRatio = 70/100;
    net.divideParam.valRatio = 15/100;
    net.divideParam.testRatio = 15/100;

    % Choose a Training Function
    % For a list of all training functions type: help nntrain
    % Customize training parameters at: net.trainParam
    net.trainFcn = 'trainlm';  % Levenberg-Marquardt

    % Choose a Performance Function
    % For a list of all performance functions type: help nnperformance
    % Customize performance parameters at: net.performParam
    net.performFcn = 'mse';  % Mean squared error

    % Choose Plot Functions
    % For a list of all plot functions type: help nnplot
    % Customize plot parameters at: net.plotParam
    net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
        'ploterrcorr','plotinerrcorr'};

    % Train the Network
    [net,tr] = train(net,inputs,targets,inputStates,layerStates);

    % Test the Network
    outputs = net(inputs,inputStates,layerStates);
    errors = gsubtract(targets,outputs);
    performance = perform(net,targets,outputs)

    % Recalculate Training, Validation and Test Performance
    trainTargets = gmultiply(targets,tr.trainMask);
    valTargets = gmultiply(targets,tr.valMask);
    testTargets = gmultiply(targets,tr.testMask);
    trainPerformance = perform(net,trainTargets,outputs)
    valPerformance = perform(net,valTargets,outputs)
    testPerformance = perform(net,testTargets,outputs)

    % View the Network
    view(net)

    % Plots
    % Uncomment these lines to enable various plots.
    %figure, plotperform(tr)
    %figure, plottrainstate(tr)
    %figure, plotregression(targets,outputs)
    %figure, plotresponse(targets,outputs)
    %figure, ploterrcorr(errors)
    %figure, plotinerrcorr(inputs,errors)

    % Closed Loop Network
    % Use this network to do multi-step prediction.
    % The function CLOSELOOP replaces the feedback input with a direct
    % connection from the output layer.
    netc = closeloop(net);
    netc.name = [net.name ' - Closed Loop'];
    view(netc)
    [xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
    yc = netc(xc,xic,aic);
    closedLoopPerformance = perform(netc,tc,yc)

    % Early Prediction Network
    % For some applications it helps to get the prediction a timestep early.
    % The original network returns predicted y(t+1) at the same time it is given y(t+1).
    % For some applications such as decision making, it would help to have predicted
    % y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
    % The network can be made to return its output a timestep early by removing one
    % delay so that its minimal tap delay is now 0 instead of 1. The new network
    % returns the same outputs as the original network, but outputs are shifted
    % left one timestep.
    nets = removedelay(net);
    nets.name = [net.name ' - Predict One Step Ahead'];
    view(nets)
    [xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
    ys = nets(xs,xis,ais);
    earlyPredictPerformance = perform(nets,ts,ys)
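In Python, the role that `preparets` plays in the script above is essentially building tapped-delay (lagged) regressors from the two series. A minimal NumPy sketch of that step, assuming the same `inputDelays = feedbackDelays = 1:2` as in the MATLAB code (the function name `make_narx_matrix` is illustrative, not from any library):

```python
import numpy as np

def make_narx_matrix(x, y, input_delays=(1, 2), feedback_delays=(1, 2)):
    """Build an open-loop NARX design matrix, analogous to preparets:
    row t holds [x(t-1), x(t-2), y(t-1), y(t-2)]; the target is y(t)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    start = max(max(input_delays), max(feedback_delays))
    rows, targets = [], []
    for t in range(start, len(y)):
        feats = [x[t - d] for d in input_delays] + [y[t - d] for d in feedback_delays]
        rows.append(feats)
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Example with the first (x, y) series from the question
x = [1, 4, 7, 9, 11, 17, 14, 16, 18, 19]
y = [1, 2, 4, 6, 7, 8, 10, 10, 13, 14]
X, T = make_narx_matrix(x, y)
print(X.shape, T.shape)  # (8, 4) (8,)
```

Each row of `X` paired with the matching entry of `T` is exactly the open-loop training data; any regressor can then be fitted to `(X, T)`.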

Here are the inputs:

this is the x coordinates

1 4 7 9 11 17 14 16 18 19 

this is the y coordinates

1 2 4 6 7  8  10 10 13 14

this is another x coordinates

1 7 10 13 16 18 19 23 24 25

this is another y coordinates

1 5 7 9 12 14 16 17 19 20

Here is the target:

this is the actual x coordinates

1 4 5 8 9 15 17 18 20 22

this is the actual y coordinates

1 1 4 7 8 10 13 14 18 20

Considering how large the discrepancy between the two inputs and the output is, the result is already good enough, and by changing the number of neurons we can improve it further:

[5.00163468043085;3.99820942369434]

[8.00059395052246;6.99872447652641] 

[11.5625431537178;8.00040094120297] 

[14.9982223917152;9.24359668634943] 

[19.3511330333522;13.0001065644369] 

[18.4627579643821;13.9999624796494] 

[20.0004073095041;17.9997197490528] 

[22.0004822590849;19.9997852867243]

I hope that is clear enough.

Thanks in advance.

PyNeurGen is a possible solution to your problem. It is a Python library that supports multiple network architectures.

The library also includes demos of feedforward networks.

To use a NARX net, you can use this definition: NARX Net PyNeurGen
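If PyNeurGen does not fit your setup, you can also approximate what `narxnet` does directly: build the lagged features yourself and fit a small nonlinear regressor to them. Below is a sketch of a one-hidden-layer tanh network trained open-loop with plain batch gradient descent; it is a simplified stand-in for `narxnet`/`trainlm`, not an equivalent, and all function names and hyperparameters here are illustrative:

```python
import numpy as np

def lagged(x, y, delays=(1, 2)):
    """Tapped-delay features: row t = [x(t-1), x(t-2), y(t-1), y(t-2)]."""
    start = max(delays)
    X = np.array([[x[t - d] for d in delays] + [y[t - d] for d in delays]
                  for t in range(start, len(y))], dtype=float)
    T = np.array(y[start:], dtype=float)
    return X, T

def fit_narx_mlp(X, T, hidden=10, lr=0.05, epochs=5000, seed=0):
    """Train a one-hidden-layer tanh network on lagged NARX features X
    against targets T, using batch gradient descent on MSE.
    Returns the final training MSE on the normalised targets."""
    rng = np.random.default_rng(seed)
    n, f = X.shape
    # mapminmax-style scaling of features and targets to [-1, 1]
    Xn = 2 * (X - X.min(0)) / (X.max(0) - X.min(0)) - 1
    Tn = 2 * (T - T.min()) / (T.max() - T.min()) - 1
    W1 = rng.normal(0.0, 0.5, (f, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(Xn @ W1 + b1)          # hidden layer
        P = H @ W2 + b2                    # linear output layer
        err = P - Tn
        # Backpropagate the mean-squared-error gradient
        gW2 = H.T @ err / n; gb2 = err.mean()
        gH = np.outer(err, W2) * (1.0 - H ** 2)
        gW1 = Xn.T @ gH / n; gb1 = gH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return float(np.mean((np.tanh(Xn @ W1 + b1) @ W2 + b2 - Tn) ** 2))

# The first (x, y) series from the question
x = [1, 4, 7, 9, 11, 17, 14, 16, 18, 19]
y = [1, 2, 4, 6, 7, 8, 10, 10, 13, 14]
X, T = lagged(x, y)
mse = fit_narx_mlp(X, T)
print("training MSE on normalised targets:", round(mse, 4))
```

For multi-step (closed-loop) prediction, you would feed the network's own past outputs back in place of the `y(t-1)`, `y(t-2)` columns, mirroring what `closeloop` does in the MATLAB script.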