Choosing variables for neural network used for image recognition
I have a training set of 89 images of 6 different dominoes plus a set of "control" baby pictures, all divided into 7 groups, so the output y has 7 classes. Each image is 100x100 and black and white, so X has 10,000 features.
I am using the one-hidden-layer neural network code from Andrew Ng's Coursera course (in Octave), slightly modified.
I first tried it on 3 different groups (two dominoes, one baby) and it reached close to 100% accuracy. I have now increased it to 7 different image groups. Accuracy dropped a lot, and almost nothing is classified correctly except the baby pictures (which look very different from the dominoes).
I have tried 10 different lambda values, 10 different numbers of hidden neurons between 5 and 20, and various iteration counts, plotting them against cost and accuracy to find the best fit.
I have also tried feature normalization (commented out in the code below; a sketch of the helper follows the listing), but it did not help.
Here is the code I am using:
% Initialization
clear ; close all; clc; more off;
pkg load image;
fprintf('Running Domino Identifier ... \n');
%iteration_vector = [100, 300, 1000, 3000, 10000, 30000];
%accuracies = [];
%costs = [];
%for iterations_i = 1:length(iteration_vector)
# INPUTS
input_layer_size = 10000; % 100x100 input images
hidden_layer_size = 50; % Hidden units
num_labels = 7; % Number of different outputs
iterations = 100000; % Number of iterations during training
lambda = 0.13;
%hidden_layer_size = hidden_layers(hidden_layers_i);
%lambda = lambdas(lambda_i)
%iterations = %iteration_vector(iterations_i)
[X,y] = loadTrainingData(num_labels);
%[X_norm, mu, sigma] = featureNormalize(X_unnormed);
%X = X_norm;
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
[J grad] = nnCostFunction(initial_nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, lambda);
fprintf('\nTraining Neural Network... \n')
% After you have completed the assignment, change the MaxIter to a larger
% value to see how more training helps.
options = optimset('MaxIter', iterations);
% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, num_labels, X, y, lambda);
% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));
displayData(Theta1(:, 2:end));
[predictionData, images] = loadTrainingData(num_labels);
[h2_training, pred_training] = predict(Theta1, Theta2, predictionData);
fprintf('\nTraining Accuracy: %f\n', mean(double(pred_training' == y)) * 100);
%if length(accuracies) > 0
% accuracies = [accuracies; mean(double(pred_training' == y))];
%else
% accuracies = [mean(double(pred_training' == y))];
%end
%last_cost = cost(length(cost));
%if length(costs) > 0
% costs = [costs; last_cost];
%else
% costs = [last_cost];
%end
%endfor % Testing samples
fprintf('Loading prediction images');
[predictionData, images] = loadPredictionData();
[h2, pred] = predict(Theta1, Theta2, predictionData)
for i = 1:length(pred)
  figure;
  displayData(predictionData(i, :));
  title(strcat(translateIndexToTile(pred(i)), " Certainty:", num2str(max(h2(i, :))*100)));
  pause;
endfor
%y = provideAnswers(im_vector);
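For reference, the featureNormalize helper that is commented out above follows the pattern from the course exercise, roughly like this (a sketch of the idea; the exact implementation in my project may differ slightly):
function [X_norm, mu, sigma] = featureNormalize(X)
  % Zero-center every pixel column and scale it to unit standard deviation.
  mu = mean(X);
  sigma = std(X);
  sigma(sigma == 0) = 1;          % guard against constant (e.g. all-white) pixel columns
  X_norm = (X - mu) ./ sigma;     % automatic broadcasting over rows (Octave >= 3.6)
end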
My questions now are:
Is my number of X features way off compared to what others use?
What should I do to improve this neural network?
If I do feature normalization, do I need to multiply the numbers back to the 0-255 range again somewhere?
What should I do to improve this Neural Network?
Use a convolutional neural network (CNN) with several layers (e.g. 5). For vision problems, CNNs perform far better than MLPs. Here you are using an MLP with a single hidden layer, and it may well perform poorly on a 7-class image problem. Another issue is the amount of training data you have: typically we want at least a few hundred samples per class.
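To make the difference concrete, a convolutional stage looks at small local patches instead of the full 10,000-pixel vector, and the same small set of weights is reused at every image position. A rough illustration of one convolution + ReLU + 2x2 max-pooling stage in plain Octave (the kernel here is a fixed edge filter chosen only for illustration; a real CNN learns many such kernels and stacks several stages before a small fully connected classifier):
img = rand(100, 100);                  % stand-in for one 100x100 grayscale image
K = [1 0 -1; 2 0 -2; 1 0 -1];          % example 3x3 kernel (edge filter)
A = conv2(img, K, 'valid');            % convolution -> 98x98 feature map
A = max(A, 0);                         % ReLU non-linearity
% 2x2 max pooling: keep the strongest activation in each 2x2 block.
[r, c] = size(A);
r = r - mod(r, 2);  c = c - mod(c, 2);
A = A(1:r, 1:c);
P = squeeze(max(max(reshape(A, 2, r/2, 2, c/2), [], 1), [], 3));   % (r/2) x (c/2) pooled map
The pooled map is far smaller than the raw image, and because the kernel weights are shared across positions, each such stage needs only a handful of parameters compared with a fully connected layer over 10,000 inputs.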
If I do feature normalization, do I need to multiply the numbers back to the 0-255 range again somewhere?
In general no, not for classification. Normalization can be seen as a preprocessing step. However, if you work on something like image reconstruction, then you will need to transform back to the original domain at the end.
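Concretely, if you keep the normalization for this classifier, you would simply apply the same training-set mu and sigma to any new images before calling predict; nothing needs to be mapped back to the 0-255 range. A sketch, reusing the variable names from the script above:
[X_norm, mu, sigma] = featureNormalize(X);               % normalize training data, keep the statistics
% ... train on X_norm exactly as before ...
predictionData_norm = (predictionData - mu) ./ sigma;    % reuse the training mu / sigma
[h2, pred] = predict(Theta1, Theta2, predictionData_norm);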