Dealing with NaN (missing) values for Logistic Regression - Best practices?
I am working with a data set of patient information and trying to calculate propensity scores from the data using MATLAB. After removing features with many missing values, I am still left with several missing (NaN) values.
Because of these missing values, I get errors when I try to perform logistic regression with the following MATLAB code (from Andrew Ng's Coursera Machine Learning class):
[m, n] = size(X);
X = [ones(m, 1) X];
initial_theta = ones(n+1, 1);
[cost, grad] = costFunction(initial_theta, X, y);
options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, cost] = ...
    fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
Note: sigmoid and costFunction are working functions I created for overall ease of use.
The calculations run smoothly if I replace all NaN values with 1 or 0. However, I am not sure that is the best way to deal with this, and I was also wondering what replacement value I should pick (in general) to get the best results when performing logistic regression with missing data. Are there benefits/drawbacks to using a particular number (0, 1, or something else) to replace the missing values in my data?
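For reference, a constant fill of this kind can be written with MATLAB's fillmissing (R2016b or later); X here is just a placeholder for the feature matrix:
% Replace every NaN in X with a fixed constant (here 0); purely illustrative.
Xfilled = fillmissing(X, 'constant', 0);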
Note: I have also normalized all feature values to the range 0-1.
Any insight into this issue would be greatly appreciated. Thank you.
To handle the missing data, you can use one of the following three options:
If there are not many instances with missing values, you can delete those instances.
If you have many features and can afford to lose some information, delete the entire feature that contains missing values.
The best option is to replace the missing values with some value (mean or median). You can calculate the mean of the remaining training examples for that feature and fill all the missing values with this mean. This works quite well because the mean stays within the distribution of your data.
Note: when you replace the missing values with the mean, calculate the mean using only the training set. Also, store that value and use it to replace the missing values in the test set as well (see the sketch below).
If you use 0 or 1 to replace all the missing values, your data may get skewed, so it is better to replace the missing values with the mean of the other values.
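A minimal MATLAB sketch of the three options above, assuming Xtrain and Xtest are numeric matrices with missing entries coded as NaN (the variable names are illustrative; rmmissing and the 'omitnan' flag require R2016b and R2015a or later, respectively):
% Option 1: drop rows (instances) that contain any NaN.
XtrainRows = rmmissing(Xtrain);

% Option 2: drop columns (features) that contain any NaN.
XtrainCols = rmmissing(Xtrain, 2);

% Option 3: mean imputation. Compute per-feature means on the training set
% only, store them, and reuse them for the test set.
colMeans = mean(Xtrain, 1, 'omitnan');
for j = 1:size(Xtrain, 2)
    Xtrain(isnan(Xtrain(:, j)), j) = colMeans(j);
    Xtest(isnan(Xtest(:, j)), j) = colMeans(j);
end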
As mentioned before, this is a general problem people deal with regardless of the programming platform. It is called "missing data imputation".
Forcing all the missing values to one specific number certainly has drawbacks. Depending on the distribution of your data it can be drastic, for example, setting all missing values to 1 in binary sparse data that has more zeros than ones.
Fortunately, MATLAB has a function called knnimpute that estimates the missing data points based on the nearest neighbors.
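A minimal usage sketch, assuming data is a numeric matrix with missing entries coded as NaN (knnimpute ships with the Bioinformatics Toolbox):
imputed = knnimpute(data);       % impute from the single nearest neighbor
imputed3 = knnimpute(data, 3);   % or use the 3 nearest neighbors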
In my experience, I have often found knnimpute useful. However, it may fall short when too many entries in your data are missing; the neighbors of a missing entry may themselves be incomplete, leading to inaccurate estimates. Below, I came up with a solution; it starts imputing from the least incomplete columns and (optionally) imposes a safe predefined distance for the neighbors. I hope it helps.
function data = dnnimpute(data,distCutoff,option,distMetric)
% data = dnnimpute(data,distCutoff,option,distMetric)
%
% Distance-based nearest neighbor imputation that imposes a distance
% cutoff to determine nearest neighbors, i.e., avoids those samples
% that are more distant than the distCutoff argument.
%
% Imputes missing data coded by "NaN" starting from the covariates
% (columns) with the least number of missing data. Then it continues by
% including more (complete) covariates in the calculation of pair-wise
% distances.
%
% option,
% 'median' - Median of the nearest neighboring values
% 'weighted' - Weighted average of the nearest neighboring values
% 'mean' - Unweighted average of the nearest neighboring values (default)
%
% distMetric,
% 'euclidean' - Euclidean distance (default)
% 'seuclidean' - Standardized Euclidean distance. Each coordinate
% difference between rows in X is scaled by dividing
% by the corresponding element of the standard
% deviation S=NANSTD(X). To specify another value for
% S, use D=pdist(X,'seuclidean',S).
% 'cityblock' - City Block distance
% 'minkowski' - Minkowski distance. The default exponent is 2. To
% specify a different exponent, use
% D = pdist(X,'minkowski',P), where the exponent P is
% a scalar positive value.
% 'chebychev' - Chebychev distance (maximum coordinate difference)
% 'mahalanobis' - Mahalanobis distance, using the sample covariance
% of X as computed by NANCOV. To compute the distance
% with a different covariance, use
% D = pdist(X,'mahalanobis',C), where the matrix C
% is symmetric and positive definite.
% 'cosine' - One minus the cosine of the included angle
% between observations (treated as vectors)
% 'correlation' - One minus the sample linear correlation between
% observations (treated as sequences of values).
% 'spearman' - One minus the sample Spearman's rank correlation
% between observations (treated as sequences of values).
% 'hamming' - Hamming distance, percentage of coordinates
% that differ
% 'jaccard' - One minus the Jaccard coefficient, the
% percentage of nonzero coordinates that differ
% function - A distance function specified using @, for
% example @DISTFUN.
%
if nargin < 3
    option = 'mean';
end
if nargin < 4
    distMetric = 'euclidean';
end

nanVals = isnan(data);
nanValsPerCov = sum(nanVals,1);
noNansCov = nanValsPerCov == 0;
if isempty(find(noNansCov, 1))
    [~,leastNans] = min(nanValsPerCov);
    noNansCov(leastNans) = true;
    first = data(nanVals(:,noNansCov),:);
    nanRows = find(nanVals(:,noNansCov)==true); i = 1;
    for row = first'
        data(nanRows(i),noNansCov) = mean(row(~isnan(row)));
        i = i+1;
    end
end

nSamples = size(data,1);
if nargin < 2
    dataNoNans = data(:,noNansCov);
    distances = pdist(dataNoNans);
    distCutoff = min(distances);
end

[stdCovMissDat,idxCovMissDat] = sort(nanValsPerCov,'ascend');
imputeCols = idxCovMissDat(stdCovMissDat>0);
% Impute starting from the cols (covariates) with the least number of
% missing data.
for c = reshape(imputeCols,1,length(imputeCols))
    imputeRows = 1:nSamples;
    imputeRows = imputeRows(nanVals(:,c));
    for r = reshape(imputeRows,1,length(imputeRows))
        % Calculate distances
        distR = inf(nSamples,1);
        %
        noNansCov_r = find(isnan(data(r,:))==0);
        noNansCov_r = noNansCov_r(sum(isnan(data(nanVals(:,c)'==false,~isnan(data(r,:)))),1)==0);
        %
        for i = find(nanVals(:,c)'==false)
            distR(i) = pdist([data(r,noNansCov_r); data(i,noNansCov_r)],distMetric);
        end
        tmp = min(distR(distR>0));
        % Impute the missing data at sample r of covariate c
        switch option
            case 'weighted'
                data(r,c) = (1./distR(distR<=max(distCutoff,tmp)))' * data(distR<=max(distCutoff,tmp),c) / sum(1./distR(distR<=max(distCutoff,tmp)));
            case 'median'
                data(r,c) = median(data(distR<=max(distCutoff,tmp),c),1);
            case 'mean'
                data(r,c) = mean(data(distR<=max(distCutoff,tmp),c),1);
        end
        % The missing data in sample r is imputed. Update the sample
        % indices of c which are imputed.
        nanVals(r,c) = false;
    end
    fprintf('%u/%u of the covariates are imputed.\n',find(c==imputeCols),length(imputeCols));
end
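For reference, a minimal usage sketch of the function above; X is assumed to be a numeric matrix with NaNs, and the cutoff value 0.5 is only a placeholder:
% Default call: the cutoff falls back to the smallest pairwise sample distance
% over the complete covariates, option = 'mean', Euclidean metric.
Ximputed = dnnimpute(X);

% Explicit cutoff, median of the neighboring values, city block distance.
Ximputed = dnnimpute(X, 0.5, 'median', 'cityblock');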