Loss function design to incorporate different weights for false positives and false negatives
I am trying to solve a semantic segmentation problem. Due to practical constraints, the costs of false positives and false negatives are different: for example, a pixel wrongly classified as foreground is less of a problem than a pixel wrongly classified as background. How can I account for this constraint when designing the loss function?
You can use the class_weight argument of model.fit to weight the loss according to the class:
class_weight: optional dictionary mapping class indices (integers) to a weight (float) to apply to the model's loss for the samples from this class during training. This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
For example:
out = Dense(2, activation='softmax')(...)  # apply the layer to the previous layer's output tensor
model = Model(inputs=..., outputs=out)     # Keras 2 keyword names; older versions used input/output
model.fit(X, Y, class_weight={0: 1, 1: 0.5})
This penalizes the second class less than the first.
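If class_weight does not give fine enough control (for example, when the target is a per-pixel mask rather than a single label), another option is a custom loss that weights the two error types directly. Below is a minimal sketch, assuming a single-channel sigmoid output where 1 means foreground; the function name and the fn_weight/fp_weight values are placeholders to be tuned, not anything prescribed by Keras.

import keras.backend as K

def weighted_binary_crossentropy(fn_weight=2.0, fp_weight=1.0):
    # Per-pixel binary cross-entropy with separate weights for the two error types.
    # fn_weight scales the penalty on foreground pixels predicted as background
    # (false negatives); fp_weight scales background pixels predicted as foreground
    # (false positives). The values here are placeholders.
    def loss(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        fn_term = -fn_weight * y_true * K.log(y_pred)
        fp_term = -fp_weight * (1.0 - y_true) * K.log(1.0 - y_pred)
        return K.mean(fn_term + fp_term)
    return loss

# model.compile(optimizer='adam', loss=weighted_binary_crossentropy(fn_weight=2.0, fp_weight=1.0))

Given the constraint in the question (a pixel wrongly marked as foreground matters less than one wrongly marked as background), fn_weight would be set higher than fp_weight.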
Take a look at the Jaccard distance (or IoU) loss function in keras-contrib:
This loss is useful when you have unbalanced numbers of pixels within an image, because it gives all classes equal weight. However, it is not the de facto standard for image segmentation. For example, assume you are trying to predict whether each pixel is cat, dog, or background. You have 80% background pixels, 10% dog, and 10% cat. If the model predicts 100% background, should it be 80% right (as with categorical cross-entropy) or 30% (with this loss)?
Source: https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py
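For reference, the Jaccard distance is 1 - |intersection| / |union|, with a smoothing term so that empty masks do not divide by zero. The sketch below is only an approximation of the loss described at that link, written against the Keras backend; check the linked file for the exact implementation and default smooth value.

import keras.backend as K

def jaccard_distance(y_true, y_pred, smooth=100):
    # 1 - IoU, computed per sample and smoothed to avoid division by zero.
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    union = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1) - intersection
    jac = (intersection + smooth) / (union + smooth)
    return (1 - jac) * smooth

# model.compile(optimizer='adam', loss=jaccard_distance)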