Dlib training for kids database
I am trying to train a face detector using Dlib. I selected nearly 1000 images for training and, following the documentation, used them to create training_with_face_landmarks.xml. However, I don't understand which images the testing_with_face_landmarks.xml file should use. Should training_with_face_landmarks.xml and testing_with_face_landmarks.xml both use the same images?
Thanks in advance.
According to the dlib reference:
- Using training images that don't look like the testing images
This should be obvious, but needs to be pointed out. If there is some
clear difference between your training and testing images then you
have messed up. You need to show the training algorithm real images so
it can learn what to do. If instead you only show it images that look
obviously different from your testing images don't be surprised if,
when you run the detector on the testing images, it doesn't work. As a
rule of thumb, a human should not be able to tell if an image came
from the training dataset or testing dataset.
Here are some examples of bad datasets:
A training dataset where objects always appear with some specific orientation but the testing images have a diverse set of orientations.
A training dataset where objects are tightly cropped, but testing images that are uncropped.
A training dataset where objects appear only on a perfectly white background with nothing else present, but testing images where objects
appear in a normal environment like living rooms or in natural scenes.
So do not reuse the images you trained on. Use a different set of images for testing.
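A common way to get such a split is to shuffle your labeled images once and hold out a fraction (say 20%) for testing before you build the two XML files (e.g. with dlib's imglab tool). Here is a minimal sketch, assuming hypothetical image paths like `faces/img_0000.jpg`:

```python
import random

def split_dataset(image_paths, test_fraction=0.2, seed=42):
    """Randomly split image paths into disjoint training and testing lists.

    A fixed seed makes the split reproducible, so the same images always
    end up in the same set when you regenerate the XML files.
    """
    rng = random.Random(seed)
    shuffled = list(image_paths)
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    # First n_test shuffled images become the test set, the rest train.
    return shuffled[n_test:], shuffled[:n_test]

# Hypothetical file names standing in for your ~1000 labeled images.
images = [f"faces/img_{i:04d}.jpg" for i in range(1000)]
train_images, test_images = split_dataset(images)
print(len(train_images), len(test_images))  # 800 200
```

You would then point imglab (or your own XML writer) at `train_images` to produce training_with_face_landmarks.xml and at `test_images` for testing_with_face_landmarks.xml, so no image appears in both files.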