How can I compare two AWS Rekognition collections?
I have two images, each containing 40+ faces. I want to use the AWS Rekognition service to detect which faces appear in both images.
My initial approach was to use Rekognition's IndexFaces operation to store all the faces of one image in one collection and the faces of the other image in a second collection, and then compare them by FaceId. I assumed IndexFaces would give me a fingerprint for each face, but it turns out the FaceId is just a random identifier, not a facial fingerprint.
I found this answer: How to compare faces in a Collection to faces in a Stored Video using AWS Rekognition? However, it compares all the faces in a collection against the faces appearing in a video, so I would be forced to convert one of my images into a one-second video containing just that single frame... which I think defeats the purpose of keeping things simple.
There must be a way to compare two Rekognition collections to check for repeated faces, but I cannot find it.
You could approach this in two ways:
Option 1: Use an ExternalImageID
This is similar to your approach.
The important part is that, when a face is added to a collection, you can provide an ExternalImageID. Later, when that face is matched against an image, Amazon Rekognition will return the ExternalImageID for that face.
For example, you could store a person's name or a unique identifier in the ExternalImageID.
So, your process could look like this (a minimal sketch follows the list):
- Call DetectFaces() on Image 1
- It will return a list of FaceDetails, each with a bounding box
- Loop through each returned face and call IndexFaces() for each individual face using the bounding box provided, supplying an ExternalImageID each time (it could simply be an incrementing number)
- Then, call IndexFaces() on Image 2
- If it finds any faces in the collection generated from Image 1, it will provide the ExternalImageID of the matching face
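Below is a minimal boto3 sketch of this flow, written under a few assumptions that are not part of the answer above: local copies of the two images (placeholder names image1.jpg and image2.jpg), a pre-created collection named MainCollection, Pillow for cropping, and SearchFacesByImage (one cropped face per call) as the matching step for Image 2 rather than a plain IndexFaces call. Treat it as an illustration, not a definitive implementation.

# Sketch of Option 1: index every face of image 1 with its own ExternalImageId,
# then search the collection with each face found in image 2.
# Assumptions: boto3 + Pillow installed, AWS credentials configured, local
# copies of both images, and a collection "MainCollection" already created.
import io
import boto3
from PIL import Image

rekognition = boto3.client("rekognition")
COLLECTION = "MainCollection"

def crop_face(path, box):
    # Crop one face using Rekognition's relative BoundingBox (Left/Top/Width/Height).
    img = Image.open(path)
    w, h = img.size
    left, top = int(box["Left"] * w), int(box["Top"] * h)
    right, bottom = int(left + box["Width"] * w), int(top + box["Height"] * h)
    buf = io.BytesIO()
    img.crop((left, top, right, bottom)).save(buf, format="JPEG")
    return buf.getvalue()

# 1) Detect faces in image 1 and index each crop with an incrementing ExternalImageId.
with open("image1.jpg", "rb") as f:
    faces1 = rekognition.detect_faces(Image={"Bytes": f.read()})["FaceDetails"]

for i, face in enumerate(faces1):
    rekognition.index_faces(
        CollectionId=COLLECTION,
        Image={"Bytes": crop_face("image1.jpg", face["BoundingBox"])},
        ExternalImageId=f"image1-face-{i}",
        MaxFaces=1,
        QualityFilter="AUTO",
    )

# 2) Detect faces in image 2 and search the collection with each crop;
#    any match returns the ExternalImageId assigned in step 1.
with open("image2.jpg", "rb") as f:
    faces2 = rekognition.detect_faces(Image={"Bytes": f.read()})["FaceDetails"]

for face in faces2:
    resp = rekognition.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": crop_face("image2.jpg", face["BoundingBox"])},
        FaceMatchThreshold=90,
    )
    for match in resp["FaceMatches"]:
        print(match["Face"]["ExternalImageId"], match["Similarity"])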
Option 2: Use CompareFaces()
"Compares a face in the source input image with each of the 100 largest faces detected in the target input image."
This takes one input face (the largest face in the source image) and compares it against all the faces in the target image. So you would follow a similar process to the one above (a sketch follows the list):
- Call DetectFaces() on Image 1
- It will return a list of FaceDetails, each with a bounding box
- Loop through each returned face and call CompareFaces() for each individual face using the bounding box provided, comparing it against Image 2
- You will be given a confidence level for each face that potentially matches
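A similar hedged sketch for Option 2, again assuming local placeholder file names and an arbitrary 90% similarity threshold; crop_face is a hypothetical helper, not an AWS API:

# Sketch of Option 2: compare every individual face of image 1 against image 2.
# "image1.jpg" / "image2.jpg" and the 90% threshold are placeholder assumptions.
import io
import boto3
from PIL import Image

rekognition = boto3.client("rekognition")

def crop_face(path, box):
    # Crop one face using Rekognition's relative BoundingBox (Left/Top/Width/Height).
    img = Image.open(path)
    w, h = img.size
    left, top = int(box["Left"] * w), int(box["Top"] * h)
    right, bottom = int(left + box["Width"] * w), int(top + box["Height"] * h)
    buf = io.BytesIO()
    img.crop((left, top, right, bottom)).save(buf, format="JPEG")
    return buf.getvalue()

with open("image1.jpg", "rb") as f:
    faces1 = rekognition.detect_faces(Image={"Bytes": f.read()})["FaceDetails"]
with open("image2.jpg", "rb") as f:
    image2_bytes = f.read()

for i, face in enumerate(faces1):
    resp = rekognition.compare_faces(
        SourceImage={"Bytes": crop_face("image1.jpg", face["BoundingBox"])},
        TargetImage={"Bytes": image2_bytes},
        SimilarityThreshold=90,
    )
    for match in resp["FaceMatches"]:
        print(f"face {i} of image 1 matches a face in image 2 "
              f"(similarity {match['Similarity']:.1f}%)")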
See: Comparing Faces in Images - Amazon Rekognition
So, the second method is easier if you are simply comparing two images. The first method is better if you have individual faces stored that you wish to use again in future calls.
Thanks to @John Rotenstein I was able to quickly prototype a solution using the AWS CLI:
Assuming the required permissions and the AWS CLI are set up on the system, and that all the images are stored in an S3 bucket named 'TestBucket', I did the following:
1.- Created a "Main Collection":
> aws rekognition create-collection --collection-id "MainCollection"
2.- Added the person I want to detect, extracting the face by running IndexFaces on an image containing only that face:
> aws rekognition index-faces --image '{"S3Object":{"Bucket":"TestBucket","Name":"cristian.jpg"}}' --collection-id "MainCollection" --max-faces 100 --quality-filter "AUTO" --detection-attributes "ALL" --external-image-id "cristian.jpg"
The resulting FaceId was 'a54ef57e-7003-4721-b7e1-703d9f039da9'.
3.- Added the second image to the collection:
> aws rekognition index-faces --image '{"S3Object":{"Bucket":"TestBucket","Name":"ImageContaining40plusfaces.jpg"}}' --collection-id "MainCollection" --max-faces 100 --quality-filter "AUTO" --detection-attributes "ALL" --external-image-id "ImageContaining40plusfaces.jpg"
This produced 40+ entries like the following one (only one shown for brevity):
{
"FaceRecords": [
{
"FaceDetail": {
"Confidence": 99.99859619140625,
"Eyeglasses": {
"Confidence": 54.99907684326172,
"Value": false
},
"Sunglasses": {
"Confidence": 54.99971389770508,
"Value": false
},
"Gender": {
"Confidence": 54.747318267822266,
"Value": "Male"
},
"Landmarks": [
{
"Y": 0.311367392539978,
"X": 0.1916557103395462,
"Type": "eyeLeft"
},
{
"Y": 0.3120582699775696,
"X": 0.20143891870975494,
"Type": "eyeRight"
},
{
"Y": 0.3355730175971985,
"X": 0.19253292679786682,
"Type": "mouthLeft"
},
{
"Y": 0.3361922800540924,
"X": 0.2005564421415329,
"Type": "mouthRight"
},
{
"Y": 0.32276451587677,
"X": 0.19691102206707,
"Type": "nose"
},
{
"Y": 0.30642834305763245,
"X": 0.1876278519630432,
"Type": "leftEyeBrowLeft"
},
{
"Y": 0.3037400245666504,
"X": 0.19379760324954987,
"Type": "leftEyeBrowRight"
},
{
"Y": 0.3029193580150604,
"X": 0.19078010320663452,
"Type": "leftEyeBrowUp"
},
{
"Y": 0.3041592836380005,
"X": 0.1995924860239029,
"Type": "rightEyeBrowLeft"
},
{
"Y": 0.3074571192264557,
"X": 0.20519918203353882,
"Type": "rightEyeBrowRight"
},
{
"Y": 0.30346789956092834,
"X": 0.2024637758731842,
"Type": "rightEyeBrowUp"
},
{
"Y": 0.3115418553352356,
"X": 0.1898096352815628,
"Type": "leftEyeLeft"
},
{
"Y": 0.3118479251861572,
"X": 0.1935078650712967,
"Type": "leftEyeRight"
},
{
"Y": 0.31028062105178833,
"X": 0.19159308075904846,
"Type": "leftEyeUp"
},
{
"Y": 0.31250447034835815,
"X": 0.19164365530014038,
"Type": "leftEyeDown"
},
{
"Y": 0.31221893429756165,
"X": 0.19937492907047272,
"Type": "rightEyeLeft"
},
{
"Y": 0.3123391270637512,
"X": 0.20295380055904388,
"Type": "rightEyeRight"
},
{
"Y": 0.31087613105773926,
"X": 0.2013435810804367,
"Type": "rightEyeUp"
},
{
"Y": 0.31308478116989136,
"X": 0.20125225186347961,
"Type": "rightEyeDown"
},
{
"Y": 0.3264555335044861,
"X": 0.19483911991119385,
"Type": "noseLeft"
},
{
"Y": 0.3265785574913025,
"X": 0.19839303195476532,
"Type": "noseRight"
},
{
"Y": 0.3319154679775238,
"X": 0.196599081158638,
"Type": "mouthUp"
},
{
"Y": 0.3392537832260132,
"X": 0.19649912416934967,
"Type": "mouthDown"
},
{
"Y": 0.311367392539978,
"X": 0.1916557103395462,
"Type": "leftPupil"
},
{
"Y": 0.3120582699775696,
"X": 0.20143891870975494,
"Type": "rightPupil"
},
{
"Y": 0.31476160883903503,
"X": 0.18458032608032227,
"Type": "upperJawlineLeft"
},
{
"Y": 0.3398161828517914,
"X": 0.18679481744766235,
"Type": "midJawlineLeft"
},
{
"Y": 0.35216856002807617,
"X": 0.19623762369155884,
"Type": "chinBottom"
},
{
"Y": 0.34082692861557007,
"X": 0.2045571506023407,
"Type": "midJawlineRight"
},
{
"Y": 0.3160339295864105,
"X": 0.20668834447860718,
"Type": "upperJawlineRight"
}
],
"Pose": {
"Yaw": 4.778820514678955,
"Roll": 1.7387386560440063,
"Pitch": 11.82911205291748
},
"Emotions": [
{
"Confidence": 47.9405403137207,
"Type": "CALM"
},
{
"Confidence": 45.432857513427734,
"Type": "ANGRY"
},
{
"Confidence": 45.953487396240234,
"Type": "HAPPY"
},
{
"Confidence": 45.215728759765625,
"Type": "SURPRISED"
},
{
"Confidence": 50.013206481933594,
"Type": "SAD"
},
{
"Confidence": 45.30225372314453,
"Type": "CONFUSED"
},
{
"Confidence": 45.14192199707031,
"Type": "DISGUSTED"
}
],
"AgeRange": {
"High": 43,
"Low": 26
},
"EyesOpen": {
"Confidence": 54.95812225341797,
"Value": true
},
"BoundingBox": {
"Width": 0.02271346002817154,
"Top": 0.28692546486854553,
"Left": 0.1841897815465927,
"Height": 0.06893482059240341
},
"Smile": {
"Confidence": 53.493797302246094,
"Value": false
},
"MouthOpen": {
"Confidence": 53.51670837402344,
"Value": false
},
"Quality": {
"Sharpness": 53.330047607421875,
"Brightness": 81.31917572021484
},
"Mustache": {
"Confidence": 54.971839904785156,
"Value": false
},
"Beard": {
"Confidence": 54.136474609375,
"Value": false
}
},
"Face": {
"BoundingBox": {
"Width": 0.02271346002817154,
"Top": 0.28692546486854553,
"Left": 0.1841897815465927,
"Height": 0.06893482059240341
},
"FaceId": "570eb8a6-72b8-4381-a1a2-9112aa2b348e",
"ExternalImageId": "ImageContaining40plusfaces.jpg",
"Confidence": 99.99859619140625,
"ImageId": "7f09400e-2de8-3d11-af05-223f13f9ef76"
}
}
]
}
4.- Then I issued SearchFaces using the previously detected FaceId:
> aws rekognition search-faces --face-id "a54ef57e-7003-4721-b7e1-703d9f039da9" --collection-id "MainCollection"
Voilà! The face was detected in the second source image, as required...
{
"SearchedFaceId": "a54ef57e-7003-4721-b7e1-703d9f039da9",
"FaceModelVersion": "4.0",
"FaceMatches": [
{
"Face": {
"BoundingBox": {
"Width": 0.022825799882411957,
"Top": 0.31017398834228516,
"Left": 0.4018920063972473,
"Height": 0.06067270040512085
},
"FaceId": "bfd58e70-2bcf-403a-87da-6137c28ccbdd",
"ExternalImageId": "ImageContaining40plusfaces.jpg",
"Confidence": 100.0,
"ImageId": "7f09400e-2de8-3d11-af05-223f13f9ef76"
},
"Similarity": 92.36637115478516
}
]
}
So now I have to do the same for all the other faces detected in source image #1 and then compare them against the faces detected in source image #2 using the same set of commands (a small boto3 sketch of that loop follows below)!
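A small boto3 sketch of that loop, assuming both images have been indexed into MainCollection with their file names as ExternalImageId (as in the CLI commands above) and an arbitrary 90% match threshold:

# Sketch: after indexing both images into MainCollection (as above), list every
# face that came from image 1 and search the collection for matches from image 2.
import boto3

rekognition = boto3.client("rekognition")
COLLECTION = "MainCollection"
IMAGE1_ID = "cristian.jpg"                    # ExternalImageId used for image 1
IMAGE2_ID = "ImageContaining40plusfaces.jpg"  # ExternalImageId used for image 2

# Collect every indexed face, following pagination.
faces, token = [], None
while True:
    kwargs = {"CollectionId": COLLECTION, "MaxResults": 100}
    if token:
        kwargs["NextToken"] = token
    resp = rekognition.list_faces(**kwargs)
    faces.extend(resp["Faces"])
    token = resp.get("NextToken")
    if not token:
        break

# For each face indexed from image 1, search the collection and keep only
# matches that were indexed from image 2.
for face in faces:
    if face["ExternalImageId"] != IMAGE1_ID:
        continue
    matches = rekognition.search_faces(
        CollectionId=COLLECTION,
        FaceId=face["FaceId"],
        FaceMatchThreshold=90,
    )["FaceMatches"]
    for m in matches:
        if m["Face"]["ExternalImageId"] == IMAGE2_ID:
            print(face["FaceId"], "->", m["Face"]["FaceId"],
                  f"similarity {m['Similarity']:.1f}%")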