launch_benchmark.py of the Intel Model Zoo fails for ResNet34
When running launch_benchmark.py from the Intel Model Zoo GitHub repository (https://github.com/IntelAI/models) with the following parameters:
python launch_benchmark.py \
    --data-location /home/user/coco/output/ \
    --in-graph /home/user/ssd_resnet34_fp32_bs1_pretrained_model.pb \
    --model-source-dir /home/user/tensorflow/models \
    --model-name ssd-resnet34 \
    --framework tensorflow \
    --precision fp32 \
    --mode inference \
    --socket-id 0 \
    --batch-size=1 \
    --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
    --accuracy-only
I get the following error:
Inference for accuracy check.
Traceback (most recent call last):
File "/tmp/benchmarks/scripts/tf_cnn_benchmarks/models/ssd_model.py", line 507, in postprocess
import coco_metric # pylint: disable=g-import-not-at-top
File "/tmp/benchmarks/scripts/tf_cnn_benchmarks/coco_metric.py", line 32, in
from pycocotools.coco import COCO
File "/workspace/models/research/pycocotools/coco.py", line 55, in
from . import mask as maskUtils
File "/workspace/models/research/pycocotools/mask.py", line 3, in
import pycocotools._mask as _mask
ImportError: No module named 'pycocotools._mask'
The PYTHONPATH is: "/home/user/Tensorflowmodels/models/research:/home/user/Tensorflowmodels/models/research/slim"
The COCO API in /home/user/cocoapi/PythonAPI was compiled with python3.6, and the resulting pycocotools package was copied to /home/user/Tensorflowmodels/models/research.
launch_benchmark.py under /home/user/IntelModelsAI/benchmarks/ is also run with python3.6.
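For reference, the setup described above amounts to roughly the following commands (a sketch assuming the standard cocoapi Makefile, whose default target runs python setup.py build_ext --inplace, and assuming python resolves to python3.6 here; the paths are the ones from this question):

# Build the COCO Python API; this compiles the pycocotools._mask C extension in place
cd /home/user/cocoapi/PythonAPI
make

# Copy the compiled package into the TensorFlow models research directory that is on the PYTHONPATH
cp -r pycocotools /home/user/Tensorflowmodels/models/research/

# PYTHONPATH as listed above
export PYTHONPATH=/home/user/Tensorflowmodels/models/research:/home/user/Tensorflowmodels/models/research/slim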
There is a model workload container available for SSD-ResNet34 FP32 inference.
This container includes all of the code, dependencies/installs, and the pretrained model needed to run the model. You only need to provide the path to your preprocessed COCO dataset and an output directory where the log files will be written. The container also includes quick start scripts for common use cases. In your case, you can use the fp32_accuracy.sh script, which uses the same parameters as above (batch size 1, socket 0, and accuracy only).
Below is an example of how the container can be used to run the SSD-ResNet34 accuracy test:
DATASET_DIR=/home/user/coco/output
OUTPUT_DIR=/home/user/logs
docker run \
  --env DATASET_DIR=${DATASET_DIR} \
  --env OUTPUT_DIR=${OUTPUT_DIR} \
  --volume ${DATASET_DIR}:${DATASET_DIR} \
  --volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
  --privileged --init -t \
  intel/object-detection:tf-2.3.0-imz-2.2.0-ssd-resnet34-fp32-inference \
  /bin/bash quickstart/fp32_accuracy.sh
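If the image is not already available on your machine, it can be pulled first with the same tag (assuming access to the public Docker Hub registry):

docker pull intel/object-detection:tf-2.3.0-imz-2.2.0-ssd-resnet34-fp32-inference

When the script finishes, the accuracy results and log files will be in the directory you mounted as ${OUTPUT_DIR}.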