Identify Required Placeholders in GCMLE Model
I deployed a model a few weeks ago, but I've forgotten which input features the model requires (yes, I know I should keep better track of this). When I run the command below
gcloud ml-engine predict \
--model $MODEL_NAME \
--version v1 \
--json-instances \
../test.json
I get the following error:
{
"error": "Prediction failed: Exception during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"input size does not match signature\")"
}
I know the problem is that there are some "required" placeholders I did not include in my JSON request (test.json), but I can't think of any way to retroactively figure out which placeholder is missing.
The problem is the same as in this question, where the poster was able to submit a prediction after including the missing "threshold" tensor.
What is the easiest way to see the expected inputs for a given model?
Assuming you have the TensorFlow graph associated with the saved model: if you can print the graph (print the protobuffer in text format) or visualize it with a tool, you will see the "signature_def" field in the graph. That should tell you the graph's inputs and outputs, and you can compare the tensors you send in your request against the inputs the graph expects.
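For example, here is a minimal sketch of inspecting the signature programmatically, assuming a TF 1.x-style SavedModel exported with the 'serve' tag and a 'serving_default' signature (as in the output further down), and assuming you have copied the export directory somewhere readable:

import tensorflow as tf

export_dir = 'path/to/saved_model'  # hypothetical local copy of the model's deploymentUri

with tf.Session(graph=tf.Graph()) as sess:
    # load() returns the MetaGraphDef, which carries the signature_def map
    meta_graph_def = tf.saved_model.loader.load(sess, ['serve'], export_dir)
    signature = meta_graph_def.signature_def['serving_default']
    for name, tensor_info in signature.inputs.items():
        # prints e.g. "age float32 Placeholder_8:0"
        print(name, tf.as_dtype(tensor_info.dtype).name, tensor_info.name)

Every key printed here is an input you must supply in each JSON instance.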
The service itself does not yet let you query a model's signature, so my suggestion is to use saved_model_cli where possible (assuming you have not deleted the original model). Something like:
gcloud ml-engine versions describe v1 --model census | grep deploymentUri
This will output something like:
deploymentUri: gs://my_bucket/path/to/Servo/1488268526779
Now, using saved_model_cli:
saved_model_cli show --all --dir gs://my_bucket/path/to/Servo/1488268526779
This will output something like:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['age'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: Placeholder_8:0
inputs['capital_gain'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: Placeholder_10:0
inputs['capital_loss'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: Placeholder_11:0
inputs['education'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder_2:0
inputs['education_num'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: Placeholder_9:0
inputs['gender'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder:0
inputs['hours_per_week'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: Placeholder_12:0
inputs['marital_status'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder_3:0
inputs['native_country'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder_7:0
inputs['occupation'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder_6:0
inputs['race'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder_1:0
inputs['relationship'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder_4:0
inputs['workclass'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: Placeholder_5:0
The given SavedModel SignatureDef contains the following output(s):
outputs['classes'] tensor_info:
dtype: DT_INT64
shape: (-1)
name: predictions/classes:0
outputs['logistic'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: predictions/logistic:0
outputs['logits'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: add:0
outputs['probabilities'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 2)
name: predictions/probabilities:0
Method name is: tensorflow/serving/predict
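From that signature it is straightforward to reconstruct the request: each line of test.json must be a JSON object containing every listed input, with strings for the DT_STRING features and numbers for the DT_FLOAT ones. For the census signature above, a single instance might look like this (the values are purely illustrative):

{"age": 25.0, "workclass": "Private", "education": "11th", "education_num": 7.0, "marital_status": "Never-married", "occupation": "Machine-op-inspct", "relationship": "Own-child", "race": "Black", "gender": "Male", "capital_gain": 0.0, "capital_loss": 0.0, "hours_per_week": 40.0, "native_country": "United-States"}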