Google CloudML serving_input_receiver_fn() b64 decode error
I am sending a base64-encoded image via AJAX POST to a model hosted on Google CloudML. I get an error telling me that my input_fn() cannot decode the image and convert it to a JPEG.
Error:
Prediction failed: Error during model execution:
AbortionError(code=StatusCode.INVALID_ARGUMENT,
details="Expected image (JPEG, PNG, or GIF), got
unknown format starting with 'u3Z2f0{0
1z[=12=]621607' [[{{node map/while
/DecodeJpeg}} = DecodeJpeg[_output_shapes=
[[?,?,3]], acceptable_fraction=1, channels=3,
dct_method="", fancy_upscaling=true, ratio=1,
try_recover_truncated=false,
_device="/job:localhost/replica:0 /task:0
/device:CPU:0"](map/while/TensorArrayReadV3)]]")
The full serving_input_receiver_fn() is below.
I believe the first step is to take the incoming b64-encoded string and decode it, which is done with:
image = tensorflow.io.decode_base64(image_str_tensor)
I think the next step is to open the bytes, but this is where I don't know how to handle the decoded b64 string with TensorFlow code and need help.
In a Python Flask app this could be done with:
image = Image.open(io.BytesIO(decoded))
- Presumably the bytes are then passed to tf.image.decode_jpeg to decode them:
image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
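One subtlety that can produce exactly this kind of "unknown format" garbage: tf.io.decode_base64 decodes *web-safe* base64 (alphabet ending in `-` and `_`), while most browser-side encoders, such as JavaScript's btoa(), emit *standard* base64 (ending in `+` and `/`). The difference between the two alphabets can be seen with Python's stdlib alone:

```python
import base64

data = b"\xfb\xff\xff"  # bytes chosen so the encoding uses alphabet indices 62 and 63

std = base64.b64encode(data)              # standard alphabet: '+' and '/'
websafe = base64.urlsafe_b64encode(data)  # web-safe alphabet: '-' and '_'

print(std)      # b'+///'
print(websafe)  # b'-___'

# Translating '+' -> '-' and '/' -> '_' converts one form into the other
assert std.replace(b"+", b"-").replace(b"/", b"_") == websafe
```

So if the client sends standard base64 and the graph decodes it as web-safe (or the string is never base64-decoded at all), the JPEG decoder sees garbage bytes rather than a JPEG header.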
The full input_fn() code:
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tensorflow.io.decode_base64(image_str_tensor)
        image = tensorflow.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        image = tensorflow.expand_dims(image, 0)
        image = tensorflow.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
        image = tensorflow.squeeze(image, axis=[0])
        image = tensorflow.cast(image, dtype=tensorflow.uint8)
        return image
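Separately, note a likely bug in the snippet above: the result of tensorflow.io.decode_base64 is assigned to image, but tensorflow.image.decode_jpeg is then called on the original image_str_tensor, so the base64-decoding step is silently discarded. The intended data flow, sketched here with Python's stdlib standing in for the TF ops, feeds each step's output into the next:

```python
import base64

def prepare_image(image_str):
    # Step 1: base64 string -> raw JPEG bytes
    # (stand-in for tensorflow.io.decode_base64, which expects web-safe base64)
    raw_bytes = base64.urlsafe_b64decode(image_str)
    # Step 2: raw bytes -> pixels (stand-in for tensorflow.image.decode_jpeg).
    # The crucial point: step 2 consumes step 1's OUTPUT, not the original input.
    return raw_bytes

payload = base64.urlsafe_b64encode(b"\xff\xd8\xff\xe0")  # fake JPEG magic bytes
assert prepare_image(payload) == b"\xff\xd8\xff\xe0"
```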
How do I decode my b64 string back to a JPEG, and then convert the JPEG into a tensor?
Here is an example for handling b64 images.
import os
import shutil

import tensorflow as tf

HEIGHT = 224
WIDTH = 224
CHANNELS = 3
IMAGE_SHAPE = (HEIGHT, WIDTH)
version = 'v1'

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        return image_preprocessing(image)

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})

export_path = os.path.join('/tmp/models/json_b64', version)
if os.path.exists(export_path):  # clean up old exports with this version
    shutil.rmtree(export_path)
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
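On the client side, garbage like 'u3Z2f0{0...' reaching DecodeJpeg usually means the request body was not shaped the way the prediction service expects. With an input alias ending in `_bytes` (as in `image_bytes` above), Cloud ML Engine's online prediction API expects each instance to wrap the base64 payload in a `{"b64": ...}` object, which the service base64-decodes before the string reaches the graph (worth confirming against the current Cloud ML Engine docs). A minimal sketch of building such a request body, using a fake JPEG byte string as a stand-in for a real image file:

```python
import base64
import json

def build_request(jpeg_bytes):
    """Wrap raw JPEG bytes in the JSON body Cloud ML Engine expects.

    The {"b64": ...} wrapper tells the service to base64-decode the value,
    so the graph's 'image_bytes' placeholder receives raw JPEG bytes.
    """
    b64 = base64.b64encode(jpeg_bytes).decode("utf-8")
    return json.dumps({"instances": [{"image_bytes": {"b64": b64}}]})

body = build_request(b"\xff\xd8\xff\xe0 fake jpeg bytes")
print(body)
```

The same body works whether it is sent via the gcloud CLI's JSON file format or an AJAX POST to the predict endpoint; what matters is the `{"b64": ...}` wrapping and the `_bytes` suffix on the tensor alias.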