
What is the right way to convert a saved TensorFlow model to TensorFlow Lite?

I have a saved TensorFlow model, the same as the models in the model zoo.

I want to convert it to TensorFlow Lite. I found the following method on the TensorFlow GitHub (my TensorFlow version is 2):

!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz 
# extract the downloaded file
!tar -xzvf ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.tar.gz
    
!pip install tf-nightly
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model('ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

open("m.tflite", "wb").write(tflite_model)

But the input and output shapes of the converted model do not match the original model; check the following:
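A sketch of the kind of check I mean, assuming the m.tflite file written above and the same saved_model directory:

import tensorflow as tf

# Inspect the shapes reported by the converted TFLite model.
interpreter = tf.lite.Interpreter(model_path='m.tflite')
print(interpreter.get_input_details()[0]['shape'])
print(interpreter.get_output_details()[0]['shape'])

# Inspect the original saved_model signature for comparison.
loaded = tf.saved_model.load('ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model')
print(loaded.signatures['serving_default'].inputs[0].shape)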

So there is a problem here! The input/output shapes should match the original model! Any ideas?

The shapes of the model's input and output should be the same, as shown below.

If the model is already in the saved_model format, you use the code below:

# if you are using same model
export_dir = 'ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model'
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)

If your model is in the Keras format, use the following instead:

# if it's a keras model 
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape= (224, 224, 3))
converter = tf.lite.TFLiteConverter.from_keras_model(model)

In both cases, the goal is to obtain the converter.
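Either way, once you have the converter the remaining steps are the same (a minimal sketch; the full example below does exactly this):

tflite_model = converter.convert()
open('m.tflite', 'wb').write(tflite_model)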

I don't have a saved_model, so I will use a Keras model and convert it to the saved_model format; the Keras model is only used as an example:

import pathlib #to use path
model = tf.keras.applications.MobileNetV2(weights="imagenet", input_shape= (224, 224, 3))
export_dir = 'imagenet/saved_model'
tf.saved_model.save(model, export_dir) #convert keras to saved model

converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  #you can also optimize for size or latency OPTIMIZE_FOR_SIZE, OPTIMIZE_FOR_LATENCY
tflite_model = converter.convert()

#save the model
tflite_model_file = pathlib.Path('m.tflite')
tflite_model_file.write_bytes(tflite_model)

tflite_interpreter = tf.lite.Interpreter(model_path= 'm.tflite') #you can load the content with model_content=tflite_model

# get shape of tflite input and output
input_details = tflite_interpreter.get_input_details()
output_details = tflite_interpreter.get_output_details()
print("Input: {}".format( input_details[0]['shape']))
print("Output:{}".format(output_details[0]['shape']))

# get shape of the origin model
print("Input:  {}".format( model.input.shape))
print("Output: {}".format(model.output.shape))

For the tflite model I get this:

For the original model I get this:

You will see that the shapes of the tflite and keras models are the same.
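With the MobileNetV2 example above, the printed shapes have roughly this form (a sketch of the expected output; first the tflite model, then the Keras model):

Input: [  1 224 224   3]
Output:[   1 1000]

Input:  (None, 224, 224, 3)
Output: (None, 1000)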

This comes from a TensorFlow GitHub issue; I solved my problem with their answer. Link

Their approach:

!pip install tf-nightly
import tensorflow as tf

## TFLite Conversion
model = tf.saved_model.load("saved_model")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 300, 300, 3])
tf.saved_model.save(model, "saved_model_updated", signatures={"serving_default":concrete_func})
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir='saved_model_updated', signature_keys=['serving_default'])

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

## TFLite Interpreter to check input shape
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
print(input_shape)

[ 1 300 300 3]
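Their snippet only prints the input shape; a minimal sketch of actually running inference on dummy data (assuming the interpreter above, and that the Select TF ops are available in the Python interpreter):

import numpy as np

# Dummy input matching the reported shape and dtype (all zeros, just to exercise the graph).
input_data = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Read back the first output tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data.shape)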

Thanks MeghnaNatraj

Just reshape your input tensor.

You can use the resize_tensor_input function, like this:

interpreter.resize_tensor_input(input_index=0, tensor_size=[1, 640, 640, 3])

Now your input shape will be: [1, 640, 640, 3]
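Note that after resizing, the tensors have to be allocated again before you set inputs or invoke the interpreter; a minimal sketch (same interpreter as above):

# After resizing, (re)allocate tensors before setting inputs or invoking.
interpreter.resize_tensor_input(input_index=0, tensor_size=[1, 640, 640, 3])
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])  # now [1, 640, 640, 3]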