Node number X (RESHAPE) failed to prepare. Tensor resize with tflite v2.2
Here is simple code to reproduce the error:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # force CPU
import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense
import tensorflow as tf

model_path = 'test.h5'

# Build and save a minimal Conv1D -> Flatten -> Dense model
model = Sequential()
model.add(Conv1D(8, (5,), input_shape=(100, 1)))
model.add(Flatten())
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.save(model_path)

# Reload with tf.keras and convert to TFLite
model = tf.keras.models.load_model(model_path, compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Resize input and output tensors to batch size 2, then allocate
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.resize_tensor_input(interpreter.get_input_details()[0]['index'], (2, 100, 1))
interpreter.resize_tensor_input(interpreter.get_output_details()[0]['index'], (2, 1))
interpreter.allocate_tensors()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-ad8e2eea467f> in <module>
27 interpreter.resize_tensor_input(interpreter.get_output_details()[0]['index'], (2,1))
28
---> 29 interpreter.allocate_tensors()
<>/tensorflow/lite/python/interpreter.py in allocate_tensors(self)
240 def allocate_tensors(self):
241 self._ensure_safe()
--> 242 return self._interpreter.AllocateTensors()
243
244 def _safe_to_run(self):
<>/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py in AllocateTensors(self)
108
109 def AllocateTensors(self):
--> 110 return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
111
112 def Invoke(self):
RuntimeError: tensorflow/lite/kernels/reshape.cc:66 num_input_elements != num_output_elements (1536 != 768)Node number 3 (RESHAPE) failed to prepare.
The problem seems to come from the reshape op inside the Flatten layer. I was able to do this kind of resizing with tensorflow 1.5, but not with version 2.2.
Here is the info for the reshape layer:
{'name': 'sequential_1/flatten_1/Reshape',
'index': 8,
'shape': array([ 1, 768], dtype=int32),
'shape_signature': array([ 1, 768], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}},
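For reference, the tensor details shown above can be listed from the interpreter itself; a minimal sketch, assuming the interpreter created in the reproduction script:

# Sketch: list every tensor in the TFLite graph to locate the Flatten/Reshape tensor.
for detail in interpreter.get_tensor_details():
    print(detail['index'], detail['name'], detail['shape'])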
I thought maybe I should also resize this layer, so I added:
interpreter.resize_tensor_input(8, (2,768))
But I got exactly the same error:
RuntimeError: tensorflow/lite/kernels/reshape.cc:66 num_input_elements != num_output_elements (1536 != 768)Node number 3 (RESHAPE) failed to prepare.
resize_tensor_input changes the batch size to 2, which seems to be what triggers this issue.
TFLite has good support for batch size = 1 (by far the most common case), but support for larger batch sizes can occasionally be problematic; in some places the batch size parameter may simply be ignored. Could you try batch size = 1 and see whether that works?
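For instance, a minimal sketch of the batch-size-1 route, assuming the tflite_model produced by the reproduction script above: feed the samples one at a time instead of resizing the input to a batch of 2.

# Sketch: run inference one sample at a time with the default batch size of 1.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()  # no resize_tensor_input, so RESHAPE prepares fine

input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

batch = np.random.rand(2, 100, 1).astype(np.float32)  # hypothetical input batch
outputs = []
for sample in batch:
    interpreter.set_tensor(input_index, sample[np.newaxis, ...])  # shape (1, 100, 1)
    interpreter.invoke()
    outputs.append(interpreter.get_tensor(output_index).copy())
outputs = np.concatenate(outputs, axis=0)  # shape (2, 1)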
When resize_tensor_input is called, TFLite recomputes the input/output shapes of every node in the graph by running each node's Prepare method. Any change made to an intermediate tensor gets overwritten, so interpreter.resize_tensor_input(8, (2,768)) does not help.
I came up with a workaround that reshapes the model before converting it to tflite: resize the Keras model's input, convert the model to a concrete function, and use from_concrete_functions instead of from_keras_model.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # force CPU
import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense
import tensorflow as tf

model_path = 'test.h5'

# Build and save the same minimal model as above
model = Sequential()
model.add(Conv1D(8, (5,), input_shape=(100, 1)))
model.add(Flatten())
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.save(model_path)

model = tf.keras.models.load_model(model_path, compile=False)

# Bake the desired batch size into the input signature before conversion
batch_size = 2
input_shape = model.inputs[0].shape.as_list()
input_shape[0] = batch_size
func = tf.function(model).get_concrete_function(
    tf.TensorSpec(input_shape, model.inputs[0].dtype))

converter = tf.lite.TFLiteConverter.from_concrete_functions([func])
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()  # succeeds: all shapes already use batch_size = 2
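A quick sanity check, assuming the interpreter created by the snippet above: the input detail now reports shape (2, 100, 1), and a batch of two samples can be fed directly.

# Sketch: run one batched inference with the interpreter created above.
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
print(input_details['shape'])  # expected: [  2 100   1]

batch = np.random.rand(2, 100, 1).astype(np.float32)  # hypothetical input batch
interpreter.set_tensor(input_details['index'], batch)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details['index'])  # shape (2, 1)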