Tensorflow (Flask+Python): How to convert Symbolic Tensor to ndarray
I am a beginner with tensorflow (using version 1.15.x and flask).

I have already built my object detector (using the Object Detection API from TensorFlow, and exported the inference_graph and all checkpoint files locally). Now I want to set up a flask API in which I use the request.files.getlist function and run inference in the main script (the process is similar to this project, whose main script is app.py). One difference between my approach and the linked one is that I am not using yolo, and I am trying to define all the necessary variables inside the main function.

Here is my code:
# list of imported packages (..)

# customize your API through the following parameters
MODEL_NAME = './inference_graph'  # directory that contains the frozen graph (obj detector)
# Path to frozen detection graph .pb file, which contains the model that is used for object detection.
PATH_TO_CKPT = os.path.join(MODEL_NAME, 'frozen_inference_graph.pb')
# Path to label map file
PATH_TO_LABELS = './training/labelmap.pbtxt'
# Path to test image folder (here I upload a test set folder into the object detection folder)
PATH_TEST_IMAGE = './Test_Folder_Inference'  # sample folder (with 4 images) to test the API
# Number of classes the object detector can identify
NUM_CLASSES = 1

# load LABEL MAP vars
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# load the TF model into "memory"
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:  # 'rb' = read binary
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

    inf_sess = tf.Session(graph=detection_graph)  # the "session" of the graph (the session runs the graph operations)

# Define input and output Tensors (variables) for the graph (detection_graph)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected,
# so the output tensors are the detection boxes, scores and classes.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
# Number of objects detected
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

# Initialize Flask application
app = Flask(__name__)

# API that returns JSON with classes found in images
@app.route('/detections', methods=['POST'])  # app route/endpoint + the accepted method [POST in this case]
def get_detections():
    raw_images = []  # list to store/append the images from the request
    images = request.files.getlist('images')  # request function from Flask
    image_names = []
    print(len(images))  # just a check to see if the images are picked up by request.files.getlist
    for image in images:
        image_name = image.filename
        image_names.append(image_name)
        image.save(os.path.join(PATH_TEST_IMAGE, image_name))
        img_raw = tf.image.decode_image(
            open(image_name, 'rb').read(), channels=3)  # decoding of the file/image
        raw_images.append(img_raw)  # append (final list)

    num = 0
    # create a list for the final response
    response = []
    for j in range(len(image_names)):  # could add a print on the len (to check whether it is 'empty'..)
        # create list of responses for the current image
        raw_img = raw_images[j]  # here, every single raw_img is a "tensor"
        num += 1
        image_expanded = np.expand_dims(raw_img, axis=0)  # expand the batch dim/shape
        # Perform the actual detection by running the model with image_expanded as input (boxes can be an unused var)
        (boxes, scores, classes, num) = inf_sess.run(
            [detection_boxes, detection_scores, detection_classes, num_detections],
            feed_dict={image_tensor: image_expanded})
        ## and then the script continues (but the only issue is up here)
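For context, I send the request from the command prompt roughly like this (host/port are just the Flask defaults and the file names are placeholders; the form field name has to match the 'images' key used by request.files.getlist):

curl -X POST -F "images=@test1.jpg" -F "images=@test2.jpg" http://localhost:5000/detections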
When I run this script and send the request (via curl from the command prompt), it returns the following error:
Cannot convert a symbolic Tensor (decode_image/cond_jpeg/Merge:0) to a numpy array
I tried to force/convert it with np.array and with image_expanded, but that doesn't work (I tried a few combinations, but I always end up with a similar error).
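Roughly the kind of conversions I tried (on the variables from the code above):

# each of these still fails, because raw_img is a symbolic (graph) tensor rather than actual data
raw_img_np = np.array(raw_img)                                # -> Cannot convert a symbolic Tensor ...
image_expanded = np.expand_dims(np.asarray(raw_img), axis=0)  # same error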
How can I convert this symbolic tensor into an ndarray?
I found a way to solve this problem (in my case). Basically I changed how the image is read, using cv2.imread() instead of tf.image.decode_image(), and then expanded the image with np.expand_dims to get an ndarray of shape (1, height, width, 3):
for image in images:
    image_name = image.filename  # take the filename
    image_names.append(image_name)  # append
    #### new lines added/changed ####
    image_cv = cv2.imread(os.path.join(PATH_TEST_IMAGE, image_name))
    image_cv = cv2.cvtColor(image_cv, cv2.COLOR_BGR2RGB)
    image_expanded = np.expand_dims(image_cv, axis=0)
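For context, this is roughly how the changed lines sit inside the handler in my case; the session, tensors, paths and imports are the ones defined earlier, and the inference call itself is unchanged, it just receives a plain ndarray now:

for image in images:
    image_name = image.filename
    image_names.append(image_name)
    image.save(os.path.join(PATH_TEST_IMAGE, image_name))  # save the upload first (unchanged from the original loop)

    # read the saved file back as an ndarray instead of building a tf.image.decode_image tensor
    image_cv = cv2.imread(os.path.join(PATH_TEST_IMAGE, image_name))
    image_cv = cv2.cvtColor(image_cv, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR, the model expects RGB
    image_expanded = np.expand_dims(image_cv, axis=0)      # shape (1, height, width, 3)

    # same detection call as before; feed_dict now gets an ndarray, so no symbolic-tensor error
    (boxes, scores, classes, num) = inf_sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: image_expanded})

With this layout I no longer need the raw_images list or the second loop over image_names, but keeping two loops as in the original script works just as well, as long as the list holds ndarrays instead of tensors.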