Running multiple tensorflow sessions subsequently

I am developing a simple REST controller with gunicorn and flask.

At each REST call, I execute the following code:

@app.route('/objects', methods=['GET'])
def get_objects():
    video_title = request.args.get('video_title')
    video_path = "../../video/" + video_title
    cl.logger.info(video_path)
    start = request.args.get('start')
    stop = request.args.get('stop')
    scene = [start, stop]

    frames = images_utils.extract_frames(video_path, scene[0], scene[1], 1)
    cl.logger.info(scene[0]+" "+scene[1])
    objects = list()
    # Run the frozen detection model on the extracted frames
    model = GenericDetector('../resources/open_images/frozen_inference_graph.pb', '../resources/open_images/labels.txt')
    model.run(frames)
    for result in model.get_boxes_and_labels():
        if result is not None:
            objects.append(result)

    data = {'message': {
        'start_time': scene[0],
        'end_time': scene[1],
        'path': video_path,
        'objects':objects,
    }, 'metadata_type': 'detection'}

    return jsonify({'status': data}), 200

This code runs a TensorFlow frozen model, as follows:

from multiprocessing import Process
import json

import numpy as np
import tensorflow as tf


class GenericDetector(Process):

    def __init__(self, model, labels):
        # Load a (frozen) TensorFlow model into memory.
        self.detection_graph = tf.Graph()
        with self.detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(model, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')

        self.boxes_and_labels = []

        # Load the label map
        with open(labels) as f:
            txt_labels = f.read()
            self.labels = json.loads(txt_labels)


    def run(self, frames):
        tf.reset_default_graph()
        with self.detection_graph.as_default():
            config = tf.ConfigProto()
            config.gpu_options.allow_growth = True
            with tf.Session(graph=self.detection_graph, config=config) as sess:

                image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
                # Each box represents a part of the image where a particular object was detected.
                detection_boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
                # Each score represents the level of confidence for each of the detected objects.
                detection_scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
                detection_classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
                num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')

                i = 0
                for frame in frames:

                    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                    image_np_expanded = np.expand_dims(frame, axis=0)

                    # Actual detection.
                    (boxes, scores, classes, num) = sess.run(
                        [detection_boxes, detection_scores, detection_classes, num_detections], \
                        feed_dict={image_tensor: image_np_expanded})

                    boxes = np.squeeze(boxes)
                    classes = np.squeeze(classes).astype(np.int32)
                    scores = np.squeeze(scores)

                    for j, box in enumerate(boxes):
                        if all(v == 0 for v in box):
                            continue

                        self.boxes_and_labels.append(
                            {
                                "ymin": str(box[0]),
                                "xmin": str(box[1]),
                                "ymax": str(box[2]),
                                "xmax": str(box[3]),
                                "label": self.labels[str(classes[j])],
                                "score": str(scores[j]),
                                "frame":i
                            })
                    i += 1
            sess.close()

    def get_boxes_and_labels(self):
        return self.boxes_and_labels

Everything seems to work fine, but as soon as I send a second request to my server, my GPU (GTX 1050) runs out of memory:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor of shape [3,3,256,256] and type float

If I try to make a call after that, it works most of the time. Sometimes it works on subsequent calls too. I tried executing the GenericDetector in a separate process (making GenericDetector extend Process), but it did not help. I read that the GPU memory should be released once the process that executed the REST GET dies, so I also tried adding a sleep(30) after running the tensorflow model, but no luck. What am I doing wrong?

This error means you are trying to fit something larger than the available memory into the GPU. Maybe you can reduce the number of parameters somewhere in your model to make it lighter?

The problem is that TensorFlow allocates memory for the process, not for the Session, so closing the session is not enough (even if you set the allow_growth option). The documentation describes that option as follows:

The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process. Note that we do not release memory, since that can lead to even worse memory fragmentation.
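
For reference, a minimal sketch of these TF 1.x ConfigProto GPU options, assuming the same API the question already uses; besides allow_growth, gpu_options also exposes per_process_gpu_memory_fraction as a hard cap (the 0.5 below is purely illustrative):

import tensorflow as tf

config = tf.ConfigProto()
# Allocate GPU memory lazily instead of grabbing it all up front;
# note that memory that has been grown is never handed back to the driver.
config.gpu_options.allow_growth = True
# Alternatively, cap this process at a fixed fraction of the GPU memory.
config.gpu_options.per_process_gpu_memory_fraction = 0.5

with tf.Session(config=config) as sess:
    pass  # run the graph as usual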

There is an issue on the TF github with some solutions; for example, you can decorate your run method with the RunAsCUDASubprocess proposed in that thread.
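
As a rough sketch of that idea (not the exact RunAsCUDASubprocess decorator from the thread), you could run the detection in a short-lived child process, so its GPU memory is returned to the driver when the child exits; the detect_in_subprocess helper below is hypothetical:

from multiprocessing import get_context

def _worker(queue, model_path, labels_path, frames):
    # TensorFlow/CUDA are initialised only inside the child process.
    detector = GenericDetector(model_path, labels_path)
    detector.run(frames)
    queue.put(detector.get_boxes_and_labels())

def detect_in_subprocess(model_path, labels_path, frames):
    ctx = get_context('spawn')  # 'spawn' avoids inheriting CUDA state from the parent
    queue = ctx.Queue()
    proc = ctx.Process(target=_worker, args=(queue, model_path, labels_path, frames))
    proc.start()
    results = queue.get()  # read before join() so a large result cannot block the child
    proc.join()            # once the child exits, the GPU memory it allocated is freed
    return results

The Flask route would then call detect_in_subprocess(...) instead of constructing GenericDetector directly, so each request's GPU allocations die with the child process.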