Pipe video frames from ffmpeg to numpy array without loading whole movie into memory

I'm not sure whether what I'm asking for is feasible or practical, but I'm trying to load frames from a video in an ordered, yet "on demand" way.

Basically, what I have right now reads the whole uncompressed video into a buffer through the stdout pipe, e.g.:

import subprocess
import numpy as np

H, W = 1080, 1920 # video dimensions
video = '/path/to/video.mp4' # path to video

# ffmpeg command
command = [ "ffmpeg",
            '-i', video,
            '-pix_fmt', 'rgb24',
            '-f', 'rawvideo',
            'pipe:1' ]

# run ffmpeg and load all frames into numpy array (num_frames, H, W, 3)
pipe = subprocess.run(command, stdout=subprocess.PIPE, bufsize=10**8)
video = np.frombuffer(pipe.stdout, dtype=np.uint8).reshape(-1, H, W, 3)

# or alternatively load individual frames in a loop
nb_img = H*W*3 # H * W * 3 channels * 1-byte/channel
for i in range(0, len(pipe.stdout), nb_img):
    img = np.frombuffer(pipe.stdout, dtype=np.uint8, count=nb_img, offset=i).reshape(H, W, 3)

I'd like to know whether the same process can be done in Python without first loading the whole video into memory. In my head, I'm imagining something like this (a rough sketch follows the list):

  1. Open the buffer
  2. Seek to the memory location on demand
  3. Load the frame into a numpy array
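
A rough sketch of that idea, assuming the frame rate is known, might look like the following (FPS and read_frame are illustrative names I'm making up here; each call spawns a short-lived ffmpeg process that seeks with -ss and decodes exactly one frame):

import shlex
import subprocess
import numpy as np

H, W = 1080, 1920             # video dimensions
FPS = 25                      # assumed (known) frame rate of the source video
video = '/path/to/video.mp4'  # path to video

def read_frame(frame_idx):
    # Seek to the requested frame (expressed as a timestamp) and pipe exactly one raw frame.
    cmd = (f'ffmpeg -loglevel error -ss {frame_idx / FPS} -i {video} '
           f'-frames:v 1 -pix_fmt rgb24 -f rawvideo pipe:')
    out = subprocess.run(shlex.split(cmd), stdout=subprocess.PIPE, check=True).stdout
    return np.frombuffer(out, np.uint8).reshape(H, W, 3)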

I know there are other libraries (e.g. OpenCV) that can achieve the same behaviour, but I'd like to know:

Seeking and extracting frames without loading the whole movie into memory is possible, and relatively simple.

There is some speed penalty when the requested frame to seek to is not a keyframe. When FFmpeg is asked to seek to a non-keyframe, it seeks to the closest keyframe before the requested frame and decodes all the frames from that keyframe up to the requested one.
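
To get a feel for how much extra decoding a given seek will cost, you can list the keyframe timestamps of the file. The ffprobe call below is just an illustrative side note (it is not part of the demo code), wrapped in Python to match the rest of the examples:

import shlex
import subprocess as sp

# List only the keyframe presentation timestamps (-skip_frame nokey skips non-keyframes).
result = sp.run(
    shlex.split('ffprobe -loglevel error -select_streams v:0 -skip_frame nokey '
                '-show_entries frame=pts_time -of csv=p=0 video.mp4'),
    stdout=sp.PIPE, text=True)

print(result.stdout.splitlines())  # e.g. ['0.000000', '20.000000', '40.000000', ...] for the -g 20, 1fps test video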

The demo code sample does the following:

  • Builds a synthetic 1fps video (with a running frame counter) - handy for testing.
  • Executes FFmpeg as a sub-process with stdout as an output PIPE.
    The code sample seeks to the 11th second and sets the playing duration to 5 seconds.
  • Reads (and displays) the decoded video frames from the PIPE until there are no more frames to read.

Here is the code sample:

import numpy as np
import cv2
import subprocess as sp
import shlex

# Build synthetic 1fps video (with a frame counter):
# Set GOP size to 20 frames (place key frame every 20 frames - for testing).
#########################################################################
W, H = 320, 240 # video dimensions
video_path = 'video.mp4'  # path to video
sp.run(shlex.split(f'ffmpeg -y -f lavfi -i testsrc=size={W}x{H}:rate=1 -vcodec libx264 -g 20 -crf 17 -pix_fmt yuv420p -t 60 {video_path}'))
#########################################################################


# ffmpeg command
command = [ 'ffmpeg',
            '-ss', '00:00:11',    # Seek to the 11th second.
            '-i', video_path,
            '-pix_fmt', 'bgr24',  # bgr24 to match OpenCV's channel order
            '-f', 'rawvideo',
            '-t', '5',            # Play 5 seconds long
            'pipe:' ]

# Execute FFmpeg as sub-process with stdout as a pipe
process = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

# Load individual frames in a loop
nb_img = H*W*3  # H * W * 3 channels * 1-byte/channel

# Read decoded video frames from the PIPE until no more frames to read
while True:
    # Read decoded video frame (in raw video format) from stdout process.
    buffer = process.stdout.read(W*H*3)

    # Break the loop if buffer length is not W*H*3 (when FFmpeg streaming ends).
    if len(buffer) != W*H*3:
        break

    img = np.frombuffer(buffer, np.uint8).reshape(H, W, 3)

    cv2.imshow('img', img)  # Show the image for testing
    cv2.waitKey(1000)

process.stdout.close()
process.wait()
cv2.destroyAllWindows()

Notes:
The '-t', '5' argument is relevant when the playing duration is known in advance.
If the playing duration is not known in advance, you can remove '-t' and break the loop when needed.
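
For example, a sketch of the loop without '-t' (reusing sp, np, W, H and video_path from the example above; max_frames is just an arbitrary illustrative limit):

# Same command as above, but without '-t' (playing duration not known in advance).
command = ['ffmpeg', '-ss', '00:00:11', '-i', video_path,
           '-pix_fmt', 'bgr24', '-f', 'rawvideo', 'pipe:']

process = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

max_frames = 5  # stop after this many frames

for _ in range(max_frames):
    buffer = process.stdout.read(W*H*3)

    if len(buffer) != W*H*3:  # FFmpeg ended on its own
        break

    img = np.frombuffer(buffer, np.uint8).reshape(H, W, 3)

process.terminate()       # stop FFmpeg once enough frames were read
process.stdout.close()
process.wait()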


Time measurements:

  1. Measure reading all the frames at once.
  2. Measure reading the frames one by one in a loop.
import time

# Build a longer synthetic video for timing - 6000 frames:
sp.run(shlex.split(f'ffmpeg -y -f lavfi -i testsrc=size={W}x{H}:rate=1 -vcodec libx264 -g 20 -crf 17 -pix_fmt yuv420p -t 6000 {video_path}'))

# ffmpeg command
command = [ 'ffmpeg',
            '-ss', '00:00:11',    # Seek to the 11th second.
            '-i', video_path,
            '-pix_fmt', 'bgr24',  # bgr24 to match OpenCV's channel order
            '-f', 'rawvideo',
            '-t', '5000',         # Play 5000 seconds long (5000 frames).
            'pipe:' ]



# Load all frames into numpy array
################################################################################
t = time.time()

# run ffmpeg and load all frames into numpy array (num_frames, H, W, 3)
process = sp.run(command, stdout=sp.PIPE, bufsize=10**8)
video = np.frombuffer(process.stdout, dtype=np.uint8).reshape(-1, H, W, 3)

elapsed1 = time.time() - t
################################################################################


# Load individual frames in a loop
################################################################################
t = time.time()

# Execute FFmpeg as sub-process with stdout as a pipe
process = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

# Read decoded video frames from the PIPE until no more frames to read
while True:
    # Read decoded video frame (in raw video format) from stdout process.
    buffer = process.stdout.read(W*H*3)

    # Break the loop if buffer length is not W*H*3 (when FFmpeg streaming ends).
    if len(buffer) != W*H*3:
        break

    img = np.frombuffer(buffer, np.uint8).reshape(H, W, 3)

elapsed2 = time.time() - t

process.wait()


################################################################################

print(f'Read all frames at once elapsed time: {elapsed1}')
print(f'Read frame by frame elapsed time: {elapsed2}')

Results:

Read all frames at once elapsed time: 7.371837854385376

Read frame by frame elapsed time: 10.089557886123657

The results show that reading the frames one by one adds some overhead:

  • The overhead is relatively small.
    The overhead is probably related to Python rather than to FFmpeg.
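
If that per-frame overhead matters, one possible mitigation (my own suggestion, not something measured above) is to read several frames per stdout.read() call and slice them out of a single buffer, so fewer Python-level calls are made per frame. A sketch, reusing command, W, H, sp and np from the timing code above:

frame_bytes = W * H * 3
chunk = 16  # illustrative: number of frames per read() call

process = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

while True:
    buffer = process.stdout.read(frame_bytes * chunk)

    if not buffer:
        break

    n = len(buffer) // frame_bytes  # the last chunk may contain fewer frames
    frames = np.frombuffer(buffer[:n*frame_bytes], np.uint8).reshape(n, H, W, 3)
    # ... process the n frames in 'frames' here ...

process.stdout.close()
process.wait()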