Memory warning while trying to merge multiple video files in swift
I'm trying to merge 2 videos together using Swift. However, when I try to run this code I get a memory warning, and sometimes a crash.
My hunch is that for some reason I'm leaving the dispatch_group early and finishing the write prematurely.
However, I've also noticed that sometimes I don't even get that far.
I've also noticed that my samples.count is sometimes huge, which seems strange since each video is no more than 30 seconds long.
I'm stuck on where to even start debugging this. Any pointers are appreciated.
dispatch_group_enter(self.videoProcessingGroup)
asset.requestContentEditingInputWithOptions(options, completionHandler: { (contentEditingInput: PHContentEditingInput?, info: [NSObject : AnyObject]) -> Void in
    let avAsset = contentEditingInput?.audiovisualAsset
    let reader = try! AVAssetReader.init(asset: avAsset!)
    let videoTrack = avAsset?.tracksWithMediaType(AVMediaTypeVideo).first
    let readerOutputSettings: [String: Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    let readerOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: readerOutputSettings)
    reader.addOutput(readerOutput)
    reader.startReading()

    // Create the samples
    var samples: [CMSampleBuffer] = []
    var sample: CMSampleBufferRef?
    sample = readerOutput.copyNextSampleBuffer()
    while sample != nil {
        autoreleasepool {
            samples.append(sample!)
            sample = readerOutput.copyNextSampleBuffer()
        }
    }

    for i in 0...samples.count - 1 {
        // Get the presentation time for the frame
        var append_ok: Bool = false
        autoreleasepool {
            if let pixelBufferPool = adaptor.pixelBufferPool {
                let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
                let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                    kCFAllocatorDefault,
                    pixelBufferPool,
                    pixelBufferPointer
                )
                let frameTime = CMTimeMake(Int64(frameCount), 30)
                if var buffer = pixelBufferPointer.memory where status == 0 {
                    buffer = CMSampleBufferGetImageBuffer(samples[i])!
                    append_ok = adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                    pixelBufferPointer.destroy()
                } else {
                    NSLog("Error: Failed to allocate pixel buffer from pool")
                }
                pixelBufferPointer.dealloc(1)
                dispatch_group_leave(self.videoProcessingGroup)
            }
        }
    }
})

// Finish the session:
dispatch_group_notify(videoProcessingGroup, dispatch_get_main_queue(), {
    videoWriterInput.markAsFinished()
    videoWriter.finishWritingWithCompletionHandler({
        print("Write Ended")
        // Return writer
        print("Created asset writer for \(size.width)x\(size.height) video")
    })
})
In general, you can't fit every frame of a video asset into memory on an iOS device, or even on a desktop machine:

var samples: [CMSampleBuffer] = []

Not even when the video is only 30 seconds long. For example, at 30 frames per second, a 720p, 30-second video decoded to BGRA needs 30 * 30 * 1280 * 720 * 4 bytes = 3.2GB of memory. That's 3.5MB for every single frame! It gets even worse at 1080p or higher frame rates.
You need to merge the files progressively, frame by frame, keeping as few frames in memory at any given time as possible.
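If you do want to stay with the reader/writer approach, one way to keep only a single frame alive at a time is the writer input's pull model. This is a sketch under the assumption that `readerOutput`, `videoWriterInput`, `videoWriter`, and `adaptor` are the same objects as in the question's code; it replaces the `samples` array and the `for` loop entirely:

```swift
import AVFoundation

// Sketch: let the writer pull samples one at a time instead of
// collecting every decoded frame into an array first.
let mergeQueue = dispatch_queue_create("video.merge", DISPATCH_QUEUE_SERIAL)
videoWriterInput.requestMediaDataWhenReadyOnQueue(mergeQueue) {
    while videoWriterInput.readyForMoreMediaData {
        guard let sample = readerOutput.copyNextSampleBuffer() else {
            // No more samples: finish the write here instead of
            // relying on a dispatch group.
            videoWriterInput.markAsFinished()
            videoWriter.finishWritingWithCompletionHandler {
                print("Write Ended")
            }
            return
        }
        if let imageBuffer = CMSampleBufferGetImageBuffer(sample) {
            // Reuse the sample's own presentation timestamp rather
            // than recomputing one from a frame counter.
            let time = CMSampleBufferGetPresentationTimeStamp(sample)
            adaptor.appendPixelBuffer(imageBuffer, withPresentationTime: time)
        }
    }
}
```

Only the sample currently being appended is retained, so memory stays flat regardless of the video's length.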
But for an operation as simple as merging, you shouldn't need to handle frames yourself at all. You can create an AVMutableComposition, append the individual AVAssets, and then export the merged file with an AVAssetExportSession.
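A sketch of that composition-based approach, using the same Swift 2-era API as the question (the `assets` array and `outputURL` are assumed inputs; the preset and file type are just example choices):

```swift
import AVFoundation

// Sketch: merge videos back to back without ever decoding frames.
func mergeAssets(assets: [AVAsset], outputURL: NSURL, completion: (NSError?) -> Void) {
    let composition = AVMutableComposition()
    let videoTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo,
        preferredTrackID: kCMPersistentTrackID_Invalid)

    // Append each asset's video track at the current end of the timeline.
    var cursor = kCMTimeZero
    for asset in assets {
        guard let sourceTrack = asset.tracksWithMediaType(AVMediaTypeVideo).first else { continue }
        do {
            try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                ofTrack: sourceTrack, atTime: cursor)
        } catch {
            completion(error as NSError)
            return
        }
        cursor = CMTimeAdd(cursor, asset.duration)
    }

    // Export the composition; AVFoundation streams the data itself,
    // so memory use stays small no matter how long the videos are.
    guard let session = AVAssetExportSession(asset: composition,
        presetName: AVAssetExportPresetHighestQuality) else {
        completion(nil)
        return
    }
    session.outputURL = outputURL
    session.outputFileType = AVFileTypeMPEG4
    session.exportAsynchronouslyWithCompletionHandler {
        completion(session.error)
    }
}
```

This also sidesteps the dispatch-group bookkeeping, since the export session has a single completion handler.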