AVFoundation - Reverse an AVAsset and output video file
I've seen this question asked a few times, but none of the answers seem workable.
The requirement is to reverse a video and output it as a video file (not just reverse playback), keeping the same compression, format, and frame rate as the source.
Ideally the solution would do all of this in memory or in buffers, avoiding rendering the frames out as image files (e.g. with AVAssetImageGenerator) and then recompiling them (resource-intensive, unreliable timing results, frame/image quality drift from the original, etc.).
--
My contribution:
This still isn't working, but here is the best approach I've tried so far:
- Read the sample frames into an array of CMSampleBufferRef[] using AVAssetReader.
- Write them back in reverse order using AVAssetWriter.
- Problem: the timing of each frame appears to be stored inside the CMSampleBufferRef, so even appending them in reverse order doesn't work.
- Next, I tried swapping each frame's timing information with that of its reverse/mirror frame.
- Problem: this causes AVAssetWriter to fail with an unknown error.
Next step: I'm going to look into AVAssetWriterInputPixelBufferAdaptor.
- (AVAsset *)assetByReversingAsset:(AVAsset *)asset {
    // Use a file URL; URLWithString: would produce a relative URL here.
    NSURL *tmpFileURL = [NSURL fileURLWithPath:@"/tmp/test.mp4"];
    NSError *error;

    // Initialize the AVAssetReader that will read the input asset track.
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] lastObject];
    AVAssetReaderTrackOutput *readerOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil];
    [reader addOutput:readerOutput];
    [reader startReading];

    // Read the samples into an array (the array retains each buffer).
    NSMutableArray *samples = [[NSMutableArray alloc] init];
    CMSampleBufferRef nextSample;
    while ((nextSample = [readerOutput copyNextSampleBuffer]) != NULL) {
        [samples addObject:(__bridge id)nextSample];
        CFRelease(nextSample);
    }

    // Initialize the writer that will save to our temporary file.
    CMFormatDescriptionRef formatDescription =
        CFBridgingRetain([videoTrack.formatDescriptions lastObject]);
    AVAssetWriterInput *writerInput =
        [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                       outputSettings:nil
                                     sourceFormatHint:formatDescription];
    CFRelease(formatDescription);
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:tmpFileURL
                                                      fileType:AVFileTypeMPEG4
                                                         error:&error];
    [writerInput setExpectsMediaDataInRealTime:NO];
    [writer addInput:writerInput];

    // startWriting must be called before startSessionAtSourceTime:.
    [writer startWriting];
    [writer startSessionAtSourceTime:
        CMSampleBufferGetPresentationTimeStamp((__bridge CMSampleBufferRef)samples[0])];

    // Traverse the sample frames in reverse order.
    for (NSInteger i = samples.count - 1; i >= 0; i--) {
        CMSampleBufferRef sample = (__bridge CMSampleBufferRef)samples[i];
        // The timing information is built into the CMSampleBufferRef, so we
        // make a copy with new timing info, taken from the mirror frame at
        // samples[samples.count - i - 1].
        CMSampleBufferRef mirror = (__bridge CMSampleBufferRef)samples[samples.count - i - 1];
        CMItemCount numSampleTimingEntries;
        CMSampleBufferGetSampleTimingInfoArray(mirror, 0, nil, &numSampleTimingEntries);
        CMSampleTimingInfo *timingInfo = malloc(sizeof(CMSampleTimingInfo) * numSampleTimingEntries);
        CMSampleBufferGetSampleTimingInfoArray(mirror, numSampleTimingEntries, timingInfo, &numSampleTimingEntries);

        CMSampleBufferRef sampleWithCorrectTiming;
        CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                              sample,
                                              numSampleTimingEntries,
                                              timingInfo,
                                              &sampleWithCorrectTiming);
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:sampleWithCorrectTiming];
        }
        CFRelease(sampleWithCorrectTiming);
        free(timingInfo);
    }
    [writer finishWriting];
    return [AVAsset assetWithURL:tmpFileURL];
}
I've been working on this for the past few days and was able to get it working.
Source code is here: http://www.andyhin.com/post/5/reverse-video-avfoundation
It uses AVAssetReader to read out the samples/frames, extracts the image/pixel buffer, and then appends each one with the presentation time of its mirror frame.
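The approach the linked post describes can be sketched as follows. This is a minimal, hedged sketch, not the post's exact code: it assumes a video-only asset, omits error handling and audio, and uses a crude sleep loop for writer back-pressure. The key idea is that frame i of the output reuses the presentation time of frame (count - 1 - i) of the input, so the timestamps stay monotonically increasing while the pixel buffers play backwards.

    // Sketch: reverse a video-only asset via pixel buffers (ARC assumed).
    #import <AVFoundation/AVFoundation.h>

    static void reverseAsset(AVAsset *asset, NSURL *outputURL) {
        NSError *error = nil;
        AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

        // Decode to raw pixel buffers instead of passing samples through.
        AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
        NSDictionary *readerSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey:
                                              @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                       outputSettings:readerSettings];
        [reader addOutput:output];
        [reader startReading];

        // Collect the decoded pixel buffers and their presentation times.
        NSMutableArray *buffers = [NSMutableArray array];
        NSMutableArray *times = [NSMutableArray array];
        CMSampleBufferRef sample;
        while ((sample = [output copyNextSampleBuffer]) != NULL) {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);
            [buffers addObject:(__bridge id)pixelBuffer]; // array retains the buffer
            [times addObject:[NSValue valueWithCMTime:
                                  CMSampleBufferGetPresentationTimeStamp(sample)]];
            CFRelease(sample);
        }

        // Re-encode through a pixel buffer adaptor.
        AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                          fileType:AVFileTypeMPEG4
                                                             error:&error];
        NSDictionary *writerSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
                                          AVVideoWidthKey: @(track.naturalSize.width),
                                          AVVideoHeightKey: @(track.naturalSize.height) };
        AVAssetWriterInput *input =
            [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                           outputSettings:writerSettings];
        input.expectsMediaDataInRealTime = NO;
        AVAssetWriterInputPixelBufferAdaptor *adaptor =
            [AVAssetWriterInputPixelBufferAdaptor
                assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                           sourcePixelBufferAttributes:nil];
        [writer addInput:input];
        [writer startWriting];
        [writer startSessionAtSourceTime:[times.firstObject CMTimeValue]];

        // Append buffers in reverse order, but keep the forward timestamps:
        // frame i gets the presentation time of frame (count - 1 - i).
        for (NSInteger i = (NSInteger)buffers.count - 1; i >= 0; i--) {
            while (!input.readyForMoreMediaData) {
                [NSThread sleepForTimeInterval:0.05]; // crude back-pressure wait
            }
            CMTime time = [times[buffers.count - 1 - i] CMTimeValue];
            [adaptor appendPixelBuffer:(__bridge CVPixelBufferRef)buffers[i]
                  withPresentationTime:time];
        }
        [input markAsFinished];
        [writer finishWritingWithCompletionHandler:^{ /* file ready at outputURL */ }];
    }

Note that holding every decoded pixel buffer in memory at once only works for short clips; for longer videos the post's full source processes the asset in chunks.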