Getting YUV from CMSampleBufferRef for video streaming
I'm building an iOS video-streaming chat application, and the library I'm using requires me to send the video data as separate YUV (or I guess YCbCr) planes.
I have the delegate set up, but I'm not sure how to get the separate YUV components out of the CMSampleBufferRef. Most of the Apple guides I've found cover capturing video frames into UIImages.
Setting up the stream format:
- (BOOL)setupWithError:(NSError **)error
{
    AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:error];
    if (!videoInput) {
        return NO;
    }
    [self.captureSession addInput:videoInput];

    self.processingQueue = dispatch_queue_create("abcdefghijk", NULL);

    [self.dataOutput setAlwaysDiscardsLateVideoFrames:YES];
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
    [self.dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:value
                                                                  forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    [self.dataOutput setSampleBufferDelegate:self queue:self.processingQueue];
    return YES;
}
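For context, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange is a bi-planar format: each pixel buffer delivered to the delegate carries two planes, a full-resolution Y plane and a single half-resolution plane with Cb and Cr interleaved, rather than three separate planes. A quick sanity check that could be dropped into the delegate (just a sketch, not part of the original setup) looks like this:

// Sketch: inspect the plane layout of an incoming sample buffer.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NSLog(@"planar: %d, plane count: %zu",
      (int)CVPixelBufferIsPlanar(pixelBuffer),
      CVPixelBufferGetPlaneCount(pixelBuffer));
// For the bi-planar 420 formats this reports a plane count of 2 (Y + interleaved CbCr).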
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (!imageBuffer) {
        return;
    }

    uint16_t width = CVPixelBufferGetWidth(imageBuffer);
    uint16_t height = CVPixelBufferGetHeight(imageBuffer);

    // This is the part I don't know how to fill in:
    uint8_t yPlane[??] = ???
    uint8_t uPlane[?] = ???
    uint8_t vPlane[?] = ???

    [self.library sendVideoFrametoFriend:self.friendNumber width:width height:height
                                  yPlane:yPlane
                                  uPlane:uPlane
                                  vPlane:vPlane
                                   error:nil];
}
Does anyone have any examples or links that could help me work this out?
Update
According to https://wiki.videolan.org/YUV there should be more Y elements than U/V elements. The library confirms this as well, expecting the following plane sizes (see the size sketch after the list):
* Y - plane should be of size: height * width
* U - plane should be of size: (height/2) * (width/2)
* V - plane should be of size: (height/2) * (width/2)
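To make those rules concrete, here is a minimal sketch of the byte counts for a hypothetical 640x480 frame (the resolution is only an illustrative assumption, not something the library mandates):

// Example only: expected plane sizes for a hypothetical 640x480 frame.
uint16_t width = 640, height = 480;
size_t ySize = (size_t)width * height;              // 640 * 480 = 307200 bytes
size_t uSize = (size_t)(width / 2) * (height / 2);  // 320 * 240 = 76800 bytes
size_t vSize = uSize;                               // same as U = 76800 bytes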
Updated: I've now read up on how the YUV buffers are laid out, and this is how you read them. I've also made sure I don't malloc on every frame.
Have fun! ;)
// pixelBuffer is the CVPixelBufferRef obtained from CMSampleBufferGetImageBuffer(sampleBuffer).
// int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);

// Lock the buffer before reading its plane base addresses.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

int yHeight  = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
int uvHeight = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);
int yWidth   = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
int uvWidth  = (int)CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);
int ybpr     = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
int uvbpr    = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

int ysize  = yHeight * ybpr;
int uvsize = uvHeight * uvbpr;

// Allocate the destination planes once, not on every frame.
static unsigned char *ypane;
if (!ypane)
    ypane = (unsigned char *)malloc(ysize);
static unsigned char *upane;
if (!upane)
    upane = (unsigned char *)malloc(uvsize);
static unsigned char *vpane;
if (!vpane)
    vpane = (unsigned char *)malloc(uvsize);

// With kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange there are only two
// planes: plane 0 is Y, plane 1 is Cb and Cr interleaved.
unsigned char *yBase  = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
unsigned char *uvBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);

// Copy the Y plane row by row, dropping any per-row padding
// (bytes-per-row can be wider than the visible width).
for (int y = 0; y < yHeight; y++)
{
    for (int x = 0; x < yWidth; x++)
    {
        ypane[y * yWidth + x] = yBase[y * ybpr + x];
    }
}

// De-interleave the CbCr plane into separate U and V planes.
for (int y = 0; y < uvHeight; y++)
{
    for (int x = 0; x < uvWidth; x++)
    {
        upane[y * uvWidth + x] = uvBase[y * uvbpr + 2 * x];
        vpane[y * uvWidth + x] = uvBase[y * uvbpr + 2 * x + 1];
    }
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
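With the three buffers filled, the planes can then be passed to the send call from the question. A sketch, assuming (as the size rules above suggest) that the library derives the U/V dimensions from the Y width and height:

// Sketch: hand the copied planes to the library using the Y-plane dimensions.
[self.library sendVideoFrametoFriend:self.friendNumber
                               width:(uint16_t)yWidth
                              height:(uint16_t)yHeight
                              yPlane:ypane
                              uPlane:upane
                              vPlane:vpane
                               error:nil];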