Detect a face while my cam is open

I need to build an app that has nothing but a camera view, and it should detect when the camera is looking at a face. Can anyone point me in the right direction? I've already built something that detects faces in a still image, but I need it to work with the live camera. This is what I have so far:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"picture" ofType:@"JPG"];
    NSURL *url = [NSURL fileURLWithPath:path];

    CIContext *context = [CIContext contextWithOptions:nil];

    CIImage *image = [CIImage imageWithContentsOfURL:url];

    NSDictionary *options = @{CIDetectorAccuracy: CIDetectorAccuracyHigh};

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:options];

    // features contains a CIFaceFeature for each face found in the still image
    NSArray *features = [detector featuresInImage:image];

}
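(For reference, featuresInImage: returns CIFaceFeature objects; this is roughly how the results can be inspected. Just a sketch with logging only:)

for (CIFaceFeature *face in features) {
    // bounds is in the image's coordinate space (Core Image uses a bottom-left origin)
    NSLog(@"face at %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition && face.hasRightEyePosition) {
        NSLog(@"both eyes found");
    }
}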

Then, for the camera, I did the following:

-(void)viewWillAppear:(BOOL)animated{
    _session = [[AVCaptureSession alloc] init];
    [_session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];

    if([_session canAddInput:deviceInput]){
        [_session addInput:deviceInput];
    }

    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    CALayer *rootLayer = [[self view] layer];
    [rootLayer setMasksToBounds:YES];

    CGRect frame = self.frameCapture.frame;
    [previewLayer setFrame:frame];

    [rootLayer insertSublayer:previewLayer atIndex:0];
    [_session startRunning];

}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
    for(AVMetadataObject *metadataObject in metadataObjects) {
        if([metadataObject.type isEqualToString:AVMetadataObjectTypeFace]) {

           _faceDetectedLabel.text = @"face detected";
        }
    }
}

But it still doesn't detect any faces. Am I doing something wrong?

You need to add a metadata output to the session before you will get any data.

AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
// create a serial queue to handle the metadata output
dispatch_queue_t metadataOutputQueue = dispatch_queue_create("com.YourAppName.metaDataQueue.OutputQueue", DISPATCH_QUEUE_SERIAL);
[metadataOutput setMetadataObjectsDelegate:self queue:metadataOutputQueue];
if ([_session canAddOutput:metadataOutput]) {
    [_session addOutput:metadataOutput];
}
// set the object types you are interested in (this must happen after the output
// has been added to the session); then you no longer need to check the type
// in the delegate callback
metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace];
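Also make sure your view controller declares conformance to AVCaptureMetadataOutputObjectsDelegate, since you set it as the metadata delegate.

One more thing worth checking: the delegate callback runs on the serial queue you passed above, not the main queue, so UI work like setting the label should be dispatched back to the main thread. A minimal sketch, assuming the same _faceDetectedLabel outlet from your code:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // with metadataObjectTypes limited to AVMetadataObjectTypeFace,
    // anything in this array is a face
    if (metadataObjects.count > 0) {
        // UIKit must only be touched on the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            _faceDetectedLabel.text = @"face detected";
        });
    }
}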

That should do it. Let me know if it works.