iOS camera facetracking (Swift 3 Xcode 8)
I'm trying to make a simple camera app in which the front camera can detect faces.
This should be fairly simple:
Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate so that frames from the camera can be processed in real time.
class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate
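The snippets that follow reference several stored properties (camera, session, input, output, outputQueue, previewLayer) that are never declared in the fragments. Here is a minimal sketch of the class skeleton those snippets assume; the property types are inferred from the calls below, and the init?(coder:) override is an assumption for the case where the view comes from a storyboard:
import UIKit
import AVFoundation

class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Stored properties assumed from the snippets that follow
    var camera: AVCaptureDevice?
    var session: AVCaptureSession?
    var input: AVCaptureDeviceInput?
    var output: AVCaptureVideoDataOutput?
    var outputQueue: DispatchQueue?
    var previewLayer: AVCaptureVideoPreviewLayer?

    // Assumption: if the view is loaded from a storyboard/xib, this
    // initializer also needs to trigger the camera setup
    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        handleCamera()
    }

    // init(frame:), handleCamera() and the delegate callback shown below go here
}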
In a handleCamera function, called when the CameraView is instantiated, set up an AVCapture session and add the input from the camera.
override init(frame: CGRect) {
    super.init(frame: frame)
    handleCamera()
}

func handleCamera() {
    camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                           mediaType: AVMediaTypeVideo, position: .front)
    session = AVCaptureSession()

    // Set recovered camera as an input device for the capture session
    do {
        input = try AVCaptureDeviceInput(device: camera)
    } catch _ as NSError {
        print("ERROR: Front camera can't be used as input")
        input = nil
    }

    // Add the input from the camera to the capture session
    if session?.canAddInput(input) == true {
        session?.addInput(input)
    }
Create the output. Create a serial output queue that the data is passed to, where it is then processed by the AVCaptureVideoDataOutputSampleBufferDelegate (in this case the class itself). Add the output to the session.
    output = AVCaptureVideoDataOutput()
    output?.alwaysDiscardsLateVideoFrames = true
    outputQueue = DispatchQueue(label: "outputQueue")
    output?.setSampleBufferDelegate(self, queue: outputQueue)

    // Add the front camera output to the session for use and modification
    if session?.canAddOutput(output) == true {
        session?.addOutput(output)
    } else {
        // Front camera can't be used as output: handle the error
        print("ERROR: Output not viable")
    }
Set up the camera preview view and run the session.
    // Setup camera preview with the session input
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
    previewLayer?.frame = self.bounds
    self.layer.addSublayer(previewLayer!)

    // Process the camera and run it onto the preview
    session?.startRunning()
}
In the delegate's captureOutput function, convert the received sample buffer to a CIImage in order to detect faces, and give feedback if a face is found.
func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: cameraImage)

    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")

        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }

        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
My problem: I can get the camera running, but even though I've tried many different pieces of code from all over the internet, I have never been able to get captureOutput to detect a face. Either the app never enters the function, or it crashes because of a variable that doesn't work, most often because the sampleBuffer variable is nil.
What am I doing wrong?
You need to change your captureOutput function parameters to the following:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)

Your captureOutput function is the one that is called when a buffer is dropped, not when a frame is captured from the camera.
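For reference, here is a minimal sketch of the delegate method with that corrected Swift 3 signature, reusing the detection code from the question. One assumption added here: since the delegate fires on the serial outputQueue, the UIKit work (addSubview) is dispatched back to the main queue:
func captureOutput(_ captureOutput: AVCaptureOutput!,
                   didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                   from connection: AVCaptureConnection!) {
    // Called once per frame delivered by the camera (unlike didDrop)
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let features = faceDetector?.features(in: cameraImage) ?? []

    for case let face as CIFaceFeature in features {
        print("Found bounds are \(face.bounds)")

        // Assumption: UIKit calls must happen on the main thread, while this
        // delegate runs on the serial outputQueue
        DispatchQueue.main.async {
            let faceBox = UIView(frame: face.bounds)
            faceBox.layer.borderWidth = 3
            faceBox.layer.borderColor = UIColor.red.cgColor
            faceBox.backgroundColor = UIColor.clear
            self.addSubview(faceBox)
        }
    }
}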