captureOutput not being called by AVCaptureAudioDataOutputSampleBufferDelegate
I have an app that records video, but I need it to show the user, in real time, the pitch level of the sound the microphone is capturing. I have been able to record audio and video to an MP4 successfully using AVCaptureSession. However, when I add an AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate, I receive no errors, yet the captureOutput function is never called once the session starts.

Here is the code:
import UIKit
import AVFoundation
import CoreLocation

class ViewController: UIViewController,
        AVCaptureVideoDataOutputSampleBufferDelegate,
        AVCaptureFileOutputRecordingDelegate, CLLocationManagerDelegate,
        AVCaptureAudioDataOutputSampleBufferDelegate {

    //(assumed outlet: cameraView is referenced below but was not declared in the original post)
    @IBOutlet weak var cameraView: UIView!

    var videoFileOutput: AVCaptureMovieFileOutput!
    let session = AVCaptureSession()
    var outputURL: URL!
    var timer: Timer!
    var locationManager: CLLocationManager!
    var currentMagnitudeValue: CGFloat!
    var defaultMagnitudeValue: CGFloat!
    var visualMagnitudeValue: CGFloat!
    var soundLiveOutput: AVCaptureAudioDataOutput!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupAVCapture()
    }

    func setupAVCapture() {
        session.beginConfiguration()

        //Add the camera INPUT to the session
        let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                  for: .video, position: .front)
        guard
            let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!),
            session.canAddInput(videoDeviceInput)
            else { return }
        session.addInput(videoDeviceInput)

        //Add the microphone INPUT to the session
        let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified)
        guard
            let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice!),
            session.canAddInput(audioDeviceInput)
            else { return }
        session.addInput(audioDeviceInput)

        //Add the video file OUTPUT to the session
        videoFileOutput = AVCaptureMovieFileOutput()
        guard session.canAddOutput(videoFileOutput) else { return }
        if (session.canAddOutput(videoFileOutput)) {
            session.addOutput(videoFileOutput)
        }

        //Add the audio output so we can get PITCH of the sounds
        //AND assign the SampleBufferDelegate
        soundLiveOutput = AVCaptureAudioDataOutput()
        soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
        if (session.canAddOutput(soundLiveOutput)) {
            session.addOutput(soundLiveOutput)
            print("Live AudioDataOutput added")
        } else {
            print("Could not add AudioDataOutput")
        }

        //Preview Layer
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        let rootLayer: CALayer = self.cameraView.layer
        rootLayer.masksToBounds = true
        previewLayer.frame = rootLayer.bounds
        rootLayer.addSublayer(previewLayer)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

        //Finalize the session
        session.commitConfiguration()

        //Begin the session
        session.startRunning()
    }

    func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from: AVCaptureConnection) {
        print("Bingo")
    }

    //Stub required for AVCaptureFileOutputRecordingDelegate conformance
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    }
}
Expected output:
Bingo
Bingo
Bingo
...
What I have read:

- A case where the user did not declare the captureOutput method correctly.
- A case where the user did not declare the captureOutput method at all.
- Apple - AVCaptureAudioDataOutputSampleBufferDelegate - Apple's documentation on the delegate and its method - the method matches the one I have declared.

Other common mistakes I have come across online:

- Declarations written for older versions of Swift (I am using v4.1)
- One article apparently claiming that, after Swift 4.0, AVCaptureMetadataOutput replaced AVCaptureAudioDataOutput - I couldn't find this in Apple's documentation, but I tried it anyway; likewise, the metadataOutput function was never called.

I am fresh out of ideas. Am I missing something obvious?
OK, no one got back to me on this, but after experimenting I found that the correct way to declare the captureOutput method for Swift 4 is:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Do your stuff here
}

Unfortunately, the documentation for this online is very poor. You just have to get it exactly right: no error is thrown if you misspell or mislabel the parameters, because it is an optional function.
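For the original goal of showing a live sound level, the body of that correctly-declared method can read the per-channel metering that AVCaptureConnection exposes. Below is a minimal sketch, assuming a decibel level is enough for the on-screen meter; true pitch detection would require analyzing the sample buffer (e.g. with an FFT), which is not shown here:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Audio connections expose per-channel metering;
    //averagePowerLevel is in dB (0 = full scale, more negative = quieter)
    guard let channel = connection.audioChannels.first else { return }
    let powerDb = channel.averagePowerLevel
    //The delegate fires on the background queue it was registered with,
    //so hop to the main queue before touching any UI (placeholder print here)
    DispatchQueue.main.async {
        print("Average power: \(powerDb) dB")
    }
}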
The method you were using has been updated to this one, which is called by both AVCaptureAudioDataOutput and AVCaptureVideoDataOutput. Make sure you check which output the sample buffer came from before writing it to your asset writer:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Make sure you check the output before using the sample buffer
    if output == audioDataOutput {
        //Use the sample buffer for audio
    }
}
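For that comparison to work, both outputs have to be retained when the session is configured, so the single callback can tell them apart. A minimal sketch; the class name RecorderDelegate and the property names audioDataOutput and videoDataOutput are illustrative, not from the original post:

import AVFoundation

class RecorderDelegate: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate,
        AVCaptureVideoDataOutputSampleBufferDelegate {
    //Stored properties so the callback can identify which output fired
    let audioDataOutput = AVCaptureAudioDataOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()

    //Both delegate protocols share this one method
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        if output == audioDataOutput {
            //Handle audio sample buffers (metering, asset writer input, etc.)
        } else if output == videoDataOutput {
            //Handle video sample buffers
        }
    }
}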
My problem turned out to be this: my AVAudioSession and AVCaptureSession were declared as local variables, so when I started the session, it just went away. Once I moved them up to class-level variables, everything worked great!
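In code, the difference looks like this (a minimal sketch of the lifetime bug described above, not the answerer's exact code):

import UIKit
import AVFoundation

//Broken: the session is a local constant, so it is released as soon as
//this function returns and no delegate callbacks ever arrive
func setupAVCaptureLocal() {
    let session = AVCaptureSession()
    //...configure inputs and outputs, set delegates...
    session.startRunning()
}

//Fixed: the session lives as long as the view controller does
class CaptureViewController: UIViewController {
    let session = AVCaptureSession()   //class-level, stays alive
    //...
}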