DSPGraph when Using CoreML

I am trying to use a SoundAnalysis model that accepts audioSamples (Float32 15600) and returns vggishFeature (MultiArray (Float32 12288)), but I am getting this error:

 -01-22 10:45:43.404715+0000 SRTester[25654:891998] [DSPGraph] throwing DSPGraph::Exception with backtrace:
0       0x7fff2bdc0df9  DSPGraph::Graph::processMultiple(DSPGraph::GraphIOData*, DSPGraph::GraphIOData*) + 249
1       0x7fff2bd2223d  SoundAnalysis::primeGraph(DSPGraph::Graph&, int) + 542
2       0x7fff2bcfbaae  -[SNSoundClassifier primeGraph] + 134
3       0x7fff2bd052c2  -[SNAnalyzerHost primeAnalyzerGraph] + 88
4       0x7fff2bd0f268  -[SNAudioStreamAnalyzer configureAnalysisTreeWithFormat:] + 263
5       0x7fff2bd0f74b  -[SNAudioStreamAnalyzer _analyzeAudioBuffer:atAudioFramePosition:] + 303
6          0x10af1cd48  _dispatch_client_callout + 8
7          0x10af2b9bf  _dispatch_lane_barrier_sync_invoke_and_complete + 132
8       0x7fff2bd0f5d8  -[SNAudioStreamAnalyzer analyzeAudioBuffer:atAudioFramePosition:] + 121
9          0x10abb3116  $s8SRTester18homeViewControllerC16startAudioEngine33_CDAAA73F093090436FCAC2E152DEFC64LLyyFySo16AVAudioPCMBufferC_So0M4TimeCtcfU_yycfU_ + 326
10         0x10abb315d  $sIeg_IeyB_TR + 45
11         0x10af1bdd4  _dispatch_call_block_and_release + 12
12         0x10af1cd48  _dispatch_client_callout + 8
13         0x10af235ef  _dispatch_lane_serial_drain + 788
14         0x10af2417f  _dispatch_lane_invoke + 422
15         0x10af2fa4e  _dispatch_workloop_worker_thread + 719
[truncated?]
libc++abi.dylib: terminating with uncaught exception of type DSPGraph::Exception
(lldb)

The error is thrown on this line:

self.analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)

which belongs to this block of code:

/// Starts the audio engine and installs a tap that feeds the sound analyzer
private func startAudioEngine() {
    self.isLisitingForInferance = true

    // Create the classification request and register the results observer
    do {
        let request = try SNClassifySoundRequest(mlModel: soundClassifier.model)
        try analyzer.add(request, withObserver: resultsObserver) // sets the results observer
    } catch {
        print("Unable to prepare request: \(error.localizedDescription)")
        return
    }

    // Install a tap on the input node and analyze each buffer asynchronously
    audioEngine.inputNode.installTap(onBus: 0, bufferSize: 16000, format: inputFormat) { buffer, time in
        self.analysisQueue.async {
            self.analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime) // this line receives a SIGABRT
        }
    }

    do {
        try audioEngine.start()
    } catch {
        print("Error starting the audio engine: \(error.localizedDescription)")
    }
}

Here is the observer class (although it never even gets triggered):

class ResultsObserver: NSObject, SNResultsObserving {
    var delegate: iPhoneSpeakerRecongitionDelegate?

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let classification = result.classifications.first else { return }

        let confidence = classification.confidence * 100.0

        if confidence > 60 {
            delegate?.displayPredictionResult(identifier: classification.identifier, confidence: confidence)
        }
    }
}

I managed to get it to return a different error, which is where the model incompatibility showed up.
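A quick way to spot that kind of incompatibility is to dump the model's declared inputs and outputs and compare them with what is expected here (audioSamples in, vggishFeature out). This is only a sketch, not part of the original code; pass it the same soundClassifier.model instance used in startAudioEngine() above.

import CoreML

// Sketch only: prints a Core ML model's declared inputs and outputs so the
// shapes can be checked against what SoundAnalysis is being given.
func dumpModelInterface(_ model: MLModel) {
    for (name, description) in model.modelDescription.inputDescriptionsByName {
        print("input:", name, description)    // expected here: audioSamples (Float32 15600)
    }
    for (name, description) in model.modelDescription.outputDescriptionsByName {
        print("output:", name, description)   // expected here: vggishFeature (MultiArray (Float32 12288))
    }
}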

To fix this, you have to manually move the model file into the app's directory and then add it to Xcode again; this appears to be a bug in Xcode, which otherwise stores the model in another directory within the app bundle.
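Once the .mlmodel file is a member of the app target, Xcode compiles it into a .mlmodelc inside the app bundle, and you can check at runtime that this actually happened. The sketch below assumes the file is named soundClassifier.mlmodel; the resource name is illustrative, not taken from the original project.

import CoreML
import Foundation

// Sketch only: fails loudly if the compiled model never made it into the bundle,
// which matches the symptom described above.
func loadBundledSoundClassifier() throws -> MLModel {
    guard let url = Bundle.main.url(forResource: "soundClassifier", withExtension: "mlmodelc") else {
        throw NSError(domain: "SRTester", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "Compiled model not found in app bundle"])
    }
    return try MLModel(contentsOf: url)
}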