How to pass parameters from Objective-C to Swift within init
In Apple's sample code, HelloMetronome for iOS (https://developer.apple.com/library/content/samplecode/HelloMetronome/Introduction/Intro.html#//apple_ref/doc/uid/TP40017587),
Apple currently hardcodes self.setTempo(120), with the 120 at the end of the following code:
override init() {
    super.init()

    // Use two triangle waves which are generated for the metronome bips.
    // Create a standard audio format, deinterleaved float.
    let format = AVAudioFormat(standardFormatWithSampleRate: 44100.0, channels: 2)

    // How many audio frames?
    let bipFrames: UInt32 = UInt32(GlobalConstants.kBipDurationSeconds * Float(format.sampleRate))

    // Create the PCM buffers.
    soundBuffer.append(AVAudioPCMBuffer(pcmFormat: format, frameCapacity: bipFrames))
    soundBuffer.append(AVAudioPCMBuffer(pcmFormat: format, frameCapacity: bipFrames))

    // Fill in the number of valid sample frames in the buffers (required).
    soundBuffer[0]?.frameLength = bipFrames
    soundBuffer[1]?.frameLength = bipFrames

    // Generate the metronome bips: the first buffer will be A 440 and the second buffer Middle C.
    let wg1 = TriangleWaveGenerator(sampleRate: Float(format.sampleRate))                   // A 440
    let wg2 = TriangleWaveGenerator(sampleRate: Float(format.sampleRate), frequency: 261.6) // Middle C
    wg1.render(soundBuffer[0]!)
    wg2.render(soundBuffer[1]!)

    // Connect player -> output, with the format of the buffers we're playing.
    let output: AVAudioOutputNode = engine.outputNode
    engine.attach(player)
    engine.connect(player, to: output, fromBus: 0, toBus: 0, format: format)

    bufferSampleRate = format.sampleRate

    // Create a serial dispatch queue for synchronizing callbacks.
    syncQueue = DispatchQueue(label: "Metronome")

    self.setTempo(120)
}
Instead of hardcoding 120, how can I pass a parameter into that Swift init from the following Objective-C code?
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    NSLog(@"Hello, Metronome!\n");

    NSError *error = nil;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryAmbient error:&error];
    if (error) {
        NSLog(@"AVAudioSession error %ld, %@", (long)error.code, error.localizedDescription);
    }

    [audioSession setActive:YES error:&error];
    if (error) {
        NSLog(@"AVAudioSession error %ld, %@", (long)error.code, error.localizedDescription);
    }

    // If media services are reset, we need to rebuild our audio chain.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(handleMediaServicesWereReset:)
                                                 name:AVAudioSessionMediaServicesWereResetNotification
                                               object:audioSession];

    metronome = [[Metronome alloc] init];
    metronome.delegate = self;
}
Thank you very much!
To add a parameter to the Swift initializer, change

override init() {
    ...
    self.setTempo(120)

to something like

init(frequency: Int) {
    ...
    self.setTempo(frequency)

This lets you call the initializer from Objective-C as

[[Metronome alloc] initWithFrequency: (your frequency)];
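For completeness, here is a minimal, self-contained sketch of that pattern with the audio setup stripped out. The tempo property and the Int parameter type are illustrative choices, not the sample's exact signatures (the sample's setTempo may take a floating-point value), and under Swift 4+ the initializer needs an explicit @objc to be visible from Objective-C:

import Foundation

class Metronome: NSObject {
    private var tempo: Int = 0   // illustrative stand-in for the sample's state

    // Marked @objc so Objective-C sees it as initWithFrequency:
    // (required by Swift 4+ @objc inference rules; implicit in Swift 3).
    @objc init(frequency: Int) {
        super.init()
        setTempo(frequency)
    }

    // Simplified stand-in for the sample's setTempo.
    func setTempo(_ bpm: Int) {
        tempo = bpm
    }
}

The viewDidLoad above would then create the metronome with, for example, metronome = [[Metronome alloc] initWithFrequency:100];. Note that once you define this designated initializer, the plain init() is no longer inherited on the Swift side, so the existing [[Metronome alloc] init] call must be updated to match.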
As for your sound problem: without more context about what you're trying to do, it's not clear what's going on, but I would try moving your setup code from viewDidLoad to viewDidAppear.
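If it helps, here is what that move looks like in a hypothetical Swift view controller (in your Objective-C view controller it is the same reordering of the viewDidLoad code):

import UIKit

class ViewController: UIViewController {
    var metronome: Metronome!

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Create and configure the metronome once the view is actually
        // on screen, rather than in viewDidLoad.
        metronome = Metronome(frequency: 100)
    }
}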