Recording live Audio Streams on iOS
// Fields assumed to be declared elsewhere in the class:
//   NSUrl url;  NSDictionary settings;  AVAudioRecorder recorder;  NSError error;

//Declare string for application temp path and tack on the file extension
string fileName = string.Format ("Myfile{0}.wav", DateTime.Now.ToString ("yyyyMMddHHmmss"));
string audioFilePath = Path.Combine (Path.GetTempPath (), fileName);
Console.WriteLine ("Audio File Path: " + audioFilePath);
url = NSUrl.FromFilename (audioFilePath);

//Set up the NSObject array of values that will be combined with the keys to make the NSDictionary
NSObject[] values = new NSObject[]
{
    NSNumber.FromFloat (44100.0f),                                     //Sample Rate
    NSNumber.FromInt32 ((int)AudioToolbox.AudioFormatType.LinearPCM),  //AVFormat
    NSNumber.FromInt32 (2),                                            //Channels
    NSNumber.FromInt32 (16),                                           //PCMBitDepth
    NSNumber.FromBoolean (false),                                      //IsBigEndianKey
    NSNumber.FromBoolean (false)                                       //IsFloatKey
};

//Set up the NSObject array of keys that will be combined with the values to make the NSDictionary
NSObject[] keys = new NSObject[]
{
    AVAudioSettings.AVSampleRateKey,
    AVAudioSettings.AVFormatIDKey,
    AVAudioSettings.AVNumberOfChannelsKey,
    AVAudioSettings.AVLinearPCMBitDepthKey,
    AVAudioSettings.AVLinearPCMIsBigEndianKey,
    AVAudioSettings.AVLinearPCMIsFloatKey
};

//Combine the values and keys to create the NSDictionary of recorder settings
settings = NSDictionary.FromObjectsAndKeys (values, keys);

//Create the recorder with the URL and settings
recorder = AVAudioRecorder.Create (url, new AudioSettings (settings), out error);

//Tell the recorder to prepare to record
recorder.PrepareToRecord ();
This code works fine, but how can I capture the recording straight from the microphone so that it can be streamed? I haven't been able to find anything about this online, so I'm hoping someone can help.
You are looking for buffered access to the audio stream (for recording or playback), and iOS provides that through Audio Queue Services (AVAudioRecorder is too high-level). As each audio buffer fills up, iOS calls your callback with the filled buffer from the queue; you do something with it (save it to disk, write it to a C#-based stream, hand it to a playback audio queue [speakers], etc.) and then normally put it back into the queue so it can be reused.
Something like this starts recording into a queue of audio buffers:
var recordFormat = new AudioStreamBasicDescription () {
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked,
    FramesPerPacket = 1,
    ChannelsPerFrame = 1,
    BitsPerChannel = 16,
    BytesPerPacket = 2,
    BytesPerFrame = 2,
    Reserved = 0
};

recorder = new InputAudioQueue (recordFormat);

for (int count = 0; count < BufferCount; count++) {
    IntPtr bufferPointer;
    recorder.AllocateBuffer (AudioBufferSize, out bufferPointer);
    recorder.EnqueueBuffer (bufferPointer, AudioBufferSize, null);
}

recorder.InputCompleted += HandleInputCompleted;
recorder.Start ();
In this example, assume AudioBufferSize is 8k and BufferCount is 3: as soon as the first of the three buffers is filled, our handler HandleInputCompleted is called, and recording continues because there are still 2 buffers left in the queue.
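For reference, the snippets above assume a few class-level members that are never declared in the answer; a minimal sketch, with the values taken from the example in this paragraph:

InputAudioQueue recorder;          // the input queue created in the setup code above
const int BufferCount = 3;         // number of buffers circulating in the queue
const int AudioBufferSize = 8192;  // 8k of audio data per buffer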
Our InputCompleted handler:
private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
    // We received a new buffer of audio, do something with it....
    // Some unsafe code will be required to rip the buffer...

    // Place the buffer back into the queue so iOS knows you are done with it
    recorder.EnqueueBuffer (e.IntPtrBuffer, AudioBufferSize, null);

    // At some point you need to call `recorder.Stop();` ;-)
}
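Regarding that last comment, a minimal tear-down sketch for when you are finished capturing, assuming the standard Xamarin.iOS AudioToolbox AudioQueue members (Stop(true) stops the queue immediately):

recorder.Stop (true);                            // stop pulling audio from the microphone
recorder.InputCompleted -= HandleInputCompleted; // detach the handler
recorder.Dispose ();                             // release the native audio queue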
(I removed our code from the handler, as it feeds a custom audio-to-text learning neural network; we use very small buffers in a very large queue to reduce feedback latency and so that each buffer fits into a single TCP/UDP packet for cloud processing (think Siri) ;-)
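On the single-packet-per-buffer idea: once a buffer has been copied into a managed byte[] (see the sketch after the next paragraph), pushing it out as one datagram is plain .NET socket code. A hedged sketch, where the host, port and the udpClient field are placeholders of my own and not part of the original answer:

// using System.Net.Sockets;
// Assumed field, connected once at startup:
//   UdpClient udpClient = new UdpClient ("cloud.example.com", 5000);

void SendBuffer (byte[] samples)
{
    // With AudioBufferSize kept small, each audio buffer fits into a single UDP datagram
    udpClient.Send (samples, samples.Length);
}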
In this handler you have access to the pointer to the buffer that has just been filled, via InputCompletedEventArgs.IntPtrBuffer. Using that pointer you can peek at each byte in the buffer and poke them into your C#-based Stream, if that is your goal.
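To make that concrete, here is a minimal sketch of pulling the bytes out of the filled buffer and appending them to a C# Stream. It marshals the AudioQueueBuffer struct that IntPtrBuffer points at and copies its AudioData; the audioStream field is an assumption for illustration (it could be a MemoryStream, a FileStream, or a NetworkStream feeding a server):

// using System.Runtime.InteropServices;  // for Marshal
private void HandleInputCompleted (object sender, InputCompletedEventArgs e)
{
    // IntPtrBuffer is a pointer to the native AudioQueueBuffer struct; marshal a managed copy
    var buffer = (AudioQueueBuffer) Marshal.PtrToStructure (e.IntPtrBuffer, typeof (AudioQueueBuffer));

    // Copy the raw PCM bytes out of the native buffer into managed memory
    var samples = new byte[buffer.AudioDataByteSize];
    Marshal.Copy (buffer.AudioData, samples, 0, (int) buffer.AudioDataByteSize);

    // Append them to your C#-based stream (audioStream is an assumed field)
    audioStream.Write (samples, 0, samples.Length);

    // Hand the buffer back to the queue so it can be filled again
    recorder.EnqueueBuffer (e.IntPtrBuffer, AudioBufferSize, null);
}

The same bytes could of course be handed to the UDP sender sketched earlier instead of (or in addition to) a local stream.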
Apple 有一篇关于音频队列的很棒的技术文章:https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AboutAudioQueues/AboutAudioQueues.html