Usage for CreateSpeechRecognizerWithFileInput in Microsoft.CognitiveServices.Speech and long files
In the speech sample application there is an example for CreateSpeechRecognizerWithFileInput, but it returns after the first utterance. I did notice you can call RecognizeAsync multiple times, but it has some odd behavior:
- I get a RecognitionErrorRaised with a "NoMatch" error in the middle of the file.
- If there is a period of silence in the file, FinalResultsReceived fires with an empty result.
- There doesn't seem to be a consistent/trackable EOF event for when recognition is complete.
Is there a better way to transcribe a 20-minute audio file with the unified Speech SDK? The same file works fine under the old Oxford package. Ideally, I'd like to get time offsets for the utterances along with the transcription.
You can use StartContinuousRecognitionAsync() and StopContinuousRecognitionAsync() to start and stop recognition with the SDK.
Here is a sample:
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

namespace MicrosoftSpeechSDKSamples
{
    public class SpeechRecognitionSamples
    {
        // Speech recognition from microphone.
        public static async Task RecognitionWithMicrophoneAsync()
        {
            // <recognitionWithMicrophone>
            // Creates an instance of a speech factory with specified
            // subscription key and service region. Replace with your own subscription key
            // and service region (e.g., "westus").
            var factory = SpeechFactory.FromSubscription("YourSubscriptionKey", "westus");

            // Creates a speech recognizer using microphone as audio input. The default language is "en-us".
            using (var recognizer = factory.CreateSpeechRecognizer())
            {
                // Starts recognizing.
                Console.WriteLine("Say something...");

                // Starts recognition. It returns when the first utterance has been recognized.
                var result = await recognizer.RecognizeAsync().ConfigureAwait(false);

                // Checks result.
                if (result.RecognitionStatus != RecognitionStatus.Recognized)
                {
                    Console.WriteLine($"There was an error. Status:{result.RecognitionStatus.ToString()}, Reason:{result.RecognitionFailureReason}");
                }
                else
                {
                    Console.WriteLine($"We recognized: {result.RecognizedText}");
                }
            }
            // </recognitionWithMicrophone>
        }

        // Speech recognition in the specified spoken language.
        public static async Task RecognitionWithLanguageAsync()
        {
            // <recognitionWithLanguage>
            // Creates an instance of a speech factory with specified
            // subscription key and service region. Replace with your own subscription key
            // and service region (e.g., "westus").
            var factory = SpeechFactory.FromSubscription("YourSubscriptionKey", "westus");

            // Creates a speech recognizer for the specified language, using microphone as audio input.
            var lang = "en-us";
            using (var recognizer = factory.CreateSpeechRecognizer(lang))
            {
                // Starts recognizing.
                Console.WriteLine($"Say something in {lang} ...");

                // Starts recognition. It returns when the first utterance has been recognized.
                var result = await recognizer.RecognizeAsync().ConfigureAwait(false);

                // Checks result.
                if (result.RecognitionStatus != RecognitionStatus.Recognized)
                {
                    Console.WriteLine($"There was an error. Status:{result.RecognitionStatus.ToString()}, Reason:{result.RecognitionFailureReason}");
                }
                else
                {
                    Console.WriteLine($"We recognized: {result.RecognizedText}");
                }
            }
            // </recognitionWithLanguage>
        }

        // Speech recognition from file.
        public static async Task RecognitionWithFileAsync()
        {
            // <recognitionFromFile>
            // Creates an instance of a speech factory with specified
            // subscription key and service region. Replace with your own subscription key
            // and service region (e.g., "westus").
            var factory = SpeechFactory.FromSubscription("YourSubscriptionKey", "westus");

            // Creates a speech recognizer using file as audio input.
            // Replace with your own audio file name.
            using (var recognizer = factory.CreateSpeechRecognizerWithFileInput(@"YourAudioFile.wav"))
            {
                // Starts recognition. It returns when the first utterance is recognized.
                var result = await recognizer.RecognizeAsync().ConfigureAwait(false);

                // Checks result.
                if (result.RecognitionStatus != RecognitionStatus.Recognized)
                {
                    Console.WriteLine($"There was an error. Status:{result.RecognitionStatus.ToString()}, Reason:{result.RecognitionFailureReason}");
                }
                else
                {
                    Console.WriteLine($"We recognized: {result.RecognizedText}");
                }
            }
            // </recognitionFromFile>
        }

        // <recognitionCustomized>
        // Speech recognition using a customized model.
        public static async Task RecognitionUsingCustomizedModelAsync()
        {
            // Creates an instance of a speech factory with specified
            // subscription key and service region. Replace with your own subscription key
            // and service region (e.g., "westus").
            var factory = SpeechFactory.FromSubscription("YourSubscriptionKey", "westus");

            // Creates a speech recognizer using microphone as audio input.
            using (var recognizer = factory.CreateSpeechRecognizer())
            {
                // Replace with the CRIS deployment id of your customized model.
                recognizer.DeploymentId = "YourDeploymentId";
                Console.WriteLine("Say something...");

                // Starts recognition. It returns when the first utterance has been recognized.
                var result = await recognizer.RecognizeAsync().ConfigureAwait(false);

                // Checks results.
                if (result.RecognitionStatus != RecognitionStatus.Recognized)
                {
                    Console.WriteLine($"There was an error. Status:{result.RecognitionStatus.ToString()}, Reason:{result.RecognitionFailureReason}");
                }
                else
                {
                    Console.WriteLine($"We recognized: {result.RecognizedText}");
                }
            }
        }
        // </recognitionCustomized>

        // <recognitionContinuous>
        // Speech recognition with events.
        public static async Task ContinuousRecognitionAsync()
        {
            // Creates an instance of a speech factory with specified
            // subscription key and service region. Replace with your own subscription key
            // and service region (e.g., "westus").
            var factory = SpeechFactory.FromSubscription("YourSubscriptionKey", "westus");

            // Creates a speech recognizer using microphone as audio input.
            using (var recognizer = factory.CreateSpeechRecognizer())
            {
                // Subscribes to events.
                recognizer.IntermediateResultReceived += (s, e) => {
                    Console.WriteLine($"\n Partial result: {e.Result.RecognizedText}.");
                };
                recognizer.FinalResultReceived += (s, e) => {
                    if (e.Result.RecognitionStatus == RecognitionStatus.Recognized)
                    {
                        Console.WriteLine($"\n Final result: Status: {e.Result.RecognitionStatus.ToString()}, Text: {e.Result.RecognizedText}.");
                    }
                    else
                    {
                        Console.WriteLine($"\n Final result: Status: {e.Result.RecognitionStatus.ToString()}, FailureReason: {e.Result.RecognitionFailureReason}.");
                    }
                };
                recognizer.RecognitionErrorRaised += (s, e) => {
                    Console.WriteLine($"\n An error occurred. Status: {e.Status.ToString()}, FailureReason: {e.FailureReason}");
                };
                recognizer.OnSessionEvent += (s, e) => {
                    Console.WriteLine($"\n Session event. Event: {e.EventType.ToString()}.");
                };

                // Starts continuous recognition. Use StopContinuousRecognitionAsync() to stop recognition.
                Console.WriteLine("Say something...");
                await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);

                Console.WriteLine("Press any key to stop");
                Console.ReadKey();

                await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
            }
        }
        // </recognitionContinuous>
    }
}
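For a long file like the 20-minute recording in the question, the same event-based pattern can be driven by file input instead of the microphone, with the session-stopped event used as the end-of-file signal in place of a key press. Below is a minimal sketch against the same 0.x API as the sample above; the file name is a placeholder, and the SessionEventType.SessionStoppedEvent check reflects how the 0.x session event was typically inspected, so treat it as an assumption if your SDK version differs.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

namespace MicrosoftSpeechSDKSamples
{
    public class LongFileRecognitionSample
    {
        // Continuous recognition over a whole file; returns once the session stops at end of file.
        public static async Task ContinuousRecognitionWithFileAsync()
        {
            var factory = SpeechFactory.FromSubscription("YourSubscriptionKey", "westus");

            // Signaled when the service reports that the session has stopped (end of file).
            var stopRecognition = new TaskCompletionSource<int>();

            // Replace with your own audio file name.
            using (var recognizer = factory.CreateSpeechRecognizerWithFileInput(@"YourAudioFile.wav"))
            {
                recognizer.FinalResultReceived += (s, e) => {
                    if (e.Result.RecognitionStatus == RecognitionStatus.Recognized)
                    {
                        Console.WriteLine($"Final result: {e.Result.RecognizedText}");
                    }
                };

                recognizer.OnSessionEvent += (s, e) => {
                    // Assumption: the 0.x session event type enum exposes SessionStoppedEvent.
                    if (e.EventType == SessionEventType.SessionStoppedEvent)
                    {
                        stopRecognition.TrySetResult(0);
                    }
                };

                await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
                await stopRecognition.Task.ConfigureAwait(false);
                await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
            }
        }
    }
}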
If you have a large amount of audio, using Batch Transcription is also a good option.
Update
The SpeechFactory class was removed as a breaking change and replaced by SpeechConfig as of Speech SDK 1.0.0, as per the documentation:
The SpeechFactory class is removed. Instead, the class SpeechConfig is introduced to describe various settings of speech configuration and the class AudioConfig to describe different audio sources (microphone, file, or stream input). To create a SpeechRecognizer, use one of its constructors with SpeechConfig and AudioConfig as parameters.
The following C# code shows how to create a SpeechRecognizer using the default microphone input.
// Creates an instance of speech config with specified subscription key and service region.
var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

// Creates a speech recognizer using microphone as audio input.
using (var recognizer = new SpeechRecognizer(config))
{
    // Performs recognition.
    var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

    // Process result.
    // ...
}
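The 1.0 API also changes how a result is inspected: instead of RecognitionStatus, the result carries a Reason. As a hedged sketch of the "// Process result" step above (ProcessResult is a hypothetical helper name, assuming the usual using Microsoft.CognitiveServices.Speech; import):

// Hypothetical helper illustrating the "Process result" step (Speech SDK 1.x API).
static void ProcessResult(SpeechRecognitionResult result)
{
    if (result.Reason == ResultReason.RecognizedSpeech)
    {
        Console.WriteLine($"Recognized: {result.Text}");
    }
    else if (result.Reason == ResultReason.NoMatch)
    {
        Console.WriteLine("No speech could be recognized.");
    }
    else if (result.Reason == ResultReason.Canceled)
    {
        var cancellation = CancellationDetails.FromResult(result);
        Console.WriteLine($"Canceled: Reason={cancellation.Reason}, ErrorDetails={cancellation.ErrorDetails}");
    }
}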
The same concept is applied for creating IntentRecognizer and TranslationRecognizer, except that SpeechTranslationConfig is required for creating TranslationRecognizer.
CreateSpeechRecognizerWithFileInput() is replaced by AudioConfig. The following C# code shows how to create a speech recognizer using file input.
// Creates an instance of speech config with specified subscription key and service region.
var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

// Creates a speech recognizer using file as audio input.
// Replace with your own audio file name.
using (var audioInput = AudioConfig.FromWavFileInput(@"whatstheweatherlike.wav"))
{
    using (var recognizer = new SpeechRecognizer(config, audioInput))
    {
        // Performs recognition.
        var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

        // Process result.
        // ...
    }
}
As Ali said, StartContinuousRecognitionAsync() and StopContinuousRecognitionAsync() are the right methods if you want to recognize more than one utterance.
The latest samples for the Speech SDK are available at https://github.com/Azure-Samples/cognitive-services-speech-sdk, including samples in different languages (currently C++/C#, with new languages added as they are supported) on different platforms (currently Windows/Linux, with more platforms to be added).
Regarding question 3), the SessionStopped event is used to detect EOF. You can find a sample here: https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/Windows/csharp_samples/speech_recognition_samples.cs#L194.
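Putting the pieces together for the original scenario (a long file, a reliable EOF signal, and time offsets per utterance), here is a hedged sketch against the 1.x API. It assumes the result exposes OffsetInTicks (100-nanosecond units) and Duration, which the 1.x C# SDK provides on recognition results, and uses SessionStopped as the completion signal:

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

public class LongFileTranscription
{
    // Transcribes a whole file and prints a time offset for each utterance.
    public static async Task TranscribeFileAsync(string fileName)
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        var stopRecognition = new TaskCompletionSource<int>();

        using (var audioInput = AudioConfig.FromWavFileInput(fileName))
        using (var recognizer = new SpeechRecognizer(config, audioInput))
        {
            recognizer.Recognized += (s, e) =>
            {
                if (e.Result.Reason == ResultReason.RecognizedSpeech)
                {
                    // OffsetInTicks and Duration are relative to the start of the audio.
                    var offset = TimeSpan.FromTicks(e.Result.OffsetInTicks);
                    Console.WriteLine($"[{offset} + {e.Result.Duration}] {e.Result.Text}");
                }
            };

            recognizer.Canceled += (s, e) =>
            {
                Console.WriteLine($"Canceled: Reason={e.Reason}, ErrorDetails={e.ErrorDetails}");
                stopRecognition.TrySetResult(0);
            };

            // SessionStopped fires once the end of the file has been reached.
            recognizer.SessionStopped += (s, e) => stopRecognition.TrySetResult(0);

            await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
            await stopRecognition.Task.ConfigureAwait(false);
            await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
        }
    }
}

This transcribes the whole file in one pass, without the per-utterance RecognizeAsync restarts that caused the "NoMatch" and empty-result behavior described in the question.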
Thanks,