Async wait in a speech recognizer

I am trying to reproduce this example with the following changes:

  1. Using a console application instead of a Windows application: this looks OK, since the computer does speak to me.

  2. Using the Sync functionality: it looks like I got this wrong.

Update: after the program runs, it speaks to me and waits for a key to be pressed; after that it seems to be 'listening', but sre_SpeechRecognized is never executed.

Here is my code, thanks:

using System;
using System.Threading.Tasks;
using System.Speech.Synthesis;
using System.Speech.Recognition;

class Startup {
    // Create a simple handler for the SpeechRecognized event
    static void sre_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        string speech = e.Result.Text;

        // handle custom commands
        switch (speech)
        {
            case "red":
                Console.WriteLine("Hello");
                break;
            case "green":
                System.Diagnostics.Process.Start("Notepad");
                break;
            case "blue":
                Console.WriteLine("You said blue");
                break;
            case "Close":
               Console.WriteLine("Speech recognized: {0}", e.Result.Text);
                break;
        }
        Console.WriteLine("Speech recognized: {0}", e.Result.Text);
    }

    public async Task<object> Invoke(dynamic i) {
        // Initialize a new instance of the SpeechSynthesizer.
        SpeechSynthesizer synth = new SpeechSynthesizer();

        // Configure the audio output. 
        synth.SetOutputToDefaultAudioDevice();

        // Speak a string.
        synth.Speak("This example demonstrates a basic use of Speech Synthesizer");

        Console.WriteLine();
        Console.WriteLine("Press any key to exit...");
        Console.ReadKey();

        // Create a new SpeechRecognitionEngine instance.
        SpeechRecognizer recognizer = new SpeechRecognizer();

        // Create a simple grammar that recognizes "red", "green", or "blue".
        Choices colors = new Choices();
        colors.Add(new string[] { "red", "green", "blue" });

        // Create a GrammarBuilder object and append the Choices object.
        GrammarBuilder gb = new GrammarBuilder();
        gb.Append(colors);

        // Create the Grammar instance and load it into the speech recognition engine.
        Grammar g = new Grammar(gb);
        recognizer.LoadGrammar(g);

        // Register a handler for the SpeechRecognized event.
        recognizer.SpeechRecognized +=
            new EventHandler<SpeechRecognizedEventArgs>(Startup.sre_SpeechRecognized);
        Console.WriteLine("Exiting now..");
        return null;
    }
}

You are not starting the recognition. Have a look at the link you posted: in that example, after the event is registered there is a line sre.Recognize(); that is missing from your code. A RecognizeAsync() method is also mentioned there, and that is probably what you want.
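
A minimal sketch of that change, assuming the rest of the original Invoke (grammar setup, using directives) is kept but switched to the in-process SpeechRecognitionEngine (the shared SpeechRecognizer class does not expose a Recognize() method), with g being the Grammar already built in the question's code:

    // Minimal sketch: keep the grammar setup from the question, but use the
    // in-process engine and actually start a recognition pass.
    SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
    recognizer.SetInputToDefaultAudioDevice();
    recognizer.LoadGrammar(g);                      // the Grammar already built from the Choices
    recognizer.SpeechRecognized += Startup.sre_SpeechRecognized;

    // Without one of these calls the SpeechRecognized handler never runs.
    recognizer.Recognize();                         // blocking, single recognition
    // or: recognizer.RecognizeAsync(RecognizeMode.Multiple);   // non-blocking, keeps listening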

Modify the Invoke method as below (this is the typical case of an asynchronous caller, here Node.js, waiting for a synchronous event to complete).

Important details (note that this modification assumes the speech engine itself works as expected):

  1. Make the Invoke method behave synchronously rather than asynchronously, since the original code contains no async calls.
  2. Replace the plain return value with a task (via a TaskCompletionSource) so the event's result can be captured.
  3. Make the event handler an inline lambda so it can use local objects easily.
  4. Finally, add the synchronous Recognize call.
  5. The method returns once the task completes after the event has fired; the recognized result is wrapped in the task and can be read through its Result property (a small caller sketch follows the code below).

    public async Task<object> Invoke(dynamic i) {   // async here is required because this method is used by Edge.js, a Node.js module that enables communicating with C# files
        var tcs = new TaskCompletionSource<object>();
      // Initialize a new instance of the SpeechSynthesizer.
        SpeechSynthesizer synth = new SpeechSynthesizer();
    
        // Configure the audio output. 
        synth.SetOutputToDefaultAudioDevice();
    
        // Speak a string.
        synth.Speak("This example demonstrates a basic use of Speech Synthesizer");
    
        Console.WriteLine();
        Console.WriteLine("Press any key to exit...");
        Console.ReadKey();
    
        // Create a new SpeechRecognitionEngine instance.
    
        SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
    
        recognizer.SetInputToDefaultAudioDevice();
    
        // Create a simple grammar that recognizes "red", "green", or "blue".
        Choices colors = new Choices();
        colors.Add(new string[] { "red", "green", "blue" });
    
        // Create a GrammarBuilder object and append the Choices object.
        GrammarBuilder gb = new GrammarBuilder();
        gb.Append(colors);
    
        // Create the Grammar instance and load it into the speech recognition engine.
        Grammar g = new Grammar(gb);
        recognizer.LoadGrammar(g);
    
        // Register a handler for the SpeechRecognized event.
        recognizer.SpeechRecognized += (sender, e) => {

            string speech = e.Result.Text;

            // handle custom commands
            switch (speech)
            {
                case "red":
                    tcs.SetResult("Hello Red");
                    break;
                case "green":
                    tcs.SetResult("Hello Green");
                    break;
                case "blue":
                    tcs.SetResult("Hello Blue");
                    break;
                case "Close":
                    tcs.SetResult("Hello Close");
                    break;
                default:
                    tcs.SetResult("Hello Not Sure");
                    break;
            }
        };
    
        // For Edge.js we cannot await an async call here (doing so leads to an error)
        recognizer.Recognize();
        return tcs.Task.Result;
    
        //// For pure C# (SpeechRecognitionEngine.RecognizeAsync() returns void, so await the TaskCompletionSource's task instead)
        // recognizer.RecognizeAsync();
        // return await tcs.Task;
    }
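
As a side note on item 5 above, here is a small, purely hypothetical C# caller (in the real setup Edge.js is the caller, and Main is just a placeholder) showing how the returned Task<object> would be consumed:

    // Hypothetical caller, only to illustrate consuming the returned task.
    static async Task Main()
    {
        var startup = new Startup();
        object result = await startup.Invoke(null);   // completes once the event has set a result
        Console.WriteLine(result);                    // e.g. "Hello Red"

        // The blocking equivalent reads the Result property instead of awaiting:
        // object blockingResult = startup.Invoke(null).Result;
    }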
    

Async-specific changes

  1. public async Task<object> Invoke(dynamic i) (make the method async and its return type a Task, as required for async methods)
  2. Call recognizer.RecognizeAsync(); to start recognition asynchronously
  3. return await tcs.Task; (the return type needs to be a Task; awaiting the TaskCompletionSource's task yields the recognized result, see the sketch after this list)
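
Putting these changes together, a sketch of the pure-C# variant could look like the following, assuming the same using directives and grammar setup as above. Note that System.Speech's SpeechRecognitionEngine.RecognizeAsync() is void-returning, so the asynchronous wait happens on the TaskCompletionSource's task rather than on the call itself:

    public async Task<object> Invoke(dynamic i)
    {
        var tcs = new TaskCompletionSource<object>();

        SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
        recognizer.SetInputToDefaultAudioDevice();

        // Same "red/green/blue" grammar as above.
        Choices colors = new Choices();
        colors.Add(new string[] { "red", "green", "blue" });
        GrammarBuilder gb = new GrammarBuilder();
        gb.Append(colors);
        recognizer.LoadGrammar(new Grammar(gb));

        // Complete the task from the event, as in the Edge.js version.
        recognizer.SpeechRecognized += (sender, e) => tcs.TrySetResult("Hello " + e.Result.Text);

        // RecognizeAsync starts listening and returns immediately (it is not awaitable);
        // the asynchronous wait happens on the TaskCompletionSource's task.
        recognizer.RecognizeAsync(RecognizeMode.Single);

        return await tcs.Task;
    }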