(3) HoloLens Unity Development: Voice Recognition

**Based on the official documentation: Voice input in Unity**

Part of these notes is a direct translation of the official documentation; passages where interpretations may differ, and some that are fairly straightforward, are kept in the original English.


HoloLens has three main input systems: gaze, gestures, and voice. This article covers voice input. (In my tests, Chinese voice input is not supported.)

I. Overview

Unity exposes three ways of handling voice input (all of the types below live in the UnityEngine.Windows.Speech namespace):

  • Phrase Recognition
    * KeywordRecognizer: recognizes single keywords
    * GrammarRecognizer: recognizes phrases defined by an SRGS grammar

  • Dictation Recognition
    * DictationRecognizer: converts free-form speech into text

Note: KeywordRecognizer and GrammarRecognizer recognize actively: once you call their start method they stay in an active state and begin recognizing. DictationRecognizer, on the other hand, just keeps listening quietly to voice input once it has been started, and fires the registered callbacks as soon as it recognizes speech.

II. Enabling the Microphone Capability in Unity

Here is how the official documentation explains enabling the Microphone capability, screenshot included:

The Microphone capability must be declared for an app to leverage Voice input.

  1. In the Unity Editor, go to the player settings by navigating to "Edit > Project Settings > Player"
  2. Click on the "Windows Store" tab
  3. In the "Publishing Settings > Capabilities" section, check the Microphone capability

III. Phrase Recognition

To enable your app to listen for specific phrases spoken by the user then take some action, you need to:

  1. Specify which phrases to listen for using a KeywordRecognizer or GrammarRecognizer
  2. Handle the OnPhraseRecognized event and take action corresponding to the phrase recognized

1. Keyword recognition (demo code)

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;
using System.Linq;

public class VoiceInputDemo : MonoBehaviour {

    public Material yellow;
    public Material red;
    public Material blue;
    public Material green;

    /// <summary>
    /// The keyword recognizer
    /// </summary>
    private KeywordRecognizer keywordRecognizer;

    /// <summary>
    /// Maps each keyword to the action to run when it is heard
    /// </summary>
    private Dictionary<string, System.Action> keywords = new Dictionary<string, System.Action>();
    // Use this for initialization
    void Start () {

        // Add the keywords to the dictionary: the key is the keyword, the value is an anonymous Action to run
        keywords.Add("yellow", () =>
        {
            Debug.Log("听到了 yellow");
            transform.GetComponent<MeshRenderer>().material = yellow;
        });

        keywords.Add("red", () =>
        {
            Debug.Log("听到了 red");
            transform.GetComponent<MeshRenderer>().material = red;
        });

        keywords.Add("green", () =>
        {
            Debug.Log("听到了 green");
            transform.GetComponent<MeshRenderer>().material = green;
        });

        keywords.Add("blue", () =>
        {
            Debug.Log("听到了 blue");
            transform.GetComponent<MeshRenderer>().material = blue;
        });

        // Create the keyword recognizer from the dictionary keys
        keywordRecognizer = new KeywordRecognizer(keywords.Keys.ToArray());

        // Subscribe to the phrase-recognized event
        keywordRecognizer.OnPhraseRecognized += KeywordRecognizer_OnPhraseRecognized;

        // Important: Start() must be called, otherwise nothing is recognized
        keywordRecognizer.Start();
    }

    private void KeywordRecognizer_OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {

        System.Action keywordAction;
        // If the recognized keyword is in our dictionary, call that Action
        if (keywords.TryGetValue(args.text, out keywordAction))
        {
            Debug.Log("听到了,进入了事件方法  关键词语 : " + args.text.ToString());

            // Run the corresponding action
            keywordAction.Invoke();
        }
    }

}
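
To try the demo: attach the script to the object whose material should change, assign the four materials in the Inspector, run the app, and say "yellow", "red", "green" or "blue".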

2. Grammar recognition with GrammarRecognizer

According to the official documentation, you create an SRGS XML grammar file and place it under the StreamingAssets folder. I had no need for English grammar input myself, so I did not build a working example; if you are interested, see https://msdn.microsoft.com/en-us/library/hh378349(v=office.14).aspx for the official explanation of SRGS.

Below are the relevant snippets from the official documentation:
Once you have your SRGS grammar, and it is in your project in a StreamingAssets folder:

<PROJECT_ROOT>/Assets/StreamingAssets/SRGS/myGrammar.xml

Create a GrammarRecognizer and pass it the path to your SRGS file:

private GrammarRecognizer grammarRecognizer;
grammarRecognizer = new GrammarRecognizer(Application.streamingAssetsPath + "/SRGS/myGrammar.xml");

Now register for the OnPhraseRecognized event

grammarRecognizer.OnPhraseRecognized += Grammar_OnPhraseRecognized;

You will get a callback containing information specified in your SRGS grammar which you can handle appropriately. Most of the important information will be provided in the semanticMeanings array.

private void Grammar_OnPhraseRecognized(PhraseRecognizedEventArgs args)
{
    SemanticMeaning[] meanings = args.semanticMeanings;
    // do something
}

Finally, start recognizing!

grammarRecognizer.Start();
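
Pulling those official fragments together, a minimal MonoBehaviour could look like the sketch below. This is my own assembly of the snippets above, assuming a grammar file already exists at Assets/StreamingAssets/SRGS/myGrammar.xml and defines some semantic tags:

using UnityEngine;
using UnityEngine.Windows.Speech;

public class GrammarInputDemo : MonoBehaviour
{
    private GrammarRecognizer grammarRecognizer;

    void Start()
    {
        // Load the SRGS grammar from StreamingAssets and start listening
        grammarRecognizer = new GrammarRecognizer(Application.streamingAssetsPath + "/SRGS/myGrammar.xml");
        grammarRecognizer.OnPhraseRecognized += Grammar_OnPhraseRecognized;
        grammarRecognizer.Start();
    }

    private void Grammar_OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // The semantic meanings defined in the SRGS grammar arrive here
        foreach (SemanticMeaning meaning in args.semanticMeanings)
        {
            Debug.Log("Key: " + meaning.key + " Values: " + string.Join(", ", meaning.values));
        }
    }

    void OnDestroy()
    {
        if (grammarRecognizer != null)
        {
            grammarRecognizer.OnPhraseRecognized -= Grammar_OnPhraseRecognized;
            grammarRecognizer.Dispose();
        }
    }
}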

IV. Dictation

1. Overview

DictationRecognizer converts voice input into text. Using it involves three steps:

  1. Create a DictationRecognizer object
  2. Register for the dictation events
  3. Start recognizing

2. Enabling the Internet Client capability

The "Internet Client" capability, in addition to the "Microphone" capability mentioned above, must be declared for an app to leverage dictation.

  1. In the Unity Editor, go to the player settings by navigating to "Edit > Project Settings > Player" page
  2. Click on the "Windows Store" tab
  3. In the "Publishing Settings > Capabilities" section, check the InternetClient capability

Here is the corresponding screenshot from Unity:
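
As with the Microphone capability, this can also be enabled from an editor script if you prefer; another small sketch using Unity's editor API (again my own helper, not from the original post):

using UnityEditor;

// Editor-only helper: checks the UWP InternetClient capability required for dictation.
public static class DictationCapabilityMenu
{
    [MenuItem("Tools/Enable InternetClient Capability")]
    public static void EnableInternetClient()
    {
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.InternetClient, true);
    }
}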

3. Demo code

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceDictationDemo : MonoBehaviour
{

    private DictationRecognizer dictationRecognizer;

    // Use this for initialization
    void Start()
    {

        // Create the dictation recognizer
        dictationRecognizer = new DictationRecognizer();

        // Fired when a phrase has been recognized and finalized
        dictationRecognizer.DictationResult += DictationRecognizer_DictationResult;
        // Fired when the recognizer stops (completion, timeout, or error)
        dictationRecognizer.DictationComplete += DictationRecognizer_DictationComplete;
        // Fired when an error occurs
        dictationRecognizer.DictationError += DictationRecognizer_DictationError;
        // Fired continuously with the current best guess while the user is speaking
        dictationRecognizer.DictationHypothesis += DictationRecognizer_DictationHypothesis;

        dictationRecognizer.Start();
    }

    private void DictationRecognizer_DictationHypothesis(string text)
    {
        // Interim "best guess" while the user is still speaking
        Debug.Log("Hypothesis: " + text);
    }

    private void DictationRecognizer_DictationError(string error, int hresult)
    {
        Debug.Log("Dictation error: " + error + " HResult: " + hresult);
    }

    private void DictationRecognizer_DictationComplete(DictationCompletionCause cause)
    {
        Debug.Log("Dictation completed, cause: " + cause);

        // The recognizer stops by itself (for example after a silence timeout),
        // so restart it here to keep listening. Do not call Start() from the
        // other callbacks: the recognizer is still running at that point.
        dictationRecognizer.Start();
    }

    private void DictationRecognizer_DictationResult(string text, ConfidenceLevel confidence)
    {
        Debug.Log("Result: " + text + " confidence: " + confidence);
    }

    void OnDestroy()
    {
        // Unsubscribe from the events before disposing of the recognizer
        dictationRecognizer.DictationResult -= DictationRecognizer_DictationResult;
        dictationRecognizer.DictationComplete -= DictationRecognizer_DictationComplete;
        dictationRecognizer.DictationHypothesis -= DictationRecognizer_DictationHypothesis;
        dictationRecognizer.DictationError -= DictationRecognizer_DictationError;
        dictationRecognizer.Dispose();
    }

}
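
Note that dictation does not run indefinitely: the recognizer stops on its own after a period of silence, which is why the demo restarts it in DictationComplete. If you need different behavior, DictationRecognizer also exposes InitialSilenceTimeoutSeconds and AutoSilenceTimeoutSeconds properties to tune those timeouts.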

I tested it with short English-language videos from Youdao and the recognition rate was roughly 98% or higher. Microsoft has done a pretty good job here.

V. Using Phrase Recognition and Dictation Together (translated from the documentation)

If you want to use both phrase recognition and dictation in your app, you'll need to fully shut one down before you can start the other. If you have multiple KeywordRecognizers running, you can shut them all down at once with:

PhraseRecognitionSystem.Shutdown();

In order to restore all recognizers to their previous state, after the DictationRecognizer has stopped, you can call:

PhraseRecognitionSystem.Restart();

You could also just start a KeywordRecognizer, which will restart the PhraseRecognitionSystem as well.
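
To make that hand-off concrete, here is a small sketch of switching from keyword recognition over to dictation and back. The structure and names are my own; only PhraseRecognitionSystem.Shutdown(), PhraseRecognitionSystem.Restart() and the DictationRecognizer calls come from the documentation above:

using UnityEngine;
using UnityEngine.Windows.Speech;

public class SpeechModeSwitcher : MonoBehaviour
{
    private DictationRecognizer dictationRecognizer;

    // Switch from keyword/grammar recognition to dictation
    public void StartDictation()
    {
        // Shut down every KeywordRecognizer/GrammarRecognizer first
        PhraseRecognitionSystem.Shutdown();

        dictationRecognizer = new DictationRecognizer();
        dictationRecognizer.DictationResult += OnDictationResult;
        dictationRecognizer.Start();
    }

    // Switch back to phrase recognition
    public void StopDictation()
    {
        if (dictationRecognizer != null)
        {
            dictationRecognizer.DictationResult -= OnDictationResult;
            dictationRecognizer.Stop();
            dictationRecognizer.Dispose();
            dictationRecognizer = null;
        }

        // Restore the previously registered keyword/grammar recognizers.
        // In a real app you would wait for DictationComplete before calling this.
        PhraseRecognitionSystem.Restart();
    }

    private void OnDictationResult(string text, ConfidenceLevel confidence)
    {
        Debug.Log("Dictation: " + text);
    }
}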
