Integrating the iFlytek (讯飞) SDK into Unity 5.6.0 via Android Studio

First, thanks to the authors of these posts:

http://blog.csdn.net/qq_15267341/article/details/52074225

http://blog.csdn.net/weixin_36303734/article/details/54898166

http://www.jianshu.com/p/5459aa19456a

There are many blog tutorials on integrating the iFlytek SDK into Unity3D; this is a summary of the whole process, written for absolute beginners.

Step 1: Download the speech SDK from the iFlytek website

Follow this post: http://blog.csdn.net/qq_15267341/article/details/52074225

to obtain the SDK:

Step 2: Build the Android jar

Follow this post: http://blog.csdn.net/weixin_36303734/article/details/54898166.

The Android API level used here is 24; adjust it as needed.

The workflow is as follows:

1. Create a new project in Android Studio with an EmptyActivity; this saves the step of adding entries to AndroidManifest by hand. You can also delete the layout files in the layout folder, since they are not used once imported into Unity.

2. File → New → New Module, create an Android Library. The name is up to you; here it is speechrecognizer2.

3. Copy MSC.jar from the iFlytek SDK folder into the module's libs folder. Create a jniLibs folder under main and copy the SDK's .so files into it, as shown:


4. Find classes.jar in the Unity installation directory and copy it into libs as well. Path:

\Unity\Editor\Data\PlaybackEngines\AndroidPlayer\Variations\mono\Release\Classes

5. Now link the jars into the project: File → Project Structure, select the library module created above on the left, click the + on the right and choose File dependency, then attach the jars added in steps 3 and 4 to the module, as shown:


6. Set the Android SDK and Java JDK locations in Android Studio: File → Project Structure → SDK Location.


7. With the environment set up, it is time to write the code.

The code in MainActivity is much like other examples online; here is the full source:

package com.ssm.ssm.speechrecognizer;

import android.os.Bundle;
import android.widget.Toast;

import com.iflytek.cloud.InitListener;
import com.iflytek.cloud.RecognizerListener;
import com.iflytek.cloud.RecognizerResult;
import com.iflytek.cloud.SpeechConstant;
import com.iflytek.cloud.SpeechError;
import com.iflytek.cloud.SpeechSynthesizer;
import com.iflytek.cloud.SpeechUtility;
import com.iflytek.cloud.SpeechRecognizer;
import com.iflytek.cloud.SynthesizerListener;

import com.unity3d.player.UnityPlayer;
import com.unity3d.player.UnityPlayerActivity;

import org.json.JSONArray;
import org.json.JSONObject;
import org.json.JSONTokener;

public class MainActivity extends UnityPlayerActivity {

    public SpeechRecognizer speechRecognizer;
    public SpeechSynthesizer speechSynthesizer;
    private String ttsSpeakerName = "yefang";
    private String ttsSpeakerPitch = "50";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Note: the appid here must be your own SDK appid
        SpeechUtility.createUtility(getApplicationContext(), "appid=59dccd45");

        initRecognizer();
    }

    // Initialization
    private void initRecognizer() {
        speechRecognizer = SpeechRecognizer.createRecognizer(getApplicationContext(), mInitListener);
        speechSynthesizer = SpeechSynthesizer.createSynthesizer(getApplicationContext(), mInitListener);
    }

    public InitListener mInitListener = new InitListener() {
        @Override
        public void onInit(int i) {
            UnityPlayer.UnitySendMessage("Manager", "Result", "init success!");
        }
    };

    public void setTTSSpeaker(String targetName) {
        ttsSpeakerName = targetName;
    }

    public void setTTSPitch(String targetPitch) {
        ttsSpeakerPitch = targetPitch;
    }

    public void doTTS(String ttsStr) {
        UnityPlayer.UnitySendMessage("Manager", "IsSpeaking", "doTTS");

        // Set the voice
        speechSynthesizer.setParameter(SpeechConstant.VOICE_NAME, ttsSpeakerName);
        // Set the pitch
        speechSynthesizer.setParameter(SpeechConstant.PITCH, ttsSpeakerPitch);
        // Set the volume
        speechSynthesizer.setParameter(SpeechConstant.VOLUME, "50");
        int code = speechSynthesizer.startSpeaking(ttsStr, mTTSListener);
    }

    private SynthesizerListener mTTSListener = new SynthesizerListener() {
        @Override
        public void onSpeakBegin() {
        }

        @Override
        public void onBufferProgress(int i, int i1, int i2, String s) {
        }

        @Override
        public void onSpeakPaused() {
        }

        @Override
        public void onSpeakResumed() {
        }

        @Override
        public void onSpeakProgress(int i, int i1, int i2) {
        }

        @Override
        public void onCompleted(SpeechError speechError) {
        }

        @Override
        public void onEvent(int i, int i1, int i2, Bundle bundle) {
        }
    };

    // Start dictation
    public void startSpeechListener() {

        UnityPlayer.UnitySendMessage("Manager", "Result", "");

        speechRecognizer.setParameter(SpeechConstant.DOMAIN, "iat");
        speechRecognizer.setParameter(SpeechConstant.LANGUAGE, "zh_cn");
        speechRecognizer.setParameter(SpeechConstant.ACCENT, "mandarin");
        speechRecognizer.startListening(mRecognizerListener);
    }

    public RecognizerListener mRecognizerListener = new RecognizerListener() {

        @Override
        public void onBeginOfSpeech() {
        }

        @Override
        public void onEndOfSpeech() {
            // Uncomment to restart listening automatically:
            //startSpeechListener();
            //UnityPlayer.UnitySendMessage("Manager", "SpeechEnd", "");
        }

        @Override
        public void onError(SpeechError arg0) {
        }

        @Override
        public void onEvent(int arg0, int arg1, int arg2, Bundle arg3) {
        }

        @Override
        public void onResult(RecognizerResult recognizerResult, boolean isLast) {
            printResult(recognizerResult);

            //if (isLast)
            //    startSpeechListener();
        }

        @Override
        public void onVolumeChanged(int arg0, byte[] arg1) {
        }
    };

    // Parse the recognition result
    private void printResult(RecognizerResult results) {
        String json = results.getResultString();

        StringBuffer ret = new StringBuffer();
        try {
            JSONTokener tokener = new JSONTokener(json);
            JSONObject joResult = new JSONObject(tokener);

            JSONArray words = joResult.getJSONArray("ws");
            for (int i = 0; i < words.length(); i++) {
                // For each recognized word, use the first candidate by default
                JSONArray items = words.getJSONObject(i).getJSONArray("cw");
                JSONObject obj = items.getJSONObject(0);
                ret.append(obj.getString("w"));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        // Send the parsed text to the "Result" method on the GameObject named "Manager"
        UnityPlayer.UnitySendMessage("Manager", "Result", ret.toString());
    }

    public void ShowToast(final String mStr2Show) {

        UnityPlayer.UnitySendMessage("Manager", "Result", "toast");

        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(getApplicationContext(), mStr2Show, Toast.LENGTH_LONG).show();
            }
        });
    }
}
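The printResult method above walks the iFlytek dictation JSON: a "ws" array of words, each holding a "cw" candidate list, from which the first candidate's "w" text is taken. As a language-neutral illustration, here is the same traversal in Python against a hand-written sample (the sample JSON is an assumption modeled on the fields the Java code reads, not captured SDK output):

```python
import json

def extract_text(result_json: str) -> str:
    # Mirror printResult: for each word in "ws", take the first
    # candidate in "cw" and append its "w" text.
    words = json.loads(result_json)["ws"]
    return "".join(w["cw"][0]["w"] for w in words)

# Hand-made sample shaped like a dictation ("iat") result fragment
sample = '{"ws":[{"cw":[{"w":"你好"}]},{"cw":[{"w":"世界"}]}]}'
print(extract_text(sample))  # 你好世界
```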

Pay special attention to two things:

(1) The package name here must match the Bundle Identifier Unity uses when building the APK.

(2) For the parsed speech output to arrive, the Unity scene must contain a GameObject named Manager whose script has a Result method.


Add the permissions to AndroidManifest; again, this is much like other tutorials. Source:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.ssm.ssm.speechrecognizer">

    <application
        android:allowBackup="true"
        android:label="@string/app_name"
        android:supportsRtl="true">
        <activity android:name=".MainActivity"
            android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
            <meta-data android:name="unityplayer.UnityActivity" android:value="true" />
        </activity>
    </application>

    <!-- Internet access, required for the cloud speech services -->
    <uses-permission android:name="android.permission.INTERNET"/>
    <!-- Microphone access, required for dictation, recognition and semantic understanding -->
    <uses-permission android:name="android.permission.RECORD_AUDIO"/>
    <!-- Read network state -->
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
    <!-- Read current Wi-Fi state -->
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
    <!-- Change network connectivity state -->
    <uses-permission android:name="android.permission.CHANGE_NETWORK_STATE"/>
    <!-- Read phone state -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE"/>
    <!-- Read contacts, needed when uploading a contact list -->
    <uses-permission android:name="android.permission.READ_CONTACTS"/>
    <!-- Write external storage, needed for building grammars -->
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <!-- Read external storage, needed for building grammars -->
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
    <!-- Write settings, used to record app configuration -->
    <uses-permission android:name="android.permission.WRITE_SETTINGS"/>
    <!-- Camera access, needed for taking photos -->
    <uses-permission android:name="android.permission.CAMERA" />

</manifest>

These permissions are needed for the interop with Unity; without them, warning messages appear when Unity builds the APK (shown below):


Now, to export the jar from Android Studio, add the following two snippets at the bottom of the module's build.gradle:

task makeJar(type: Copy) {
    delete 'build/libs/speechrecognizer.jar'
    from('build/intermediates/bundles/release/')
    into('build/libs/')
    include('classes.jar')
    rename('classes.jar', 'speechrecognizer.jar')
}

makeJar.dependsOn(build)

// To build the jar, run in the Terminal:
// gradlew makeJar

dependencies {
    compile fileTree(include: ['*.jar'], dir: 'libs')
    compile files('libs/classes.jar')
}


8. The jar code is now complete. To export it, run gradlew makeJar in the Terminal, as shown:


9. Once it finishes, the jar file appears under libs:


Step 3: Create the Unity project and build the APK


10. Create a new Unity project and set up the following structure under the Assets folder:


11. As in step 9, copy the jar exported from Android Studio into the bin folder. Copy MSC.jar and the .so files from the SDK into the libs folder. Note that on Android 5.0 and above you need the armeabi-v7a libraries, otherwise error 21002 occurs.

Finally, copy the AndroidManifest file into the Android folder.
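Since the screenshots are not reproduced here, the resulting Assets layout looks roughly as follows (a reconstruction from the steps above; the exact placement under Plugins/Android follows the usual Unity 5.x plugin convention, and the .so file names come from the SDK package you downloaded):

```
Assets/
└── Plugins/
    └── Android/
        ├── AndroidManifest.xml
        ├── bin/
        │   └── speechrecognizer.jar    (exported from Android Studio)
        └── libs/
            ├── MSC.jar
            └── armeabi-v7a/
                └── (the SDK's .so files)
```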


12. Create the Manager script, named XunFeiTest. Source:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class XunFeiTest : MonoBehaviour {

    private string showResult = "";
    public Text shurukuang;   // the UI Text that displays the recognition result

    // Called by the Start button: ask the Android activity to begin listening
    public void Kaishi()
    {
        AndroidJavaClass jc = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
        AndroidJavaObject jo = jc.GetStatic<AndroidJavaObject>("currentActivity");
        jo.Call("startSpeechListener");
    }

    // Called from Java via UnitySendMessage("Manager", "Result", ...)
    public void Result(string recognizerResult)
    {
        showResult += recognizerResult;
        showResult += '\n';
        shurukuang.text = showResult;
    }
}
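The Java activity also exposes doTTS, setTTSSpeaker and setTTSPitch, which this script never calls. They can be reached through the same bridge pattern as Kaishi; a sketch (the method names come from the MainActivity source above, while the Speak wrapper name is made up here):

```csharp
public void Speak(string text)
{
    // Same pattern as Kaishi: fetch the current UnityPlayerActivity
    // and invoke the methods defined in MainActivity.
    AndroidJavaClass jc = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
    AndroidJavaObject jo = jc.GetStatic<AndroidJavaObject>("currentActivity");
    jo.Call("setTTSSpeaker", "yefang");  // voice name; default from the Java side
    jo.Call("doTTS", text);
}
```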


13. Create a Button to start recording and a Text to display results. Rename the Main Camera to Manager and drag the script above onto the Manager object.

Hook the scene's Result Text up to the script's Text field.

Wire the button's OnClick to the script's Kaishi method.


14. The scene and script are now configured, so it is time to build the APK: File → Build Settings.


15. Click Build, choose the output location and name, and the installable APK is generated. Screenshots from the phone:


posted @ 2017-10-15 14:30  fullnamefull