Android Audio Code Analysis 1 - AudioTrack Usage Example
The plan is to analyze the Audio-related source code starting from how its interfaces are used.
The code here is test code that ships with Android.
Out of laziness, I don't intend to cover every function in full. Key functions will be savored in detail; functions I consider non-essential will get only a passing mention.
What counts as key or non-essential is judged from the needs of my current project.
*****************************************Source*************************************************
***********************************************************************************************
Source path:
frameworks\base\media\tests\mediaframeworktest\src\com\android\mediaframeworktest\functional\MediaAudioTrackTest.java
###########################################Notes##############################################################
1. TEST_NAME needs no explanation.
2. TEST_SR is the first parameter of AudioTrack.getMinBufferSize.
The comment on this parameter reads:
the sample rate expressed in Hertz.
AudioTrack.getMinBufferSize will be savored in detail later, so I won't dwell on it here.
3. TEST_CONF is the second parameter of AudioTrack.getMinBufferSize.
The comment on this parameter reads:
describes the configuration of the audio channels.
* See {@link AudioFormat#CHANNEL_OUT_MONO} and
* {@link AudioFormat#CHANNEL_OUT_STEREO}
We can see it is assigned AudioFormat.CHANNEL_OUT_MONO, so let's first talk about AudioFormat.
The English comment on the AudioFormat class is as follows:
/**
* The AudioFormat class is used to access a number of audio format and
* channel configuration constants. They are for instance used
* in {@link AudioTrack} and {@link AudioRecord}.
*
*/
Looking at its contents, it mainly holds the channel definitions for the various track and record cases, plus some format definitions.
Since we are about to create an AudioTrack here, the CHANNEL_OUT_* constants are the usable channel types.
As the following comment makes clear, these Channel definitions must stay consistent with include/media/AudioSystem.h:
// Channel mask definitions must be kept in sync with native values in include/media/AudioSystem.h
4. TEST_FORMAT is the third parameter of AudioTrack.getMinBufferSize.
The comment on this parameter reads:
the format in which the audio data is represented.
* See {@link AudioFormat#ENCODING_PCM_16BIT} and
* {@link AudioFormat#ENCODING_PCM_8BIT}
It is assigned AudioFormat.ENCODING_PCM_16BIT, again defined in the AudioFormat class.
The available formats are:
/** Audio data format: PCM 16 bit per sample */
public static final int ENCODING_PCM_16BIT = 2; // accessed by native code
/** Audio data format: PCM 8 bit per sample */
public static final int ENCODING_PCM_8BIT = 3; // accessed by native code
5. TEST_MODE is the sixth parameter of the AudioTrack constructor.
The comment on this parameter reads:
streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
It is assigned AudioTrack.MODE_STREAM, a constant defined in the AudioTrack class.
There are two available modes:
/**
* Creation mode where audio data is transferred from Java to the native layer
* only once before the audio starts playing.
*/
public static final int MODE_STATIC = 0;
/**
* Creation mode where audio data is streamed from Java to the native layer
* as the audio is playing.
*/
public static final int MODE_STREAM = 1;
Looking at the AudioTrack class comment, most of it is about the difference between MODE_STATIC and MODE_STREAM.
The comment reads:
/**
* The AudioTrack class manages and plays a single audio resource for Java applications.
* It allows to stream PCM audio buffers to the audio hardware for playback. This is
* achieved by "pushing" the data to the AudioTrack object using one of the
* {@link #write(byte[], int, int)} and {@link #write(short[], int, int)} methods.
*
* <p>An AudioTrack instance can operate under two modes: static or streaming.<br>
* In Streaming mode, the application writes a continuous stream of data to the AudioTrack, using
* one of the write() methods. These are blocking and return when the data has been transferred
* from the Java layer to the native layer and queued for playback. The streaming mode
* is most useful when playing blocks of audio data that for instance are:
* <ul>
* <li>too big to fit in memory because of the duration of the sound to play,</li>
* <li>too big to fit in memory because of the characteristics of the audio data
* (high sampling rate, bits per sample ...)</li>
* <li>received or generated while previously queued audio is playing.</li>
* </ul>
* The static mode is to be chosen when dealing with short sounds that fit in memory and
* that need to be played with the smallest latency possible. AudioTrack instances in static mode
* can play the sound without the need to transfer the audio data from Java to native layer
* each time the sound is to be played. The static mode will therefore be preferred for UI and
* game sounds that are played often, and with the smallest overhead possible.
*
* <p>Upon creation, an AudioTrack object initializes its associated audio buffer.
* The size of this buffer, specified during the construction, determines how long an AudioTrack
* can play before running out of data.<br>
* For an AudioTrack using the static mode, this size is the maximum size of the sound that can
* be played from it.<br>
* For the streaming mode, data will be written to the hardware in chunks of
* sizes inferior to the total buffer size.
*/
The gist is:
MODE_STREAM works in a streaming fashion: as the file plays, data keeps flowing from the Java layer to the Native layer.
This mode suits audio that is fairly large and has no latency requirement.
MODE_STATIC transfers the data from the Java layer to the Native layer once, up front.
This mode suits audio with a small amount of data (it must sit in memory, so memory consumption matters) and a latency requirement.
For the full details, read the English comment above carefully.
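To make the contrast concrete, here is a minimal sketch of the two usage patterns. It is not taken from the test file; the track variables and the helper methods are illustrative stand-ins.
// Minimal sketch of the two modes (illustrative, not from the test code).
// MODE_STATIC: push the whole clip to native once, then just play it.
byte[] clip = loadShortSound();            // hypothetical helper
staticTrack.write(clip, 0, clip.length);   // one-time Java-to-native transfer
staticTrack.play();                        // low-latency replay afterwards

// MODE_STREAM: start playback, then keep feeding chunks as the buffer drains.
streamTrack.play();
while (hasMoreAudio()) {                   // hypothetical helper
    byte[] chunk = nextChunk();            // hypothetical helper
    streamTrack.write(chunk, 0, chunk.length);  // blocks until queued
}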
6. TEST_STREAM_TYPE is the first parameter of the AudioTrack constructor.
The comment on this parameter reads:
the type of the audio stream. See
* {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM},
* {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC} and
* {@link AudioManager#STREAM_ALARM}
It is assigned AudioManager.STREAM_MUSIC, a constant defined in the AudioManager class.
There are ten usable stream types; since the listing did not survive here, a reconstruction follows below.
The comment on the AudioManager class reads:
AudioManager provides access to volume and ringer mode control.
The stream-type values all come from the AudioSystem class, where the relevant definitions live.
From the comments there, these definitions must be kept correctly associated with the Native layer.
Moreover, if they change, Settings.System.VOLUME_SETTINGS and attrs.xml must be updated as well.
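Here is my reconstruction of those ten stream types, with values as I recall them from the AOSP sources of this era; treat the exact numbers and @hide markers as assumptions and verify against your own tree.
// Stream types (reconstructed from memory of AudioManager/AudioSystem; verify).
public static final int STREAM_VOICE_CALL      = 0;  // phone calls
public static final int STREAM_SYSTEM          = 1;  // system sounds
public static final int STREAM_RING            = 2;  // phone ring
public static final int STREAM_MUSIC           = 3;  // music playback
public static final int STREAM_ALARM           = 4;  // alarms
public static final int STREAM_NOTIFICATION    = 5;  // notifications
public static final int STREAM_BLUETOOTH_SCO   = 6;  // @hide, bluetooth calls
public static final int STREAM_SYSTEM_ENFORCED = 7;  // @hide, enforced sounds
public static final int STREAM_DTMF            = 8;  // @hide, DTMF tones
public static final int STREAM_TTS             = 9;  // @hide, text-to-speech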
7. Now for the code:
int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
As the name says, this obtains the minimum buffer size. In other words: "if you want me to work properly, give me at least this much buffer."
That requirement is derived from the sample rate, the channel count, and the sample size (8-bit or 16-bit).
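One note of my own, not in the test: per the @return comment quoted in point 8 below, the result can be an error code, so it should be checked before use.
int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
if (minBuffSize == AudioTrack.ERROR_BAD_VALUE || minBuffSize == AudioTrack.ERROR) {
    // Invalid parameters, or the hardware output properties could not be queried.
    throw new IllegalStateException("getMinBufferSize failed: " + minBuffSize);
}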
8. Next, an AudioTrack object is created:
AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
2*minBuffSize, TEST_MODE);
Among the parameters, TEST_SR, TEST_CONF, and TEST_FORMAT are the same ones passed to AudioTrack.getMinBufferSize.
TEST_STREAM_TYPE is the stream type.
minBuffSize is the minimum buffer size requested above. But why is it multiplied by 2 here???
Looking at the comment in the AudioTrack constructor:
* @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read
* from for playback. If using the AudioTrack in streaming mode, you can write data into
* this buffer in smaller chunks than this size. If using the AudioTrack in static mode,
* this is the maximum size of the sound that will be played for this instance.
* See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size
* for the successful creation of an AudioTrack instance in streaming mode. Using values
* smaller than getMinBufferSize() will result in an initialization failure.
Still not much the wiser.
Let's also look at the comment on getMinBufferSize:
* @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,
* or {@link #ERROR} if the implementation was unable to query the hardware for its output
* properties,
* or the minimum buffer size expressed in bytes.
getMinBufferSize returns its value in bytes, and the constructor parameter is also in bytes; what's more, the very next statement:
byte data[] = new byte[minBuffSize];
allocates a buffer of exactly minBuffSize bytes.
So why multiply by 2???
The AudioTrack constructor performs a buffer size check:
audioBuffSizeCheck(bufferSizeInBytes);
The comment on audioBuffSizeCheck reads:
// Convenience method for the contructor's audio buffer size check.
// preconditions:
// mChannelCount is valid
// mAudioFormat is valid
// postcondition:
// mNativeBufferSizeInBytes is valid (multiple of frame size, positive)
So the buffer size must be positive and a whole multiple of the frame size.
What exactly is a frame? Look at the implementation of audioBuffSizeCheck (a sketch follows below):
Our channel is AudioFormat.CHANNEL_OUT_MONO, so mChannelCount is 1, and mAudioFormat is 2 (ENCODING_PCM_16BIT), so frameSizeInBytes equals 2.
If audioBufferSize is not a whole multiple of 2 (frameSizeInBytes), an exception is thrown!!!
Good grief!!!
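Since the body of audioBuffSizeCheck is not reproduced above, here is a sketch of the check as just described; the names are paraphrased from the framework code rather than copied verbatim.
// Paraphrased sketch of AudioTrack.audioBuffSizeCheck() for our parameters.
int audioBufferSize = 2 * minBuffSize;   // the bufferSizeInBytes constructor argument
int mChannelCount = 1;                   // AudioFormat.CHANNEL_OUT_MONO
int bytesPerSample = 2;                  // AudioFormat.ENCODING_PCM_16BIT
int frameSizeInBytes = mChannelCount * bytesPerSample;   // = 2 here
if ((audioBufferSize % frameSizeInBytes != 0) || (audioBufferSize < 1)) {
    // Not frame-aligned, or not positive: construction fails.
    throw new IllegalArgumentException("Invalid audio buffer size.");
}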
9. Allocating the buffer:
byte data[] = new byte[minBuffSize];
As we can see, the Java-side buffer is still only minBuffSize bytes.
The doubled value is what gets passed down to the Native layer.
In other words, the point is to guarantee that the buffer size in Native is a whole multiple of the frame size.
10. Next comes a state check:
assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
Media operations on Android involve a notion of state.
That is, from a given state you can only transition to one or more specific states; an operation is valid only in the right state, and otherwise leads to an error.
The comment on getState:
/**
* Returns the state of the AudioTrack instance. This is useful after the
* AudioTrack instance has been created to check if it was initialized
* properly. This ensures that the appropriate hardware resources have been
* acquired.
* @see #STATE_INITIALIZED
* @see #STATE_NO_STATIC_DATA
* @see #STATE_UNINITIALIZED
*/
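In application code the same check can be made defensively; a minimal sketch of my own, mirroring the assumeTrue above:
if (track.getState() != AudioTrack.STATE_INITIALIZED) {
    // Construction failed to acquire the needed resources; release what we hold.
    track.release();
    throw new IllegalStateException("AudioTrack failed to initialize");
}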
11. Now the data is written:
assertTrue(TEST_NAME,
track.write(data, 0, data.length) == data.length);
The write function will be savored in detail later, so I won't dwell on it here.
Its comment follows, to give a rough first impression:
/**
* Writes the audio data to the audio hardware for playback.
* @param audioData the array that holds the data to play.
* @param offsetInBytes the offset expressed in bytes in audioData where the data to play
* starts.
* @param sizeInBytes the number of bytes to read in audioData after the offset.
* @return the number of bytes that were written or {@link #ERROR_INVALID_OPERATION}
* if the object wasn't properly initialized, or {@link #ERROR_BAD_VALUE} if
* the parameters don't resolve to valid data and indexes.
*/
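The test writes a single buffer, but in MODE_STREAM the usual pattern is a feed loop. A hedged sketch of that pattern (pcmSource is a hypothetical InputStream of raw PCM matching the track's format, not part of the test):
// Typical MODE_STREAM feed loop (sketch, not from the test code).
byte[] buffer = new byte[minBuffSize];
track.play();
int read;
while ((read = pcmSource.read(buffer)) > 0) {
    int written = track.write(buffer, 0, read);   // blocks until queued
    if (written < 0) {
        break;   // ERROR_INVALID_OPERATION or ERROR_BAD_VALUE
    }
}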
12. The final step:
track.release();
Its implementation:
It first calls its own stop function, then calls down to the native_release function in the native layer.
The implementation of stop:
It first checks the state, then calls down to the native_stop function in the native layer.
How does a call get from the Java layer down to the native layer? Through the JNI mechanism.
I won't introduce the JNI mechanism here.
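For orientation, the Java side of that hop looks roughly like this in AudioTrack.java; this is a paraphrased sketch, and the exact modifiers and signatures may differ by version.
// Native hooks declared on the Java side (paraphrased from AudioTrack.java).
private native final void native_stop();
private native final void native_release();
// On the C++ side, a JNINativeMethod table in android_media_AudioTrack.cpp
// maps "native_stop" to android_media_AudioTrack_stop and
// "native_release" to android_media_AudioTrack_native_release, as shown next.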
Both of the native functions mentioned above are bound in the file frameworks\base\core\jni\android_media_AudioTrack.cpp:
native_stop corresponds to android_media_AudioTrack_stop.
native_release corresponds to android_media_AudioTrack_native_release.
The implementation of android_media_AudioTrack_native_finalize:
Both android_media_AudioTrack_stop and android_media_AudioTrack_native_finalize call env->GetIntField:
AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
thiz, javaAudioTrackFields.nativeTrackInJavaObj);
The intent is evidently to fetch the native Track object stored on the Java side.
Since there is a Get here, there must be a Set somewhere.
Sure enough, the function android_media_AudioTrack_native_release above does a Set:
env->SetIntField(thiz, javaAudioTrackFields.nativeTrackInJavaObj, 0);
Except that here it merely clears the field to 0.
So where is the real Set? One word: search!
But hold on, first let's see what javaAudioTrackFields actually is:
It turns out to exist to provide access from C++ to the ... side; the "..." here presumably means Java. Will it be extended to other languages some day?
Its member nativeTrackInJavaObj holds the native AudioTrack object stored on the Java side.
Back to the earlier question: search!
It turns out the Set is done by calling env->SetIntField inside the function android_media_AudioTrack_native_setup.
File path: frameworks\base\core\jni\android_media_AudioTrack.cpp
That is the same file as the Java-to-native entry points above, which means android_media_AudioTrack_native_setup should also be called from the Java layer.
Checking the mapping table confirms it: it corresponds to native_setup.
I won't chew through android_media_AudioTrack_native_setup in detail yet; roughly, it does the following:
Checks the parameters and state.
Creates a (native) AudioTrack object.
Calls some initialization and setup functions on that AudioTrack object.
Finally, stores the AudioTrack object into the Java layer via env->SetIntField.
An AudioTrackJniStorage object is handled in a similar way.
To summarize the usage example:
1. First, obtain the minimum required buffer size from the sample rate, sample size, and channel count.
2. Create an AudioTrack from the stream type, the mode (stream or static), the minimum buffer size obtained in step 1 (multiplied by 2 here so that the native-side buffer size is a whole multiple of the frame size), plus the sample rate, sample size, and channel count. This AudioTrack is the Java class; its constructor eventually calls into native code, creates a native AudioTrack object, and stores it into the Java AudioTrack object via env->SetIntField.
3. Call the AudioTrack object's write function. The call goes directly to the Java AudioTrack object; inside write, it should end up reaching the native AudioTrack object. Believe it or not, I do.
4. Call release to stop playback and free the resources.
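Putting the four steps together, here is a self-contained sketch of the whole flow. The concrete parameter values are illustrative stand-ins for the TEST_* constants, and the data written is silence.
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class AudioTrackExample {
    public static void playOneBuffer() {
        final int sampleRate = 22050;                          // stand-in for TEST_SR
        final int channelConf = AudioFormat.CHANNEL_OUT_MONO;  // stand-in for TEST_CONF
        final int format = AudioFormat.ENCODING_PCM_16BIT;     // stand-in for TEST_FORMAT

        // Step 1: minimum buffer size for these parameters.
        int minBuffSize = AudioTrack.getMinBufferSize(sampleRate, channelConf, format);

        // Step 2: construct the track; 2*minBuffSize keeps the native buffer
        // frame-aligned and leaves some headroom.
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                channelConf, format, 2 * minBuffSize, AudioTrack.MODE_STREAM);
        if (track.getState() != AudioTrack.STATE_INITIALIZED) {
            track.release();
            return;
        }

        // Step 3: write one buffer of PCM data (all zeros, i.e. silence).
        byte[] data = new byte[minBuffSize];
        track.play();
        track.write(data, 0, data.length);

        // Step 4: stop playback and free the native resources.
        track.release();
    }
}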