Managing Audio Focus
Scenario: your app moves to the background while another app capable of playback comes to the foreground. At that point you will probably want to pause your own playback and unregister your Media Button listener, handing control over to the foreground app.
This is what audio focus is for.
Before starting playback, request audio focus using AudioManager's requestAudioFocus method.
When you request audio focus, you specify the stream type you care about (for example STREAM_MUSIC) and how long you expect to hold the focus.
From a programming point of view, your app should react both when it gains focus and when another app takes it away.
Example: requesting audio focus:

AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);

// Request permanent focus on the music stream.
int result = am.requestAudioFocus(focusChangeListener,
                                  AudioManager.STREAM_MUSIC,
                                  AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    // Focus granted; start or resume playback here.
}
Responding to the loss of focus with a listener:

private OnAudioFocusChangeListener focusChangeListener =
    new OnAudioFocusChangeListener() {
        public void onAudioFocusChange(int focusChange) {
            AudioManager am =
                (AudioManager)getSystemService(Context.AUDIO_SERVICE);
            switch (focusChange) {
                case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK):
                    // Another app needs focus briefly; duck to a lower volume.
                    mediaPlayer.setVolume(0.2f, 0.2f);
                    break;
                case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT):
                    // Focus lost temporarily; pause playback here.
                    break;
                case (AudioManager.AUDIOFOCUS_LOSS):
                    // Focus lost permanently; stop playback and give up the
                    // Media Button receiver.
                    ComponentName component =
                        new ComponentName(AudioPlayerActivity.this,
                                          MediaControlReceiver.class);
                    am.unregisterMediaButtonEventReceiver(component);
                    break;
                case (AudioManager.AUDIOFOCUS_GAIN):
                    // Focus (re)gained; restore the volume and resume playback.
                    mediaPlayer.setVolume(1f, 1f);
                    break;
                default:
                    break;
            }
        }
    };
Abandoning audio focus:

AudioManager am =
    (AudioManager)getSystemService(Context.AUDIO_SERVICE);
am.abandonAudioFocus(focusChangeListener);
When the audio output changes, typically because the headphones have been unplugged and sound is suddenly coming out of the speaker, you may want to lower the volume or pause playback. How do you listen for this kind of output change?
Answer:
private class NoisyAudioStreamReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (AudioManager.ACTION_AUDIO_BECOMING_NOISY.equals(
                intent.getAction())) {
            // Pause playback or reduce the volume here.
        }
    }
}
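The receiver only handles the broadcast; it also needs to be registered for ACTION_AUDIO_BECOMING_NOISY. A minimal sketch, assuming it is registered and unregistered from an Activity (the variable names are illustrative):

// Register the receiver, for example in onResume().
IntentFilter noisyFilter =
    new IntentFilter(AudioManager.ACTION_AUDIO_BECOMING_NOISY);
NoisyAudioStreamReceiver noisyReceiver = new NoisyAudioStreamReceiver();
registerReceiver(noisyReceiver, noisyFilter);

// Unregister it again when playback stops, for example in onPause().
unregisterReceiver(noisyReceiver);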
Recording Audio
Use the AudioRecord class to record audio. Create an AudioRecord instance, specifying the audio source, sample rate (frequency), channel configuration, audio encoding, and buffer size.
int bufferSize = AudioRecord.getMinBufferSize(frequency,
                                              channelConfiguration,
                                              audioEncoding);

AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                          frequency, channelConfiguration,
                                          audioEncoding, bufferSize);
The frequency, audio encoding, and channel configuration affect the size and quality of the recording.
For privacy reasons, Android requires the RECORD_AUDIO permission:
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
Once the AudioRecord object is initialized, start asynchronous recording with the startRecording method, then use the read method to copy the raw audio data into your buffer:
audioRecord.startRecording();

// [ ... populate the buffer ... ]
int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
Once you have recorded the raw audio data, how do you play it back?
Answer: use AudioTrack to play this kind of audio.
A complete recording example:

int frequency = 11025;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

File file =
    new File(Environment.getExternalStorageDirectory(), "raw.pcm");

// Create the output file.
try {
    file.createNewFile();
} catch (IOException e) {
    Log.d(TAG, "IO Exception", e);
}

try {
    OutputStream os = new FileOutputStream(file);
    BufferedOutputStream bos = new BufferedOutputStream(os);
    DataOutputStream dos = new DataOutputStream(bos);

    int bufferSize = AudioRecord.getMinBufferSize(frequency,
                                                  channelConfiguration,
                                                  audioEncoding);
    short[] buffer = new short[bufferSize];

    // Create the AudioRecord object and start recording.
    AudioRecord audioRecord =
        new AudioRecord(MediaRecorder.AudioSource.MIC,
                        frequency, channelConfiguration,
                        audioEncoding, bufferSize);
    audioRecord.startRecording();

    // isRecording is assumed to be a flag toggled elsewhere to stop recording.
    while (isRecording) {
        int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
        for (int i = 0; i < bufferReadResult; i++)
            dos.writeShort(buffer[i]);
    }

    audioRecord.stop();
    dos.close();
} catch (Throwable t) {
    Log.d(TAG, "An error occurred during recording", t);
}
Playing Audio with AudioTrack
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                       frequency, channelConfiguration,
                                       audioEncoding, audioLength,
                                       AudioTrack.MODE_STREAM);
Note that these parameters must match those used for the recording.
audioTrack.play();
audioTrack.write(audio, 0, audioLength);
The write method adds the raw audio data to the playback buffer.
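For a fuller picture, here is a sketch that reads the raw.pcm file recorded earlier back into memory and plays it through an AudioTrack. It assumes the same recording parameters as above; the stream handling and variable names are only for illustration:

int frequency = 11025;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

File file = new File(Environment.getExternalStorageDirectory(), "raw.pcm");

// Each 16-bit PCM sample occupies two bytes.
int audioLength = (int)(file.length() / 2);
short[] audio = new short[audioLength];

try {
    DataInputStream dis = new DataInputStream(
        new BufferedInputStream(new FileInputStream(file)));
    int i = 0;
    while (dis.available() > 0 && i < audioLength) {
        audio[i++] = dis.readShort();
    }
    dis.close();

    AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                           frequency, channelConfiguration,
                                           audioEncoding, audioLength * 2,
                                           AudioTrack.MODE_STREAM);
    // Start playback, then stream the samples into the track.
    audioTrack.play();
    audioTrack.write(audio, 0, audioLength);
} catch (IOException e) {
    Log.d(TAG, "IO Exception", e);
}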
Creating a Sound Pool
A Sound Pool is generally used to play short sound effects and supports playing several streams at the same time.
Here is an example:
int maxStreams = 10;  // maximum number of simultaneous streams (value is arbitrary)
SoundPool sp = new SoundPool(maxStreams, AudioManager.STREAM_MUSIC, 0);

int track1 = sp.load(this, R.raw.track1, 0);
int track2 = sp.load(this, R.raw.track2, 0);
int track3 = sp.load(this, R.raw.track3, 0);

track1Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Loop track 1 indefinitely at full volume and normal rate.
        sp.play(track1, 1, 1, 0, -1, 1);
    }
});

track2Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Play track 2 once.
        sp.play(track2, 1, 1, 0, 0, 1);
    }
});

track3Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Play track 3 once at half speed.
        sp.play(track3, 1, 1, 0, 0, 0.5f);
    }
});

stopButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Stop the playing streams here.
    }
});

chipmunkButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Double the playback rate.
        sp.setRate(track1, 2f);
    }
});
Android 2.2 (API Level 8) introduced two very convenient methods, autoPause and autoResume, which pause and resume all actively playing audio streams, respectively.
When you no longer need the sounds, call soundPool.release() to free the resources.
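As a small illustration, one way to wire these calls into the Activity lifecycle might look like the sketch below (the lifecycle placement is a suggestion, not something the API requires, and sp is assumed to be a field):

@Override
protected void onPause() {
    super.onPause();
    // Pause every actively playing stream in the SoundPool.
    sp.autoPause();
}

@Override
protected void onResume() {
    super.onResume();
    // Resume the streams paused by autoPause().
    sp.autoResume();
}

@Override
protected void onDestroy() {
    super.onDestroy();
    // Release the SoundPool resources once they are no longer needed.
    sp.release();
}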
Taking Pictures with the Camera
Using an Intent to take a picture:
startActivityForResult(
    new Intent(MediaStore.ACTION_IMAGE_CAPTURE), TAKE_PICTURE);
In the corresponding onActivityResult, the captured photo is returned as a thumbnail by default.
If you want the full-size image, you must first specify a destination file for it, as the following example shows:
// Create an output file and pass its URI along with the capture Intent.
File file = new File(Environment.getExternalStorageDirectory(),
                     "test.jpg");  // the file name is arbitrary
Uri outputFileUri = Uri.fromFile(file);

Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);

startActivityForResult(intent, TAKE_PICTURE);
Note: once you launch the capture this way, no thumbnail is returned, so the Intent received in onActivityResult will be null.
The following onActivityResult handles both cases:
@Override
protected void onActivityResult(int requestCode,
                                int resultCode, Intent data) {
    if (requestCode == TAKE_PICTURE) {
        // Check if the result includes a thumbnail.
        if (data != null && data.hasExtra("data")) {
            Bitmap thumbnail = data.getParcelableExtra("data");
            imageView.setImageBitmap(thumbnail);
        } else {
            // No thumbnail: decode the full-size image, scaled down
            // to fit the ImageView.
            int width = imageView.getWidth();
            int height = imageView.getHeight();

            BitmapFactory.Options factoryOptions = new
                BitmapFactory.Options();

            // First pass: read only the image dimensions.
            factoryOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeFile(outputFileUri.getPath(),
                                     factoryOptions);

            int imageWidth = factoryOptions.outWidth;
            int imageHeight = factoryOptions.outHeight;

            // Determine how much to scale the image down.
            int scaleFactor = Math.min(imageWidth/width,
                                       imageHeight/height);

            // Second pass: decode the image file into a Bitmap sized
            // to fill the View.
            factoryOptions.inJustDecodeBounds = false;
            factoryOptions.inSampleSize = scaleFactor;
            factoryOptions.inPurgeable = true;

            Bitmap bitmap =
                BitmapFactory.decodeFile(outputFileUri.getPath(),
                                         factoryOptions);

            imageView.setImageBitmap(bitmap);
        }
    }
}
Controlling the Camera Directly
First of all, this permission is required:
<uses-permission android:name="android.permission.CAMERA"/>
Get hold of a Camera with:
Camera camera = Camera.open();
When you have finished with it, remember to release the resource:
camera.release();
Camera Properties
Camera.Parameters parameters = camera.getParameters();
Through this Parameters object you can query many of the camera's properties; some of them depend on the platform version.
You can obtain the focal length and the horizontal and vertical view angles via getFocalLength and get[Horizontal/Vertical]ViewAngle, respectively.
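For instance, reading those values from the Parameters object obtained above (the local variable names are just for illustration):

float focalLength = parameters.getFocalLength();             // in millimeters
float horizontalAngle = parameters.getHorizontalViewAngle(); // in degrees
float verticalAngle = parameters.getVerticalViewAngle();     // in degrees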
Android 2.3 (API Level 9) introduced the getFocusDistances method, which you can use to estimate the distance between the lens and the subject; it fills a float array with the near, far, and optimal focus distances:
float[] focusDistances = new float[3];

parameters.getFocusDistances(focusDistances);

float near =
    focusDistances[Camera.Parameters.FOCUS_DISTANCE_NEAR_INDEX];
float far =
    focusDistances[Camera.Parameters.FOCUS_DISTANCE_FAR_INDEX];
float optimal =
    focusDistances[Camera.Parameters.FOCUS_DISTANCE_OPTIMAL_INDEX];
Camera Settings and Image Parameters
Modify the Parameters object using the corresponding set* methods, then apply the changes with:
camera.setParameters(parameters);
The individual parameters are not covered in detail here.
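As a small sketch of the overall pattern (the values below are placeholders; check the corresponding getSupported* lists before using them on a real device):

Camera.Parameters parameters = camera.getParameters();

// Example changes; verify support with getSupportedPictureFormats(),
// getSupportedPreviewSizes(), and so on.
parameters.setPictureFormat(ImageFormat.JPEG);
parameters.setPreviewSize(640, 480);

camera.setParameters(parameters);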
Using the Camera Preview
Once again, SurfaceView comes in handy.
Here is some skeleton code:
public class CameraActivity extends Activity implements
    SurfaceHolder.Callback {

    private static final String TAG = "CameraActivity";

    private Camera camera;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        SurfaceView surface = (SurfaceView)findViewById(R.id.surfaceView);
        SurfaceHolder holder = surface.getHolder();
        holder.addCallback(this);
        holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        holder.setFixedSize(400, 300);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        try {
            camera.setPreviewDisplay(holder);
            camera.startPreview();
        } catch (IOException e) {
            Log.d(TAG, "IO Exception", e);
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        // Nothing to do; the camera is released in onPause().
    }

    public void surfaceChanged(SurfaceHolder holder, int format,
                               int width, int height) {
    }

    @Override
    protected void onPause() {
        super.onPause();
        // Stop the preview and release the camera while paused.
        camera.stopPreview();
        camera.release();
    }

    @Override
    protected void onResume() {
        super.onResume();
        camera = Camera.open();
    }
}
To process preview frames, call the camera's setPreviewCallback method, passing in a PreviewCallback implementation and overriding its onPreviewFrame method.
camera.setPreviewCallback(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        int quality = 60;  // JPEG compression quality (value is arbitrary)

        // Preview frames arrive in the NV21 (YUV) format.
        Size previewSize = camera.getParameters().getPreviewSize();
        YuvImage image = new YuvImage(data, ImageFormat.NV21,
                                      previewSize.width,
                                      previewSize.height, null);
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

        image.compressToJpeg(
            new Rect(0, 0, previewSize.width, previewSize.height),
            quality, outputStream);

        // Do something with the JPEG data in outputStream here.
    }
});
Android 4.0 added face-detection APIs, which are not covered here.
Taking a Picture
With everything configured, how do you actually take a picture?
Answer: call the camera object's takePicture method, passing in a ShutterCallback and two PictureCallback implementations (one for the RAW image data and one for the JPEG-encoded image).
Example: skeleton code that takes a picture and saves the JPEG image to the SD card:
private void takePicture() {
    camera.takePicture(shutterCallback, rawCallback, jpegCallback);
}

ShutterCallback shutterCallback = new ShutterCallback() {
    public void onShutter() {
        // Give some feedback, e.g. play a shutter sound.
    }
};

PictureCallback rawCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // Handle the RAW image data here if required.
    }
};

PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // Save the JPEG image to the SD card.
        FileOutputStream outStream = null;
        try {
            String path = Environment.getExternalStorageDirectory() +
                          "/test.jpg";  // the file name is arbitrary
            outStream = new FileOutputStream(path);
            outStream.write(data);
            outStream.close();
        } catch (FileNotFoundException e) {
            Log.e(TAG, "File Not Found", e);
        } catch (IOException e) {
            Log.e(TAG, "IO Exception", e);
        }
    }
};