
RTMP Screen-Recording Live Streaming: Capturing Screen Data and Encoding with MediaCodec

Preface

This article covers capturing the screen with MediaProjection and audio with the microphone, encoding both streams, and pushing the result to a streaming-media server via librtmp.

RTMP Live Streaming Workflow

(figure: overall flow of RTMP live streaming)

I. Steps to Implement Screen-Recording Streaming

  1. Capture the data

    Capture the screen for video data and the microphone for audio data. Where possible, we can also capture audio produced inside the app itself.
  2. Convert the data format

    Convert the captured video and audio into the standard formats commonly used for streaming, so that viewers' players can play the stream correctly.
  3. Encode

    Unencoded raw data is enormous; it wastes bandwidth as well as the viewing device's resources, so the audio and video data must be encoded.
  4. Package & push

    This part can reuse the packaging and pushing logic of an ordinary live stream.

Summary: screen-recording live streaming differs from an ordinary live stream only in the capture source. Technically, the real work we have to do is turn the captured screen data into a stably encoded stream.

Video Capture: MediaProjection

Video capture flow

Let's start with MediaProjectionManager. It is a system-level service, like WindowManager or AlarmManager, and you obtain an instance via getSystemService:

MediaProjectionManager mediaProjectionManager = (MediaProjectionManager) activity.getSystemService(Context.MEDIA_PROJECTION_SERVICE);

Once you have the instance, the recording flow is as follows.
First:

Intent screenCaptureIntent = mediaProjectionManager.createScreenCaptureIntent();
activity.startActivityForResult(screenCaptureIntent,100);

The Javadoc comment on createScreenCaptureIntent() reads:

/**
 * Returns an Intent that <b>must</b> be passed to startActivityForResult()
 * in order to start screen capture. The activity will prompt
 * the user whether to allow screen capture. The result of this
 * activity should be passed to getMediaProjection.
 */

Roughly, this means: the method returns an intent that you pass via startActivityForResult in order to start screen capture. The activity prompts the user to allow or deny capture (so that a malicious app cannot silently harvest private information), and you obtain the capture session from the activity's result via getMediaProjection.

We can take a look at the source of createScreenCaptureIntent():

public Intent createScreenCaptureIntent() {
    Intent i = new Intent();
    final ComponentName mediaProjectionPermissionDialogComponent =
            ComponentName.unflattenFromString(mContext.getResources().getString(
                    com.android.internal.R.string
                            .config_mediaProjectionPermissionDialogComponent));
    i.setComponent(mediaProjectionPermissionDialogComponent);
    return i;
}

So this creates an intent that targets the system's screen-capture permission dialog component.

Then, as the comment above says, we deliver this intent through startActivityForResult, receive the result in onActivityResult, and extract the MediaProjection from the returned intent via getMediaProjection:

public void onActivityResult(int requestCode, int resultCode, Intent data) {
    // User granted permission
    if (requestCode == 100 && resultCode == Activity.RESULT_OK) {
        // Obtain the screen-capture session
        mediaProjection = mediaProjectionManager.getMediaProjection(resultCode, data);
        LiveTaskManager.getInstance().execute(this);
    }
}


After obtaining the MediaProjection, call createVirtualDisplay to create a virtual display (VirtualDisplay); the phone screen is then mirrored onto it.

VirtualDisplay createVirtualDisplay(String name, int width, int height, int dpi,
        int flags, Surface surface, VirtualDisplay.Callback callback, Handler handler)

A few of createVirtualDisplay's parameters:

  1. name: the name of the VirtualDisplay instance being created;
  2. width, height: the width and height of the instance; both must be greater than 0;
  3. dpi: the pixel density of the instance; must be greater than 0 (the sample code below simply passes 1);
  4. surface: this one matters most; it is the carrier of the VirtualDisplay's content.
    My understanding: the VirtualDisplay's content is a series of screen snapshots (hence the width, height, and density settings),
    so what MediaProjection actually produces is frame after frame of images, and the Surface (think of it as an Android drawing canvas,
    refreshed at 60 frames per second by default; we won't go deeper here) plays those images back in order, forming a video.

When calling createVirtualDisplay, you must pass in a Surface (the canvas). Whatever needs the image data can read it from this Surface.

// Create a canvas (Surface) from the encoder; images drawn onto it are encoded automatically
surface = mediaCodec.createInputSurface();
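Wiring that Surface into the virtual display then looks like this. It is the same call the VideoCodec class makes below, shown here in isolation for clarity:

VirtualDisplay virtualDisplay = mediaProjection.createVirtualDisplay(
        "screen-codec",                              // name of the virtual display
        360, 640,                                    // width and height, matching the encoder
        1,                                           // dpi (must be > 0; this sample passes 1)
        DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC,  // flags
        surface,                                     // frames rendered here get encoded
        null, null);                                 // no callback, no handler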

Encoding: MediaCodec

(figure: MediaCodec's input and output buffer queues)

How MediaCodec works: once created, a MediaCodec maintains two queues internally, an input queue and an output queue. It automatically and continuously runs the codec: it takes data from the input queue, encodes it, and pushes the result onto the output queue. With the methods MediaCodec exposes, we can drive the whole encode/decode cycle.

queueInputBuffer pushes data into the input queue (InputBuffer), and dequeueOutputBuffer pulls data from the output queue (OutputBuffer); the data pulled from the output queue is the encoded result.

Basic MediaCodec usage flow (a concrete sketch of this loop follows the list):

- createEncoderByType/createDecoderByType
- configure
- start
- while(true) {
    - dequeueInputBuffer
    - queueInputBuffer
    - dequeueOutputBuffer
    - releaseOutputBuffer
  }
- stop
- release
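To make that sequence concrete, here is a minimal, self-contained sketch of a ByteBuffer-based encode loop. It is an illustration under assumptions, not code from this project: it feeds a single chunk that is assumed to fit in one input buffer and then stops, whereas a real encoder keeps feeding (the VideoCodec below uses a Surface input instead, so it skips the input-buffer half entirely).

import android.media.MediaCodec;
import java.nio.ByteBuffer;

public class CodecLoopSketch {
    // Feed one chunk of raw data through an already-configured encoder and
    // collect the encoded output. Illustrative only.
    public static void encodeOnce(MediaCodec codec, byte[] raw) {
        codec.start();
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false;
        while (true) {
            if (!inputDone) {
                int in = codec.dequeueInputBuffer(10_000); // wait up to 10 ms
                if (in >= 0) {
                    ByteBuffer buf = codec.getInputBuffer(in);
                    buf.clear();
                    buf.put(raw); // assumes raw fits in one input buffer
                    codec.queueInputBuffer(in, 0, raw.length,
                            System.nanoTime() / 1000,
                            MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                }
            }
            int out = codec.dequeueOutputBuffer(info, 10_000);
            if (out >= 0) {
                ByteBuffer buf = codec.getOutputBuffer(out);
                byte[] encoded = new byte[info.size];
                buf.get(encoded); // encoded data, ready to package and push
                codec.releaseOutputBuffer(out, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
            }
        }
        codec.stop();
        codec.release();
    }
}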

For a detailed walkthrough of MediaCodec, see 《Android音视频(三) MediaCodec编码》.

VideoCodec.java

public class VideoCodec extends Thread {

    private final ScreenLive screenLive;
    private MediaCodec mediaCodec;
    private boolean isLiving;
    private long timeStamp;
    private long startTime;
    private MediaProjection mediaProjection;
    private VirtualDisplay virtualDisplay;

    public VideoCodec(ScreenLive screenLive) {
        this.screenLive = screenLive;
    }

    public void startLive(MediaProjection mediaProjection) {
        this.mediaProjection = mediaProjection;
        // Configure the encoding parameters
        MediaFormat videoFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 360, 640);
        // Input format: the encoder's data source is a Surface
        videoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        // Bit rate
        videoFormat.setInteger(MediaFormat.KEY_BIT_RATE, 400_000);
        // Frame rate
        videoFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
        // Keyframe interval: 2 seconds
        videoFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);
        try {
            // Create the encoder
            mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            mediaCodec.configure(videoFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            // Create an input Surface from the encoder; whatever is rendered
            // onto it is encoded automatically
            Surface surface = mediaCodec.createInputSurface();
            virtualDisplay = mediaProjection.createVirtualDisplay("screen-codec",
                    360, 640, 1,
                    DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC,
                    surface, null, null);
        } catch (IOException e) {
            e.printStackTrace();
        }
        start();
    }

    @Override
    public void run() {
        super.run();
        isLiving = true;
        mediaCodec.start();
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
        // TODO: MediaCodec has a keyframe quirk; keyframe output has to be triggered manually
        while (isLiving) {
            if (timeStamp != 0) {
                // Every 2000 ms, manually request a keyframe
                if (System.currentTimeMillis() - timeStamp >= 2_000) {
                    Bundle params = new Bundle();
                    // Take effect immediately: make the next frame a keyframe
                    params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
                    mediaCodec.setParameters(params);
                    timeStamp = System.currentTimeMillis();
                }
            } else {
                timeStamp = System.currentTimeMillis();
            }
            // Fetch encoded data from the output queue (timeout: 10 microseconds)
            int index = mediaCodec.dequeueOutputBuffer(bufferInfo, 10);
            if (index >= 0) {
                // Successfully dequeued a buffer of encoded data
                ByteBuffer buffer = mediaCodec.getOutputBuffer(index);
                byte[] outData = new byte[bufferInfo.size];
                buffer.get(outData);
                // The SPS and PPS could also be fetched like this:
                // ByteBuffer sps = mediaCodec.getOutputFormat().getByteBuffer("csd-0");
                // ByteBuffer pps = mediaCodec.getOutputFormat().getByteBuffer("csd-1");
                if (startTime == 0) {
                    // Convert microseconds to milliseconds
                    startTime = bufferInfo.presentationTimeUs / 1000;
                }
                RTMPPackage rtmpPackage = new RTMPPackage();
                rtmpPackage.setBuffer(outData);
                rtmpPackage.setType(RTMPPackage.RTMP_PACKET_TYPE_VIDEO);
                long tms = (bufferInfo.presentationTimeUs / 1000) - startTime;
                rtmpPackage.setTms(tms);
                screenLive.addPackage(rtmpPackage);
                // Release the buffer so slot `index` can accept new data
                mediaCodec.releaseOutputBuffer(index, false);
            }
        }
        isLiving = false;
        startTime = 0;
        mediaCodec.stop();
        mediaCodec.release();
        mediaCodec = null;
        virtualDisplay.release();
        virtualDisplay = null;
        mediaProjection.stop();
        mediaProjection = null;
    }

    public void stopLive() {
        isLiving = false;
        try {
            join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
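VideoCodec above (and AudioCodec below) hand each encoded frame to ScreenLive through addPackage. The post does not include RTMPPackage or ScreenLive themselves, so here is a hypothetical minimal sketch consistent with how they are used. The type constants are inferred from native-lib.cpp further below (type 0 is video, type 1 is treated as the audio header); the queue, thread handling, and package name (com.zxj.screenlive, implied by the JNI function names) are assumptions:

import java.util.concurrent.LinkedBlockingQueue;

class RTMPPackage {
    // Values assumed from native-lib.cpp's sendData switch
    public static final int RTMP_PACKET_TYPE_VIDEO = 0;
    public static final int RTMP_PACKET_TYPE_AUDIO_HEAD = 1;
    public static final int RTMP_PACKET_TYPE_AUDIO_DATA = 2;

    private byte[] buffer;
    private int type;
    private long tms;

    public void setBuffer(byte[] buffer) { this.buffer = buffer; }
    public byte[] getBuffer() { return buffer; }
    public void setType(int type) { this.type = type; }
    public int getType() { return type; }
    public void setTms(long tms) { this.tms = tms; }
    public long getTms() { return tms; }
}

class ScreenLive implements Runnable {
    static {
        System.loadLibrary("native-lib"); // the library built from native-lib.cpp below
    }

    private final LinkedBlockingQueue<RTMPPackage> queue = new LinkedBlockingQueue<>();
    private volatile boolean isLiving;

    public void addPackage(RTMPPackage pkg) {
        queue.offer(pkg);
    }

    // Drain the queue on a worker thread and forward each packet to librtmp
    @Override
    public void run() {
        isLiving = true;
        while (isLiving) {
            try {
                RTMPPackage pkg = queue.take();
                sendData(pkg.getBuffer(), pkg.getBuffer().length, pkg.getType(), pkg.getTms());
            } catch (InterruptedException e) {
                break;
            }
        }
    }

    // Native methods implemented in native-lib.cpp below
    public native boolean connect(String url);
    public native boolean sendData(byte[] data, int len, int type, long tms);
    public native void disConnect();
}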

Audio Capture: AudioRecord

AudioRecord is initialized with an associated audio buffer that stores newly captured audio data.
Its size determines how much not-yet-read audio an AudioRecord object can hold.


Sample rate: how many times per second the recording device samples the sound signal. The higher the sample rate, the more faithful and natural the reconstructed sound.
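At the settings used below (44100 Hz, mono, 16-bit PCM), the raw capture rate works out to 44100 × 1 channel × 2 bytes = 88,200 bytes per second, which the AAC encoder then compresses down to the configured 64 kbps (8,000 bytes per second).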

public class AudioCodec extends Thread {

    private final ScreenLive screenLive;
    private AudioRecord audioRecord;
    private int sampleRate = 44100;
    private MediaCodec mediaCodec;
    private boolean isRecording;
    private int minBufferSize;
    private long startTime;

    public AudioCodec(ScreenLive screenLive) {
        this.screenLive = screenLive;
    }

    public void startLive() {
        // Arg 2: sample rate, arg 3: channel count
        MediaFormat audioFormat = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, 1);
        // AAC profile; roughly, the quality level
        audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        // Bit rate
        audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 64_000);
        try {
            mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
            mediaCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            mediaCodec.start();
        } catch (IOException e) {
            e.printStackTrace();
        }
        /*
         * Minimum buffer size required to create an AudioRecord:
         * sample rate + mono + 16-bit PCM
         */
        minBufferSize = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        /*
         * Create the recorder:
         * microphone + sample rate + mono + 16-bit PCM + buffer size
         */
        audioRecord = new AudioRecord(
                MediaRecorder.AudioSource.MIC,   // capture source: the microphone
                sampleRate,                      // sample rate
                AudioFormat.CHANNEL_IN_MONO,     // CHANNEL_IN_MONO: mono, CHANNEL_IN_STEREO: stereo
                AudioFormat.ENCODING_PCM_16BIT,  // bits per sample
                minBufferSize);                  // minimum buffer size
        start();
    }

    public void stopLive() {
        isRecording = false;
        try {
            join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void run() {
        isRecording = true;
        // Before sending any audio frames, send the audio specific config.
        // The bytes 0x12 0x08 decode as: audioObjectType = 2 (AAC LC),
        // samplingFrequencyIndex = 4 (44100 Hz), channelConfiguration = 1 (mono)
        RTMPPackage rtmpPackage = new RTMPPackage();
        byte[] audioDecoderSpecificInfo = {0x12, 0x08};
        rtmpPackage.setBuffer(audioDecoderSpecificInfo);
        rtmpPackage.setType(RTMPPackage.RTMP_PACKET_TYPE_AUDIO_HEAD);
        rtmpPackage.setTms(0);
        screenLive.addPackage(rtmpPackage);
        audioRecord.startRecording();
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
        byte[] buffer = new byte[minBufferSize];
        while (isRecording) {
            int len = audioRecord.read(buffer, 0, buffer.length);
            if (len <= 0) {
                continue;
            }
            // Get an available input buffer immediately (index into the input queue)
            int index = mediaCodec.dequeueInputBuffer(0);
            if (index >= 0) {
                ByteBuffer byteBuffer = mediaCodec.getInputBuffer(index);
                byteBuffer.clear();
                // Copy the PCM data into the buffer
                byteBuffer.put(buffer, 0, len);
                // Queue the filled buffer back, telling the codec it can encode it now
                mediaCodec.queueInputBuffer(index, 0, len,
                        System.nanoTime() / 1000, 0);
            }
            // Fetch the encoded data
            index = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
            // Drain the encoder completely before feeding it again
            while (index >= 0 && isRecording) {
                ByteBuffer outputBuffer = mediaCodec.getOutputBuffer(index);
                byte[] outData = new byte[bufferInfo.size];
                outputBuffer.get(outData);
                if (startTime == 0) {
                    startTime = bufferInfo.presentationTimeUs / 1000;
                }
                // Hand the packet off for streaming
                rtmpPackage = new RTMPPackage();
                rtmpPackage.setBuffer(outData);
                rtmpPackage.setType(RTMPPackage.RTMP_PACKET_TYPE_AUDIO_DATA);
                long tms = (bufferInfo.presentationTimeUs / 1000) - startTime;
                rtmpPackage.setTms(tms);
                screenLive.addPackage(rtmpPackage);
                // Release the output buffer so it can hold new data
                mediaCodec.releaseOutputBuffer(index, false);
                index = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
            }
        }
        audioRecord.stop();
        audioRecord.release();
        audioRecord = null;
        mediaCodec.stop();
        mediaCodec.release();
        mediaCodec = null;
        startTime = 0;
        isRecording = false;
    }
}
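For orientation, this is roughly how the pieces fit together once the user grants capture permission. A hedged sketch only: the RTMP URL is a placeholder and the wiring is an assumption based on how the classes above are used; connect() does network I/O, so everything runs on a background thread.

// Inside the Activity, after onActivityResult has delivered the MediaProjection:
ScreenLive screenLive = new ScreenLive();
new Thread(() -> {
    if (screenLive.connect("rtmp://your.server/live/stream")) { // placeholder URL
        new Thread(screenLive).start();                         // drain the packet queue
        new VideoCodec(screenLive).startLive(mediaProjection);  // encode the screen
        new AudioCodec(screenLive).startLive();                 // encode the microphone
    }
}).start();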

RTMP Audio Packet Data

The audio and video data carried in RTMP packets is laid out the same way FLV tags carry audio and video, so we only need to package the audio and video following the FLV format to ensure viewers can play the stream.

For the specifics, see the "Audio Data" section of 《RTMP、x264与交叉编译》.

RTMP Video Data

For the specifics, see the "Video Data" section of 《RTMP、x264与交叉编译》.

packt.h

#ifndef SCREENLIVE_PACKT_H
#define SCREENLIVE_PACKT_H

#include <cstdlib>
#include <cstring>
#include "librtmp/rtmp.h"
#include <android/log.h>

#define LOGI(...) __android_log_print(ANDROID_LOG_INFO,"RTMP",__VA_ARGS__)

typedef struct {
    int16_t sps_len;
    int16_t pps_len;
    int8_t *sps;
    int8_t *pps;
    RTMP *rtmp;
} Live;

RTMPPacket *createAudioPacket(int8_t *buf, int len, int type, long tms, Live *live) {
    // +2: two tag bytes are prepended to the audio data to match the FLV/RTMP format
    int body_size = len + 2;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    // 0xAF: AAC, 44 kHz, 16-bit, stereo flags
    packet->m_body[0] = 0xAF;
    // 0x01: AAC raw frame data
    packet->m_body[1] = 0x01;
    if (type == 1) {
        // 0x00: AAC sequence header (decoder configuration)
        packet->m_body[1] = 0x00;
    }
    memcpy(&packet->m_body[2], buf, len);
    packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x05;
    packet->m_nTimeStamp = tms;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = live->rtmp->m_stream_id;
    return packet;
}

RTMPPacket *createVideoPackage(Live *live) {
    int body_size = 13 + live->sps_len + 3 + live->pps_len;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    int i = 0;
    // Frame type + codec ID: 0x17, same first byte as an IDR frame
    packet->m_body[i++] = 0x17;
    // AVC packet type: 0x00 = AVC sequence header
    packet->m_body[i++] = 0x00;
    // CompositionTime
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;
    // AVCDecoderConfigurationRecord
    packet->m_body[i++] = 0x01;          // configurationVersion, always 1
    packet->m_body[i++] = live->sps[1];  // profile: baseline, main, high, ...
    packet->m_body[i++] = live->sps[2];  // profile_compatibility
    packet->m_body[i++] = live->sps[3];  // profile level
    packet->m_body[i++] = 0xFF;          // reserved(111111) + lengthSizeMinusOne(2 bits, NAL length size - 1); always 0xFF
    // sps
    packet->m_body[i++] = 0xE1;          // reserved(111) + numOfSequenceParameterSets(5 bits); always 0xE1
    // sps length, 2 bytes
    packet->m_body[i++] = (live->sps_len >> 8) & 0xff;  // high byte
    packet->m_body[i++] = live->sps_len & 0xff;         // low byte
    memcpy(&packet->m_body[i], live->sps, live->sps_len);
    i += live->sps_len;
    /* pps */
    packet->m_body[i++] = 0x01;  // number of PPS
    // pps length
    packet->m_body[i++] = (live->pps_len >> 8) & 0xff;
    packet->m_body[i++] = live->pps_len & 0xff;
    memcpy(&packet->m_body[i], live->pps, live->pps_len);
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = 0;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = live->rtmp->m_stream_id;
    return packet;
}

RTMPPacket *createVideoPackage(int8_t *buf, int len, long tms, Live *live) {
    // Skip the 4-byte start code (0x00 0x00 0x00 0x01)
    buf += 4;
    len -= 4;
    int body_size = len + 9;
    RTMPPacket *packet = (RTMPPacket *) malloc(sizeof(RTMPPacket));
    RTMPPacket_Alloc(packet, body_size);
    packet->m_body[0] = 0x27;  // inter frame
    if (buf[0] == 0x65) {      // keyframe (IDR)
        packet->m_body[0] = 0x17;
        LOGI("sending keyframe data");
    }
    packet->m_body[1] = 0x01;  // AVC NALU
    packet->m_body[2] = 0x00;  // CompositionTime
    packet->m_body[3] = 0x00;
    packet->m_body[4] = 0x00;
    // NALU length, 4 bytes, big-endian
    packet->m_body[5] = (len >> 24) & 0xff;
    packet->m_body[6] = (len >> 16) & 0xff;
    packet->m_body[7] = (len >> 8) & 0xff;
    packet->m_body[8] = (len) & 0xff;
    // Payload
    memcpy(&packet->m_body[9], buf, len);
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = body_size;
    packet->m_nChannel = 0x04;
    packet->m_nTimeStamp = tms;
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    packet->m_nInfoField2 = live->rtmp->m_stream_id;
    return packet;
}

void prepareVideo(int8_t *buf, int len, Live *live) {
    for (int i = 0; i < len; i++) {
        // Look for the start code 0x00 0x00 0x00 0x01
        if (i + 4 < len) {
            if (buf[i] == 0x00 && buf[i + 1] == 0x00
                && buf[i + 2] == 0x00
                && buf[i + 3] == 0x01) {
                // Layout: 00 00 00 01 <SPS (type 7, 0x67)> 00 00 00 01 <PPS (type 8, 0x68)>
                // Split the SPS and PPS apart; 0x68 marks the start of the PPS
                if (buf[i + 4] == 0x68) {
                    // Strip the start codes
                    live->sps_len = i - 4;
                    live->sps = static_cast<int8_t *>(malloc(live->sps_len));
                    memcpy(live->sps, buf + 4, live->sps_len);
                    live->pps_len = len - (4 + live->sps_len) - 4;
                    live->pps = static_cast<int8_t *>(malloc(live->pps_len));
                    memcpy(live->pps, buf + 4 + live->sps_len + 4, live->pps_len);
                    LOGI("sps:%d pps:%d", live->sps_len, live->pps_len);
                    break;
                }
            }
        }
    }
}

#endif //SCREENLIVE_PACKT_H
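A note on the size arithmetic in createVideoPackage(Live*): the 13 is the 5 FLV video-tag header bytes (frame type/codec ID, AVC packet type, and the 3-byte CompositionTime) plus the 8 fixed bytes of the AVCDecoderConfigurationRecord (version, profile, compatibility, level, the 0xFF length byte, the 0xE1 SPS-count byte, and the 2-byte SPS length); the extra 3 is the 1-byte PPS count plus the 2-byte PPS length.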

native-lib.cpp

#include <jni.h>
#include <string>
#include "packt.h"
#include "librtmp/rtmp.h"

Live *live = nullptr;

extern "C"
JNIEXPORT jboolean JNICALL
Java_com_zxj_screenlive_ScreenLive_connect(JNIEnv *env, jobject thiz, jstring url_) {
    const char *url = env->GetStringUTFChars(url_, 0);
    int ret;
    do {
        live = (Live *) malloc(sizeof(Live));
        memset(live, 0, sizeof(Live));
        live->rtmp = RTMP_Alloc();
        RTMP_Init(live->rtmp);
        live->rtmp->Link.timeout = 10;
        LOGI("connect %s", url);
        if (!(ret = RTMP_SetupURL(live->rtmp, (char *) url))) break;
        RTMP_EnableWrite(live->rtmp);
        LOGI("RTMP_Connect");
        if (!(ret = RTMP_Connect(live->rtmp, 0))) break;
        LOGI("RTMP_ConnectStream ");
        if (!(ret = RTMP_ConnectStream(live->rtmp, 0))) break;
        LOGI("connect success");
    } while (0);
    if (!ret && live) {
        // Connection failed: release the RTMP handle as well to avoid a leak
        if (live->rtmp) {
            RTMP_Close(live->rtmp);
            RTMP_Free(live->rtmp);
        }
        free(live);
        live = nullptr;
    }
    env->ReleaseStringUTFChars(url_, url);
    return ret;
}

int sendPacket(RTMPPacket *packet) {
    int ret = RTMP_SendPacket(live->rtmp, packet, 1);
    RTMPPacket_Free(packet);
    free(packet);
    return ret;
}

int sendVideo(int8_t *buf, int len, long tms) {
    // Caching the SPS/PPS sends no packet, so initialize ret to success
    // (the original code returned it uninitialized on that path)
    int ret = 1;
    do {
        if (buf[4] == 0x67) {  // SPS (followed by the PPS)
            if (live && (!live->pps || !live->sps)) {
                prepareVideo(buf, len, live);
            }
        } else {
            if (buf[4] == 0x65) {  // keyframe: send the sequence header first
                RTMPPacket *packet = createVideoPackage(live);
                if (!(ret = sendPacket(packet))) {
                    break;
                }
            }
            // Package the encoded data in FLV/RTMP format, then send it
            RTMPPacket *packet = createVideoPackage(buf, len, tms, live);
            ret = sendPacket(packet);
        }
    } while (0);
    return ret;
}

extern "C"
JNIEXPORT void JNICALL
Java_com_zxj_screenlive_ScreenLive_disConnect(JNIEnv *env, jobject thiz) {
    if (live) {
        if (live->sps) {
            free(live->sps);
        }
        if (live->pps) {
            free(live->pps);
        }
        if (live->rtmp) {
            RTMP_Close(live->rtmp);
            RTMP_Free(live->rtmp);
        }
        free(live);
        live = nullptr;
    }
}

int sendAudio(int8_t *buf, int len, int type, long tms) {
    int ret;
    RTMPPacket *packet = createAudioPacket(buf, len, type, tms, live);
    ret = sendPacket(packet);
    return ret;
}

extern "C"
JNIEXPORT jboolean JNICALL
Java_com_zxj_screenlive_ScreenLive_sendData(JNIEnv *env, jobject instance, jbyteArray data_, jint len,
                                            jint type, jlong tms) {
    jbyte *data = env->GetByteArrayElements(data_, 0);
    int ret;
    switch (type) {
        case 0:  // video
            ret = sendVideo(data, len, tms);
            LOGI("send Video......");
            break;
        default: // audio
            ret = sendAudio(data, len, type, tms);
            LOGI("send Audio......");
            break;
    }
    env->ReleaseByteArrayElements(data_, data, 0);
    return ret;
}