Android 11: recording the screen while capturing both the microphone and system playback audio
For background on implementing screen recording, see this article:
[[【Android】录屏功能实现——MediaProjection_android 录屏_小叮当不懒的博客-CSDN博客]]
I recently picked up a requirement: build a screen-recording APK. A senior colleague pointed out that the system exposes an API for this, so I assumed it would be straightforward: just call the API. What follows is a record of the pitfalls I hit.
My own device is a Redmi phone running Android 11 with MIUI 12.5. Its screen recorder offers only three audio modes:
- No audio
- Microphone
- Internal (system) audio
At first I was convinced that recording the microphone and system playback at the same time simply could not be done, and I was ready to push back on the requirement. Then the product manager told me her Huawei phone could do it. Out of options, I asked a senior colleague for help; he sent me the screen-recording code from Android's own SystemUI and said stock Android already implements this. That code is the basis of this article.
Overall approach
To record the screen while capturing both the microphone and system playback, stock SystemUI does the following:
- MediaRecorder records the screen (video)
- AudioRecord records the audio
- The two files are muxed into the final video at the end
A transparent Activity requests the permission; once it is granted, the callback starts our Service, which performs the actual recording.
Transparent Activity theme
<style name="AppTheme.Transparent" parent="AppTheme.NoActionBar">
<item name="android:windowIsTranslucent">true</item>
<item name="android:windowBackground">@android:color/transparent</item>
<item name="android:windowContentOverlay">@null</item>
<item name="android:backgroundDimEnabled">false</item>
</style>
//R.layout.activity_main
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="0dp"
android:layout_height="0dp"
android:clipChildren="false"
android:splitMotionEvents="false"
tools:context=".MainActivity">
</androidx.constraintlayout.widget.ConstraintLayout>
- Now for the pitfalls I hit
On Android Q (Android 10) and above, MediaProjection must run inside a foreground service.
You need to declare the permission below, start the service with startForegroundService(service),
and don't forget to call startForeground(id, notification) inside the service:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<service
android:name=".ScreenRecorder"
android:enabled="true"
android:foregroundServiceType="mediaProjection"/>
The recording parameters depend on whether the microphone and system audio are wanted:
/**
 * Audio sources
 */
public enum ScreenRecordingAudioSource {
NONE, //no system audio, no microphone
INTERNAL, //system audio only, no microphone
MIC, //microphone only, no system audio
MIC_AND_INTERNAL //system audio and microphone
}
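These four modes are just the combinations of two flags (microphone on/off, internal capture on/off). As a quick sanity check outside Android, the mapping can be sketched in plain Java; the `fromFlags` helper below is my own addition, not part of the original code:

```java
// Mirrors the ScreenRecordingAudioSource enum from the article.
enum ScreenRecordingAudioSource {
    NONE,             // no system audio, no microphone
    INTERNAL,         // system audio only
    MIC,              // microphone only
    MIC_AND_INTERNAL; // both

    // Hypothetical helper: derive the mode from two user settings.
    static ScreenRecordingAudioSource fromFlags(boolean mic, boolean internal) {
        if (mic && internal) return MIC_AND_INTERNAL;
        if (mic) return MIC;
        if (internal) return INTERNAL;
        return NONE;
    }
}
```

Deriving the mode once, up front, keeps the later branching in the recording helper from having to re-inspect two booleans everywhere.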
Screen-recording helper class ScreenRecordHelper
MediaRecorder.setAudioSource(MediaRecorder.AudioSource) sets the audio source, but none of the MediaRecorder.AudioSource options records the microphone and system playback at the same time. Note:
MediaRecorder.AudioSource.REMOTE_SUBMIX can capture internal audio, but it has two serious drawbacks (it does work, but requires source-level changes):
(1) It requires a system permission.
(2) It steals the speaker output, so nothing can be heard locally while recording. That makes this option unusable. Following SystemUI, we instead rely on a separate audio-recording helper, so playback stays audible and the microphone can be recorded as well.
The startRecord(source: ScreenRecordingAudioSource) method decides, based on the ScreenRecordingAudioSource, whether to start the audio helper, and configures MediaRecorder accordingly via MediaRecorder.setAudioSource(MediaRecorder.AudioSource).
Using MediaProjectionManager.getMediaProjection(resultCode, clonedIntent), the first recording worked fine, but a second recording failed until the app was restarted. Searching online turned up the following:
1. The permission Intent cannot be reused; the simple fix is to keep the media projection open after use instead of closing it.
2. Therefore, do not call MediaProjection.stop() when a recording stops; call it only when the Service is destroyed (onDestroy).
3. Even after the steps above I still could not record a second time; after some trial and error, signing the APK with the platform key and declaring it a system app made repeat recording work.
class ScreenRecordHelper constructor(
private var context: Context,
private val listener: OnVideoRecordListener?,
private val data: Intent?
) {
private val settings: Settings by lazy { Settings.getInstance(context) }
private var mediaProjectionManager: MediaProjectionManager? = null
private var mediaRecorder: MediaRecorder? = null
private var mediaProjection: MediaProjection? = null
private var virtualDisplay: VirtualDisplay? = null
private val displayMetrics: DisplayMetrics = context.resources.displayMetrics
private var saveFile: File? = null
private var fileName: String? = null
private var audioFile: File? = null
private var mAudio: ScreenInternalAudioRecorder? = null
private var source: ScreenRecordingAudioSource? = null
init {
Log.d(TAG, "init: ScreenRecordHelper")
mediaProjectionManager =
context.getSystemService(Context.MEDIA_PROJECTION_SERVICE) as? MediaProjectionManager
mediaProjection = mediaProjectionManager?.getMediaProjection(RESULT_OK, data!!)
mAudio = ScreenInternalAudioRecorder(mediaProjection, true)
}
fun setUpAudioRecorder(mic: Boolean) {
mAudio = ScreenInternalAudioRecorder(mediaProjection, mic)
}
fun startRecord(source: ScreenRecordingAudioSource) {
this.source = source
try {
if (mediaProjectionManager == null) {
Log.d(TAG, "mediaProjectionManager == null, this device does not support screen recording")
showToast(R.string.device_not_support_screen_record)
return
}
if (source == ScreenRecordingAudioSource.NONE) {
Log.d(TAG, "startRecord: ScreenRecordingAudioSource.NONE")
if (initRecorder(false)) {
mediaRecorder?.start()
listener?.onStartRecord()
} else {
showToast(R.string.device_not_support_screen_record)
}
} else if (source == ScreenRecordingAudioSource.MIC) {
Log.d(TAG, "startRecord: ScreenRecordingAudioSource.MIC")
if (initRecorder(true)) {
mediaRecorder?.start()
listener?.onStartRecord()
} else {
showToast(R.string.device_not_support_screen_record)
}
} else if (source == ScreenRecordingAudioSource.MIC_AND_INTERNAL) {
Log.d(TAG, "startRecord: ScreenRecordingAudioSource.MIC_AND_INTERNAL")
audioFile = File.createTempFile("temp", ".aac", context.cacheDir)
mAudio!!.setupSimple(audioFile?.absolutePath, true)
if (initRecorder(false)) {
mediaRecorder?.start()
mAudio?.start()
listener?.onStartRecord()
} else {
showToast(R.string.device_not_support_screen_record)
}
} else if (source == ScreenRecordingAudioSource.INTERNAL) {
audioFile = File.createTempFile("temp", ".aac", context.cacheDir)
mAudio!!.setupSimple(audioFile?.absolutePath, false)
if (initRecorder(false)) {
mediaRecorder?.start()
mAudio?.start()
listener?.onStartRecord()
} else {
showToast(R.string.device_not_support_screen_record)
}
}
} catch (e: Exception) {
Log.d(TAG, "startRecord:error $e")
}
}
private fun showToast(resId: Int) {
val inflater: LayoutInflater = LayoutInflater.from(context)
val layout: View = inflater.inflate(R.layout.optoma_toast, null as ViewGroup?)
val textView: TextView = layout.findViewById(R.id.toast_text)
textView.setText(resId)
with(Toast.makeText(context, context.getString(resId), Toast.LENGTH_SHORT)) {
setGravity(Gravity.BOTTOM or Gravity.END, 0, 0)
view = layout
setMargin(0f, 0f)
show()
}
}
/**
 * Stop recording and release the recorder.
 */
fun stopRecord() {
try {
mediaRecorder?.apply {
setOnErrorListener(null)
setOnInfoListener(null)
setPreviewDisplay(null)
stop()
}
} catch (e: Exception) {
Log.e(TAG, "stopRecorder() error!${e.message}")
} finally {
mediaRecorder?.reset()
virtualDisplay?.release()
listener?.onEndRecord()
if (source == ScreenRecordingAudioSource.MIC_AND_INTERNAL || source == ScreenRecordingAudioSource.INTERNAL) {
mAudio?.end()
}
}
}
private fun getFormatTime(time: Long): String? {
val format = SimpleDateFormat("yyyyMMddHHmmss", Locale.getDefault())
val d1 = Date(time)
return format.format(d1)
}
fun saveFile(b: Boolean, source: ScreenRecordingAudioSource,callBack: CallBack?) {
Thread{
val newFile = File(settings.getPathData(), "ScreenRecord_${fileName}.mp4")
if (source == ScreenRecordingAudioSource.MIC_AND_INTERNAL || source == ScreenRecordingAudioSource.INTERNAL) {
if (saveFile != null) {
val mMuxer = ScreenRecordingMuxer(
MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4,
newFile.absolutePath,
saveFile?.absolutePath,
audioFile?.absolutePath
)
mMuxer.mux()
saveFile?.delete()
audioFile?.delete()
saveFile = null
audioFile = null
}
} else if (source == ScreenRecordingAudioSource.NONE || source == ScreenRecordingAudioSource.MIC) {
saveFile?.renameTo(newFile)
}
callBack?.startFileCommand(newFile.absolutePath)
}.start()
}
interface CallBack{
fun startFileCommand(path: String)
}
/**
 * Initialize the screen-recording parameters.
 */
private fun initRecorder(isMic: Boolean): Boolean {
Log.d(TAG, "initRecorder")
var result = true
val f = File(settings.getPathData())
if (!f.exists()) {
f.mkdir()
}
fileName = getFormatTime(System.currentTimeMillis())
saveFile = File(settings.getPathData(), "ScreenRecord_${fileName}.tmp")
saveFile?.apply {
if (exists()) {
delete()
}
}
mediaRecorder = MediaRecorder()
val width = getVideoSizeWidth()
val height = getVideoSizeHeight()
mediaRecorder?.apply {
setVideoSource(MediaRecorder.VideoSource.SURFACE)
setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
setVideoEncoder(MediaRecorder.VideoEncoder.H264)
if (isMic && source == ScreenRecordingAudioSource.MIC) {
setAudioSource(MediaRecorder.AudioSource.MIC)
setAudioEncoder(MediaRecorder.AudioEncoder.HE_AAC)
setAudioChannels(ScreenMediaRecorder.TOTAL_NUM_TRACKS)
setAudioEncodingBitRate(ScreenMediaRecorder.AUDIO_BIT_RATE)
setAudioSamplingRate(ScreenMediaRecorder.AUDIO_SAMPLE_RATE)
}
setOutputFile(saveFile!!.absolutePath)
setVideoSize(width, height)
setRecorderResolution(settings.getResolutionData())
setCaptureRate(VIDEO_CAPTURE_RATE)
setVideoFrameRate(VIDEO_FRAME_RATE)
try {
prepare()
virtualDisplay = mediaProjection?.createVirtualDisplay(
"MainScreen", width, height, displayMetrics.densityDpi,
DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, surface, null, null
)
} catch (e: Exception) {
Log.e(TAG, "IllegalStateException preparing MediaRecorder: ${e.message}")
e.printStackTrace()
result = false
}
}
return result
}
/**
 * Release resources when the Service exits.
 */
fun clearAll() {
mediaRecorder?.release()
mediaRecorder = null
virtualDisplay?.release()
virtualDisplay = null
mediaProjection?.stop()
mediaProjection = null
}
private fun getVideoSizeWidth(): Int {
if (settings.getResolutionData() == Settings.RESOLUTION_1920_1080) {
return VIDEO_SIZE_MAX_WIDTH_1920
} else if (settings.getResolutionData() == Settings.RESOLUTION_1280_720) {
return VIDEO_SIZE_MAX_WIDTH_1280
}
return VIDEO_SIZE_MAX_WIDTH_1280
}
private fun getVideoSizeHeight(): Int {
if (settings.getResolutionData() == Settings.RESOLUTION_1920_1080) {
return VIDEO_SIZE_MAX_HEIGHT_1080
} else if (settings.getResolutionData() == Settings.RESOLUTION_1280_720) {
return VIDEO_SIZE_MAX_HEIGHT_720
}
return VIDEO_SIZE_MAX_HEIGHT_720
}
/**
 * Set the recording resolution (video encoding bit rate).
 */
private fun setRecorderResolution(string: String) {
if (string == Settings.RESOLUTION_1920_1080) {
mediaRecorder?.setVideoEncodingBitRate(VIDEO_TIMES * VIDEO_SIZE_MAX_WIDTH_1920 * VIDEO_SIZE_MAX_HEIGHT_1080)
} else {
mediaRecorder?.setVideoEncodingBitRate(VIDEO_TIMES * VIDEO_SIZE_MAX_WIDTH_1280 * VIDEO_SIZE_MAX_HEIGHT_720)
}
}
companion object {
private const val VIDEO_TIMES = 5
private const val VIDEO_CAPTURE_RATE = 30.0
private const val VIDEO_FRAME_RATE = 30
private const val VIDEO_SIZE_MAX_WIDTH_1920 = 1920
private const val VIDEO_SIZE_MAX_HEIGHT_1080 = 1080
private const val VIDEO_SIZE_MAX_WIDTH_1280 = 1280
private const val VIDEO_SIZE_MAX_HEIGHT_720 = 720
private const val TAG = "ScreenRecordHelper"
}
interface OnVideoRecordListener {
fun onBeforeRecord()
fun onStartRecord()
fun onPauseRecord()
fun onCancelRecord()
fun onEndRecord()
}
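One easy trap in the file-naming code above: SimpleDateFormat patterns are case-sensitive. `MM` means month while `mm` means minutes, and `SS` means milliseconds while `ss` means seconds, so a pattern like `"yyyyMMddHHMMSS"` silently repeats the month and appends milliseconds instead of minutes and seconds. A small runnable check (time zone pinned to UTC so the output is deterministic):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TimestampDemo {
    // Format an epoch-millis timestamp with the given pattern, pinned to UTC.
    static String format(String pattern, long millis) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern, Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(new Date(millis));
    }

    public static void main(String[] args) {
        long t = 1_000_000_000_000L; // 2001-09-09 01:46:40.000 UTC
        // Intended: year month day hour minute second.
        System.out.println(format("yyyyMMddHHmmss", t)); // prints 20010909014640
        // Buggy: month appears twice and SS formats milliseconds.
        System.out.println(format("yyyyMMddHHMMSS", t)); // prints 20010909010900
    }
}
```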
- Audio-recording helper class ScreenInternalAudioRecorder
/**
 * Records internal (system) audio.
 * AudioRecord does not save its data by itself:
 * a dedicated thread must be started to drain and store the audio data.
 */
public class ScreenInternalAudioRecorder {
private static final String TAG = "recorder";
private static final int TIMEOUT = 500;
private static final float MIC_VOLUME_SCALE = 1.4f;
private AudioRecord mAudioRecord;
private AudioRecord mAudioRecordMic;
private final Config mConfig = new Config();
private Thread mThread;
//MediaCodec gives access to the low-level media codecs (encoder/decoder components).
private MediaCodec mCodec;
private long mPresentationTime;
private long mTotalBytes;
private MediaMuxer mMuxer;
private boolean mMic;
private int mTrackId = -1;
//Configuration for capturing audio played by other applications.
//When capturing playback from other apps (and your own), you only get the mix of streams played through players such as AudioTrack or MediaPlayer.
private final AudioPlaybackCaptureConfiguration playbackConfig;
public ScreenInternalAudioRecorder(MediaProjection mp, boolean includeMicInput) {
mMic = includeMicInput;
playbackConfig = new AudioPlaybackCaptureConfiguration.Builder(mp)
.addMatchingUsage(AudioAttributes.USAGE_MEDIA)
.addMatchingUsage(AudioAttributes.USAGE_UNKNOWN)
.addMatchingUsage(AudioAttributes.USAGE_GAME)
.build();
}
/**
 * Audio recording configuration
 */
public static class Config {
public int channelOutMask = AudioFormat.CHANNEL_OUT_MONO;
public int channelInMask = AudioFormat.CHANNEL_IN_MONO;
public int encoding = AudioFormat.ENCODING_PCM_16BIT;
public int sampleRate = 44100;
public int bitRate = 196000;
public int bufferSizeBytes = 1 << 17;
public boolean privileged = true;
public boolean legacy_app_loopback = false;
@Override
public String toString() {
return "channelMask=" + channelOutMask
+ "\n encoding=" + encoding
+ "\n sampleRate=" + sampleRate
+ "\n bufferSize=" + bufferSizeBytes
+ "\n privileged=" + privileged
+ "\n legacy app loopback=" + legacy_app_loopback;
}
}
}
public void setupSimple(String outFile, Boolean isMic) throws IOException {
mMic = isMic;
//Returns the minimum buffer size required to create an AudioRecord successfully
int size = AudioRecord.getMinBufferSize(
mConfig.sampleRate, mConfig.channelInMask,
mConfig.encoding) * 2;
Log.d(TAG, "ScreenInternalAudioRecorder audio buffer size: " + size);
mMuxer = new MediaMuxer(outFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
//Audio data format
AudioFormat format = new AudioFormat.Builder()
.setEncoding(mConfig.encoding)
.setSampleRate(mConfig.sampleRate)
.setChannelMask(mConfig.channelOutMask)
.build();
mAudioRecord = new AudioRecord.Builder()
.setAudioFormat(format)
.setAudioPlaybackCaptureConfig(playbackConfig)
.build();
if (mMic) {
/*
 * Typical AudioRecord constructor parameters:
 * audioSource - data source, usually MediaRecorder.AudioSource.MIC
 * sampleRateInHz - sample rate, typically 44100
 * channelConfig - channel config, typically AudioFormat.CHANNEL_IN_MONO
 * audioFormat - encoding, here AudioFormat.ENCODING_PCM_16BIT
 * bufferSizeInBytes - buffer size, obtained via AudioRecord.getMinBufferSize
 */
mAudioRecordMic = new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION,
mConfig.sampleRate, mConfig.channelInMask, mConfig.encoding, size);
}
//Create the preferred encoder for the given MIME type; MIMETYPE_AUDIO_AAC is a compressed format
mCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
//MediaFormat encapsulates the description of a media data format
MediaFormat medFormat = MediaFormat.createAudioFormat(
MediaFormat.MIMETYPE_AUDIO_AAC, mConfig.sampleRate, 1);
medFormat.setInteger(MediaFormat.KEY_AAC_PROFILE,
MediaCodecInfo.CodecProfileLevel.AACObjectLC);
medFormat.setInteger(MediaFormat.KEY_BIT_RATE, mConfig.bitRate);
medFormat.setInteger(MediaFormat.KEY_PCM_ENCODING, mConfig.encoding);
//Configure the codec for encoding
mCodec.configure(medFormat,
null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
//Drain thread: read PCM (mixing in the mic if enabled) and feed the encoder.
mThread = new Thread(() -> {
short[] bufferInternal = null;
short[] bufferMic = null;
byte[] buffer = null;
if (mMic) {
bufferInternal = new short[size / 2];
bufferMic = new short[size / 2];
} else {
buffer = new byte[size];
}
while (true) {
int readBytes = 0;
int readShortsInternal = 0;
int readShortsMic = 0;
if (mMic && bufferInternal != null) {
readShortsInternal = mAudioRecord.read(bufferInternal, 0,
bufferInternal.length);
readShortsMic = mAudioRecordMic.read(bufferMic, 0, bufferMic.length);
// modify the volume
bufferMic = scaleValues(bufferMic,
readShortsMic, MIC_VOLUME_SCALE);
readBytes = Math.min(readShortsInternal, readShortsMic) * 2;
buffer = addAndConvertBuffers(bufferInternal, readShortsInternal, bufferMic,
readShortsMic);
} else {
readBytes = mAudioRecord.read(buffer, 0, buffer.length);
}
//exit the loop when at end of stream
if (readBytes < 0) {
Log.e(TAG, "ScreenInternalAudioRecorder read error " + readBytes +
", shorts internal: " + readShortsInternal +
", shorts mic: " + readShortsMic);
break;
}
encode(buffer, readBytes);
}
endStream();
});
}
private short[] scaleValues(short[] buff, int len, float scale) {
for (int i = 0; i < len; i++) {
int oldValue = buff[i];
int newValue = (int) (buff[i] * scale);
if (newValue > Short.MAX_VALUE) {
newValue = Short.MAX_VALUE;
} else if (newValue < Short.MIN_VALUE) {
newValue = Short.MIN_VALUE;
}
buff[i] = (short) (newValue);
}
return buff;
}
private byte[] addAndConvertBuffers(short[] a1, int a1Limit, short[] a2, int a2Limit) {
int size = Math.max(a1Limit, a2Limit);
if (size < 0) return new byte[0];
byte[] buff = new byte[size * 2];
for (int i = 0; i < size; i++) {
int sum;
if (i >= a1Limit) {
sum = a2[i];
} else if (i >= a2Limit) {
sum = a1[i];
} else {
sum = (int) a1[i] + (int) a2[i];
}
if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
int byteIndex = i * 2;
//bitwise AND keeps the low 8 bits (little-endian: low byte first)
buff[byteIndex] = (byte) (sum & 0xff);
//arithmetic shift right by 8 selects the high byte
buff[byteIndex + 1] = (byte) ((sum >> 8) & 0xff);
}
return buff;
}
//Encode PCM input into AAC
private void encode(byte[] buffer, int readBytes) {
int offset = 0;
while (readBytes > 0) {
int totalBytesRead = 0;
int bufferIndex = mCodec.dequeueInputBuffer(TIMEOUT);
if (bufferIndex < 0) {
writeOutput();
return;
}
ByteBuffer buff = mCodec.getInputBuffer(bufferIndex);
buff.clear();
int bufferSize = buff.capacity();
int bytesToRead = Math.min(readBytes, bufferSize);
totalBytesRead += bytesToRead;
readBytes -= bytesToRead;
buff.put(buffer, offset, bytesToRead);
offset += bytesToRead;
mCodec.queueInputBuffer(bufferIndex, 0, bytesToRead, mPresentationTime, 0);
mTotalBytes += totalBytesRead;
mPresentationTime = 1000000L * (mTotalBytes / 2) / mConfig.sampleRate;
writeOutput();
}
}
private void endStream() {
int bufferIndex = mCodec.dequeueInputBuffer(TIMEOUT);
mCodec.queueInputBuffer(bufferIndex, 0, 0, mPresentationTime,
MediaCodec.BUFFER_FLAG_END_OF_STREAM);
writeOutput();
}
private void writeOutput() {
while (true) {
MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
int bufferIndex = mCodec.dequeueOutputBuffer(bufferInfo, TIMEOUT);
if (bufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
mTrackId = mMuxer.addTrack(mCodec.getOutputFormat());
mMuxer.start();
continue;
}
if (bufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
break;
}
if (mTrackId < 0) return;
ByteBuffer buff = mCodec.getOutputBuffer(bufferIndex);
if (!((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0
&& bufferInfo.size != 0)) {
mMuxer.writeSampleData(mTrackId, buff, bufferInfo);
}
mCodec.releaseOutputBuffer(bufferIndex, false);
}
}
/**
 * Start recording.
 *
 * @throws IllegalStateException if recording fails to initialize
 */
public void start() throws IllegalStateException {
if (mThread != null) {
Log.e(TAG, "ScreenInternalAudioRecorder a recording is being done in parallel or stop is not called");
}
mAudioRecord.startRecording();
if (mMic) mAudioRecordMic.startRecording();
mCodec.start();
if (mAudioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
throw new IllegalStateException("ScreenInternalAudioRecorder Audio recording failed to start");
}
mThread.start();
}
/**
 * End recording.
 */
public void end() {
mAudioRecord.stop();
if (mMic) {
mAudioRecordMic.stop();
}
mAudioRecord.release();
if (mMic) {
mAudioRecordMic.release();
}
try {
mThread.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
mCodec.stop();
mCodec.release();
mMuxer.stop();
mMuxer.release();
mThread = null;
}
}
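The signal processing in ScreenInternalAudioRecorder is ordinary 16-bit PCM arithmetic and can be verified off-device. The sketch below reproduces the same three pieces of math: scaling with clipping (as in scaleValues), summing two streams with clipping and little-endian byte packing (as in addAndConvertBuffers, simplified here to equal-length buffers), and the presentation-time formula used in encode():

```java
public class PcmMixDemo {
    // Scale samples and clip to the 16-bit range, as scaleValues() does.
    static short[] scaleValues(short[] buff, int len, float scale) {
        for (int i = 0; i < len; i++) {
            int v = (int) (buff[i] * scale);
            if (v > Short.MAX_VALUE) v = Short.MAX_VALUE;
            if (v < Short.MIN_VALUE) v = Short.MIN_VALUE;
            buff[i] = (short) v;
        }
        return buff;
    }

    // Sum two 16-bit streams with clipping and emit little-endian bytes,
    // as addAndConvertBuffers() does (simplified: equal-length inputs).
    static byte[] mixToBytes(short[] a, short[] b) {
        byte[] out = new byte[a.length * 2];
        for (int i = 0; i < a.length; i++) {
            int sum = a[i] + b[i];
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i * 2] = (byte) (sum & 0xff);             // low byte first
            out[i * 2 + 1] = (byte) ((sum >> 8) & 0xff);  // then high byte
        }
        return out;
    }

    // Presentation time used by encode(): bytes of 16-bit mono PCM -> microseconds.
    static long presentationTimeUs(long totalBytes, int sampleRate) {
        return 1_000_000L * (totalBytes / 2) / sampleRate;
    }
}
```

Clipping at Short.MAX_VALUE/MIN_VALUE is what keeps the summed streams from wrapping around on integer overflow, which would otherwise produce loud artifacts.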
- Audio/video muxing helper class ScreenRecordingMuxer
/**
 * Muxes the audio and video tracks into one file.
 */
public class ScreenRecordingMuxer {
// size of a memory page for cache coherency
private static final int BUFFER_SIZE = 1024 * 4096;
private String[] mFiles;
private String mOutFile;
private int mFormat;
private ArrayMap<Pair<MediaExtractor, Integer>, Integer> mExtractorIndexToMuxerIndex
= new ArrayMap<>();
private ArrayList<MediaExtractor> mExtractors = new ArrayList<>();
private static String TAG = "ScreenRecordingMuxer";
public ScreenRecordingMuxer(int format, String outfileName,
String... inputFileNames) {
mFiles = inputFileNames;
mOutFile = outfileName;
mFormat = format;
Log.d(TAG, "out: " + mOutFile + " , in: " + mFiles[0]);
}
/**
 * Run this on a background thread!
 */
@SuppressLint("WrongConstant")
public void mux() throws IOException {
MediaMuxer muxer = new MediaMuxer(mOutFile, mFormat);
// Add extractors
for (String file : mFiles) {
MediaExtractor extractor = new MediaExtractor();
try {
extractor.setDataSource(file);
} catch (IOException e) {
Log.e(TAG, "error creating extractor: " + file);
e.printStackTrace();
continue;
}
Log.d(TAG, file + " track count: " + extractor.getTrackCount());
mExtractors.add(extractor);
for (int i = 0; i < extractor.getTrackCount(); i++) {
int muxId = muxer.addTrack(extractor.getTrackFormat(i));
Log.d(TAG, "created extractor format" + extractor.getTrackFormat(i).toString());
mExtractorIndexToMuxerIndex.put(Pair.create(extractor, i), muxId);
}
}
muxer.start();
for (Pair<MediaExtractor, Integer> pair : mExtractorIndexToMuxerIndex.keySet()) {
MediaExtractor extractor = pair.first;
extractor.selectTrack(pair.second);
int muxId = mExtractorIndexToMuxerIndex.get(pair);
Log.d(TAG, "track format: " + extractor.getTrackFormat(pair.second));
extractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
ByteBuffer buffer = ByteBuffer.allocate(BUFFER_SIZE);
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int offset;
while (true) {
offset = buffer.arrayOffset();
info.size = extractor.readSampleData(buffer, offset);
if (info.size < 0) break;
info.presentationTimeUs = extractor.getSampleTime();
info.flags = extractor.getSampleFlags();
muxer.writeSampleData(muxId, buffer, info);
extractor.advance();
}
}
for (MediaExtractor extractor : mExtractors) {
extractor.release();
}
muxer.stop();
muxer.release();
}
}
- Preferences helper class Settings
class Settings(context: Context?) {
companion object {
private lateinit var settings: SharedPreferences
private const val DATA = "screen_record_settings"
private const val WARNING_DONT_SHOW= "warning_dont_show"
private const val RESOLUTION_DATA= "resolution_data"
private const val SAVE_PATH= "save_path"
private const val VIDEO_SET= "video_set"
private const val SYSTEM_VOLUME= "system_volume"
private const val MIC= "mic"
private const val AUDIO_SET= "audio_set"
private const val SERVICE_RUNNING = "service_running"
private const val SP_KEY_FILE_SERIAL_NUMBER = "sp_key_file_serial_number"
private val DEFAULT_SAVE_PATH = PathUtils.getExternalStoragePath()+"/Screen Record"
private var instance : Settings? = null
fun getInstance(context: Context): Settings {
if (instance == null) // NOT thread safe!
instance = Settings(context)
return instance!!
}
//The unit is MB
const val LOW_SPACE_STANDARD :Long = 1024
const val CANT_RECORD_STANDARD :Long = 200
const val RESOLUTION_1280_720 = "1280x720p"
const val RESOLUTION_1920_1080 = "1920x1080p"
private const val TAG="Settings"
}
var fileName: String = ""
init {
Log.d(TAG, "Settings: init")
settings = context!!.getSharedPreferences(DATA, 0)
}
fun getSystemAudio(): Boolean {
return settings.getBoolean(SYSTEM_VOLUME, true)
}
fun setSystemAudio(boolean: Boolean){
settings.edit()
.putBoolean(SYSTEM_VOLUME, boolean)
.apply()
}
fun getMic(): Boolean {
return settings.getBoolean(MIC, false)
}
fun setMic(boolean: Boolean){
settings.edit()
.putBoolean(MIC, boolean)
.apply()
}
fun saveWarningData(boolean: Boolean) {
settings.edit()
.putBoolean(WARNING_DONT_SHOW, boolean)
.apply()
}
fun getWarningData():Boolean {
return settings.getBoolean(WARNING_DONT_SHOW, false)
}
fun savePathData(string: String) {
settings.edit()
.putString(SAVE_PATH, string)
.apply()
}
fun getPathData():String {
return settings.getString(SAVE_PATH, DEFAULT_SAVE_PATH)!!
}
fun setRunningState(b:Boolean) {
settings.edit()
.putBoolean(SERVICE_RUNNING, b)
.apply()
}
fun getRunningState():Boolean {
return settings.getBoolean(SERVICE_RUNNING, false)
}
fun saveResolutionData(string: String) {
settings.edit()
.putString(RESOLUTION_DATA, string)
.apply()
}
fun getResolutionData():String {
return settings.getString(RESOLUTION_DATA, RESOLUTION_1920_1080)!!
}
//return unit is MB
fun getRemainSpace(): Long {
val external: File = Environment.getExternalStorageDirectory()
return external.freeSpace / 1000000
}
}
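As the "NOT thread safe!" comment in getInstance admits, two threads calling it concurrently can each construct a Settings instance. A common fix is double-checked locking; here is a minimal sketch of the pattern in plain Java (the class is a stand-in, since the Android Context parameter is irrelevant to the pattern itself):

```java
public class LazySingleton {
    // volatile ensures a fully constructed instance is visible to all threads.
    private static volatile LazySingleton instance;

    private LazySingleton() { }

    public static LazySingleton getInstance() {
        LazySingleton local = instance;
        if (local == null) {                      // first check, no lock
            synchronized (LazySingleton.class) {
                local = instance;
                if (local == null) {              // second check, under lock
                    local = instance = new LazySingleton();
                }
            }
        }
        return local;
    }
}
```

In Kotlin the same effect is typically obtained with a @Volatile backing field plus synchronized, or simply a companion-object val initialized by lazy.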