ffmpeg gpu
ls .. | xargs -I @ ffmpeg -hwaccel cuvid -i '../@' -acodec copy -vcodec libx264 '@'
Duration: 00:45:30.22, start: 0.000000, bitrate: 2446 kb/s
Stream #0:0[0x1](und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 2348 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 93 kb/s (default)
ffmpeg -i 12.mp4 -c:a copy -c:v hevc -b:v 1200k h.mp4
-r 25
-s 1920x1080
-ss 00:00:05 -to 00:00:10
-ss 00:00:10 -t 00:05:00
-vf "hqdn3d"
-vf "eq=brightness=0.1:saturation=1.5"
-f concat -safe 0 -i filelist.txt -c copy
filelist.txt:
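The filelist.txt for the concat demuxer just lists the inputs, one per line (filenames are placeholders):
file 'part1.mp4'
file 'part2.mp4'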
-map 0
-c copy
-vf "crop=400:300:100:100"
-c:v libvvenc
H.266/VVC
--enable-libvvenc
-f rawvideo -pixel_format yuv420p out.yuv
Every time you re-encode you lose quality, and re-encoding twice loses more than re-encoding once. There is no benefit to not working from the original. If you tune the encoder for SSIM and then measure SSIM against the source, you get a rough idea of how much has been lost.
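A minimal sketch of that workflow with x264 and the ssim filter (filenames are placeholders):
# encode tuned for SSIM
ffmpeg -i original.mp4 -c:v libx264 -tune ssim -crf 23 -c:a copy reencoded.mp4
# compare the re-encode against the original; the SSIM score is printed to stderr
ffmpeg -i reencoded.mp4 -i original.mp4 -lavfi ssim -f null -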
#git clone https://code.videolan.org/videolan/x264.git
#git clone https://bitbucket.org/multicoreware/x265_git.git
#git clone https://github.com/fraunhoferhhi/vvenc
#git clone https://code.videolan.org/videolan/dav1d.git
#git clone https://github.com/mstorsjo/fdk-aac
#curl -O -L http://downloads.sourceforge.net/project/lame/lame/3.100/lame-3.100.tar.gz && tar xzf lame-3.100.tar.gz && rm lame-3.100.tar.gz
#curl -O -L https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 && tar xjf ffmpeg-snapshot.tar.bz2 && rm ffmpeg-snapshot.tar.bz2
#curl -O -L https://ffmpeg.org/releases/ffmpeg-7.0.2.tar.gz && tar xzf ffmpeg-7.0.2.tar.gz && rm ffmpeg-7.0.2.tar.gz
export d=$PWD
mkdir -p ffmpeg_build/lib/pkgconfig && mkdir ffmpeg_build/bin
cd vvenc && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="$d/ffmpeg_build" -DENABLE_SHARED:bool=on ..
make -j 8
make install
cd ../..
cd x264
PKG_CONFIG_PATH="$d/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$d/ffmpeg_build" --bindir="$d/ffmpeg_build/bin" --enable-shared --enable-debug
make -j 8
make install
cd ..
cd x265_git && mkdir builds && cd builds
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="$d/ffmpeg_build" -DENABLE_SHARED:bool=on ../source
make -j 8
make install
cd ../..
cd fdk-aac
autoreconf -fiv
./configure --prefix="$d/ffmpeg_build" --enable-debug
make -j 8
make install
cd ..
tar xjf ffmpeg-snapshot.tar.bz2
cd ffmpeg
PATH="$d/ffmpeg_build/bin:$PATH" PKG_CONFIG_PATH="$d/ffmpeg_build/lib/pkgconfig" ./configure \
--prefix="$d/ffmpeg_build" \
--pkg-config-flags="--static" \
--extra-cflags="-I$d/ffmpeg_build/include" \
--extra-ldflags="-L$d/ffmpeg_build/lib" \
--extra-libs=-lpthread \
--extra-libs=-lm \
--bindir="$d/ffmpeg_build/bin" \
--enable-gpl \
--enable-libx264 \
--enable-libx265 \
--enable-libvvenc \
--enable-nonfree \
--enable-libfdk_aac \
--enable-debug \
--disable-stripping \
--enable-shared
make -j 8
make install
cd ..
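Because this is a shared build (--enable-shared) into a non-standard prefix, the freshly built binary may need the library search path set before it will run; a minimal sketch:
export LD_LIBRARY_PATH="$d/ffmpeg_build/lib:$LD_LIBRARY_PATH"   # use DYLD_LIBRARY_PATH on macOS
"$d/ffmpeg_build/bin/ffmpeg" -version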
brew install libxml2 ffmpeg nasm # macOS-only; if on Linux, use your native package manager. Package names may differ.
git clone https://github.com/fraunhoferhhi/vvenc
git clone https://github.com/fraunhoferhhi/vvdec
git clone https://github.com/mstorsjo/fdk-aac
cd vvenc && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..
sudo cmake --build . --target install -j $(nproc)
cd ../../
cd vvdec && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..
sudo cmake --build . --target install -j $(nproc)
cd ../../
cd fdk-aac && ./autogen.sh && ./configure
make -j
sudo make install
cd ../
git clone --depth=1 https://github.com/MartinEesmaa/FFmpeg-VVC
cd FFmpeg-VVC
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
./configure --enable-libfdk-aac --enable-libvvenc --enable-libvvdec --enable-static --enable-pic --enable-libxml2 --pkg-config-flags="--static" --enable-sdl2
make -j
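A quick, hedged check that the VVC encoder and decoder actually made it into this build:
./ffmpeg -hide_banner -encoders | grep -i vvenc
./ffmpeg -hide_banner -decoders | grep -i vvdec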
vvencapp -i input.y4m --preset slow --qpa on --qp 20 -c yuv420_10 -o output.266
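To sanity-check the bitstream, the vvdec sample app built above should decode it back to YUV (a sketch; option names as in the vvdec README):
vvdecapp -b output.266 -o decoded.yuv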
./x264 --input-res 1920x1080 input_1920x1080.yuv -o a.264
This article covers:
- Installing FFmpeg on Linux
- Identifying video formats and transcoding them from the command line
- Using an Nvidia GPU to accelerate video transcoding on Linux
Building and installing FFmpeg
The FFmpeg site at https://ffmpeg.org/download.html offers prebuilt packages for Ubuntu/Debian; other Linux distributions have to build from source. A custom build is also required for GPU hardware acceleration, so this section walks through compiling and installing FFmpeg from source (on RHEL/CentOS).
Install the build dependencies
yum install autoconf automake bzip2 cmake freetype-devel gcc gcc-c++ git libtool make mercurial pkgconfig zlib-devel
Preparation
Create a ffmpeg_sources directory under $HOME.
Build and install the dependency libraries
Most of the libraries in this section are effectively required; installing all of them is recommended.
nasm
An assembler needed to build some of the dependency libraries.
cd ~/ffmpeg_sources
curl -O -L http://www.nasm.us/pub/nasm/releasebuilds/2.13.02/nasm-2.13.02.tar.bz2
tar xjvf nasm-2.13.02.tar.bz2
cd nasm-2.13.02
./autogen.sh
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin"
make
make install
yasm
Another assembler needed to build some of the dependency libraries.
cd ~/ffmpeg_sources
curl -O -L http://www.tortall.net/projects/yasm/releases/yasm-1.3.0.tar.gz
tar xzvf yasm-1.3.0.tar.gz
cd yasm-1.3.0
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin"
make
make install
libx264
The H.264 video encoder. You need this library to produce H.264 output, so it is effectively mandatory.
cd ~/ffmpeg_sources
git clone --depth 1 http://git.videolan.org/git/x264
cd x264
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static
make
make install
libx265
The H.265/HEVC video encoder.
If you do not need this encoder, skip it and drop --enable-libx265 from ffmpeg's configure command.
cd ~/ffmpeg_sources
hg clone https://bitbucket.org/multicoreware/x265
cd ~/ffmpeg_sources/x265/build/linux
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DENABLE_SHARED:bool=off ../../source
make
make install
libfdk_aac
The AAC audio encoder; required.
cd ~/ffmpeg_sources
git clone --depth 1 --branch v0.1.6 https://github.com/mstorsjo/fdk-aac.git
cd fdk-aac
autoreconf -fiv
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
libmp3lame
The MP3 audio encoder; required.
cd ~/ffmpeg_sources
curl -O -L http://downloads.sourceforge.net/project/lame/lame/3.100/lame-3.100.tar.gz
tar xzvf lame-3.100.tar.gz
cd lame-3.100
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --disable-shared --enable-nasm
make
make install
libopus
The Opus audio encoder.
If you do not need this encoder, skip it and drop --enable-libopus from ffmpeg's configure command.
cd ~/ffmpeg_sources
curl -O -L https://archive.mozilla.org/pub/opus/opus-1.2.1.tar.gz
tar xzvf opus-1.2.1.tar.gz
cd opus-1.2.1
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
libogg
Required by libvorbis.
cd ~/ffmpeg_sources
curl -O -L http://downloads.xiph.org/releases/ogg/libogg-1.3.3.tar.gz
tar xzvf libogg-1.3.3.tar.gz
cd libogg-1.3.3
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
libvorbis
The Vorbis audio encoder.
If you do not need this encoder, skip it and drop --enable-libvorbis from ffmpeg's configure command.
cd ~/ffmpeg_sources
curl -O -L http://downloads.xiph.org/releases/vorbis/libvorbis-1.3.5.tar.gz
tar xzvf libvorbis-1.3.5.tar.gz
cd libvorbis-1.3.5
./configure --prefix="$HOME/ffmpeg_build" --with-ogg="$HOME/ffmpeg_build" --disable-shared
make
make install
libvpx
The VP8/VP9 video encoder/decoder.
If you do not need it, skip it and drop --enable-libvpx from ffmpeg's configure command.
cd ~/ffmpeg_sources
git clone --depth 1 https://github.com/webmproject/libvpx.git
cd libvpx
./configure --prefix="$HOME/ffmpeg_build" --disable-examples --disable-unit-tests --enable-vp9-highbitdepth --as=yasm
make
make install
Build and install FFmpeg 3.3.8
cd ~/ffmpeg_sources
curl -O -L https://ffmpeg.org/releases/ffmpeg-3.3.8.tar.bz2
tar xjvf ffmpeg-3.3.8.tar.bz2
cd ffmpeg-3.3.8
PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
--prefix="$HOME/ffmpeg_build" \
--pkg-config-flags="--static" \
--extra-cflags="-I$HOME/ffmpeg_build/include" \
--extra-ldflags="-L$HOME/ffmpeg_build/lib" \
--extra-libs=-lpthread \
--extra-libs=-lm \
--bindir="$HOME/bin" \
--enable-gpl \
--enable-libfdk_aac \
--enable-libfreetype \
--enable-libmp3lame \
--enable-libopus \
--enable-libvorbis \
--enable-libvpx \
--enable-libx264 \
--enable-libx265 \
--enable-nonfree
make
make install
hash -r
Verify the installation
ffmpeg -h
Using FFmpeg
Inspecting video information
Use the ffprobe command to identify and print a video's stream information:
ffprobe -v error -show_streams -print_format json <input>
Printing the information as JSON makes it easy to parse programmatically; a sample follows:
{
"streams": [
{
"index": 0,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High",
"codec_type": "video",
"codec_time_base": "61127/3668400",
"codec_tag_string": "avc1",
"codec_tag": "0x31637661",
"width": 1920,
"height": 1080,
"coded_width": 1920,
"coded_height": 1080,
"has_b_frames": 0,
"sample_aspect_ratio": "0:1",
"display_aspect_ratio": "0:1",
"pix_fmt": "yuv420p",
"level": 40,
"color_range": "tv",
"color_space": "bt709",
"color_transfer": "bt709",
"color_primaries": "bt709",
"chroma_location": "left",
"refs": 1,
"is_avc": "true",
"nal_length_size": "4",
"r_frame_rate": "30/1",
"avg_frame_rate": "1834200/61127",
"time_base": "1/600",
"start_pts": 0,
"start_time": "0.000000",
"duration_ts": 61127,
"duration": "101.878333",
"bit_rate": "16279946",
"bits_per_raw_sample": "8",
"nb_frames": "3057",
"disposition": {
"default": 1,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0
},
"tags": {
"rotate": "90",
"creation_time": "2018-08-09T09:13:33.000000Z",
"language": "und",
"handler_name": "Core Media Data Handler",
"encoder": "H.264"
},
"side_data_list": [
{
"side_data_type": "Display Matrix",
"displaymatrix": "\n00000000: 0 65536 0\n00000001: -65536 0 0\n00000002: 70778880 0 1073741824\n",
"rotation": -90
}
]
},
{
"index": 1,
"codec_name": "aac",
"codec_long_name": "AAC (Advanced Audio Coding)",
"profile": "LC",
"codec_type": "audio",
"codec_time_base": "1/44100",
"codec_tag_string": "mp4a",
"codec_tag": "0x6134706d",
"sample_fmt": "fltp",
"sample_rate": "44100",
"channels": 1,
"channel_layout": "mono",
"bits_per_sample": 0,
"r_frame_rate": "0/0",
"avg_frame_rate": "0/0",
"time_base": "1/44100",
"start_pts": 0,
"start_time": "0.000000",
"duration_ts": 4492835,
"duration": "101.878345",
"bit_rate": "91595",
"max_bit_rate": "96000",
"nb_frames": "4390",
"disposition": {
"default": 1,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0
},
"tags": {
"creation_time": "2018-08-09T09:13:33.000000Z",
"language": "und",
"handler_name": "Core Media Data Handler"
}
},
{
"index": 2,
"codec_type": "data",
"codec_tag_string": "mebx",
"codec_tag": "0x7862656d",
"r_frame_rate": "0/0",
"avg_frame_rate": "0/0",
"time_base": "1/600",
"start_pts": 0,
"start_time": "0.000000",
"duration_ts": 61127,
"duration": "101.878333",
"bit_rate": "119",
"nb_frames": "17",
"disposition": {
"default": 1,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0
},
"tags": {
"creation_time": "2018-08-09T09:13:33.000000Z",
"language": "und",
"handler_name": "Core Media Data Handler"
}
},
{
"index": 3,
"codec_type": "data",
"codec_tag_string": "mebx",
"codec_tag": "0x7862656d",
"r_frame_rate": "0/0",
"avg_frame_rate": "0/0",
"time_base": "1/600",
"start_pts": 0,
"start_time": "0.000000",
"duration_ts": 61127,
"duration": "101.878333",
"nb_frames": "1",
"disposition": {
"default": 1,
"dub": 0,
"original": 0,
"comment": 0,
"lyrics": 0,
"karaoke": 0,
"forced": 0,
"hearing_impaired": 0,
"visual_impaired": 0,
"clean_effects": 0,
"attached_pic": 0,
"timed_thumbnails": 0
},
"tags": {
"creation_time": "2018-08-09T09:13:33.000000Z",
"language": "und",
"handler_name": "Core Media Data Handler"
}
}
]
}
The probe returns four streams in total: stream 0 is the video, stream 1 is the audio, and streams 2 and 3 are auxiliary data of no practical use.
To analyze only the video or audio streams, add -select_streams v or -select_streams a and only the matching streams will be reported.
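For scripted use it is often handier to pull out only a few fields rather than parse the full JSON; a hedged sketch using -select_streams and -show_entries (filename is a placeholder):
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height,avg_frame_rate -of json input.mp4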
Transcoding video
ffmpeg -i <input> -c:v libx264 -b:v 2048k -vf scale=1280:-1 -y <output>
The command above transcodes the input into H.264-encoded video.
- -c:v selects the encoder; list the available encoders with ffmpeg -codecs
- -vf scale sets the output width and height; a height of -1 keeps the aspect ratio
- -b:v sets the output video bitrate, i.e. the number of bits per second of output video
- Other parameters supported by libx264 can be listed with ffmpeg -h encoder=libx264; the same kind of query works for any other target encoder (a constant-quality variant of the command above is sketched below)
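When a fixed bitrate is not required, constant-quality encoding usually gives better results; a hedged variant (filenames are placeholders):
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium -vf scale=1280:-2 -c:a copy -y output.mp4   # -crf 23 targets constant quality; -2 keeps the scaled height even, as yuv420p requires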
Transcoding with an Nvidia GPU
Now for the main event. Documentation on this is fairly scarce, and it took some effort to get it working.
CUDA
CUDA is Nvidia's GPU computing toolkit. It lets programmers drive the GPU on Nvidia cards for all kinds of work, including video encoding and decoding.
Installing CUDA
First, verify that the graphics driver is installed correctly:
nvidia-smi
If the driver is working, this command prints the card model, driver version, VRAM and GPU utilization, and so on. Installing the driver itself is out of scope here; consult other references.
Download the release package for your platform from the CUDA site at https://developer.nvidia.com/cuda-downloads. Here I picked the CentOS 7 rpm, cuda-repo-rhel7-9-2-local-9.2.148-1.x86_64.rpm.
Install it with:
rpm -i cuda-repo-rhel7-9-2-local-9.2.148-1.x86_64.rpm
yum clean all
yum install cuda
This pulls in roughly ninety dependency packages. Check the report at the end of the installation: on my first attempt one package failed for unknown reasons, and I had to yum install it separately before it succeeded.
Verify the installation
/usr/local/cuda-9.2/bin/nvcc -V
If the installation succeeded, it prints something like:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148
Rebuilding ffmpeg
For ffmpeg to use the GPU codecs that CUDA provides, it must be rebuilt so that it can call into CUDA via dynamic linking.
First build and install the nv-codec-headers library:
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
make PREFIX="$HOME/ffmpeg_build" BINDDIR="$HOME/bin"
make install PREFIX="$HOME/ffmpeg_build" BINDDIR="$HOME/bin"
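A hedged way to confirm that configure will find these headers (the package installed above is named ffnvcodec):
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" pkg-config --modversion ffnvcodec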
Then go back to the ~/ffmpeg_sources/ffmpeg-3.3.8/ directory and run ffmpeg's configure, build, and install again.
Note how the configure flags differ from the previous run:
PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
--prefix="$HOME/ffmpeg_build" \
--pkg-config-flags="--static" \
--extra-cflags="-I$HOME/ffmpeg_build/include -I/usr/local/cuda/include" \
--extra-ldflags="-L$HOME/ffmpeg_build/lib -L/usr/local/cuda/lib64" \
--extra-libs=-lpthread \
--extra-libs=-lm \
--bindir="$HOME/bin" \
--enable-gpl \
--enable-libfdk_aac \
--enable-libfreetype \
--enable-libmp3lame \
--enable-libopus \
--enable-libvorbis \
--enable-libvpx \
--enable-libx264 \
--enable-libx265 \
--enable-nonfree \
--enable-cuda \
--enable-cuvid \
--enable-nvenc \
--enable-libnpp
make
make install
hash -r
Verify the installation
After reinstalling ffmpeg, list the supported hardware acceleration methods with ffmpeg -hwaccels:
Hardware acceleration methods:
cuvid
A new hardware acceleration method named cuvid has appeared; this is the GPU video codec acceleration provided by CUDA.
Next, list the GPU codecs that cuvid provides with ffmpeg -codecs | grep cuvid:
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_cuvid ) (encoders: libx264 libx264rgb h264_nvenc nvenc nvenc_h264 )
DEV.L. hevc H.265 / HEVC (High Efficiency Video Coding) (decoders: hevc hevc_cuvid ) (encoders: libx265 nvenc_hevc hevc_nvenc )
DEVIL. mjpeg Motion JPEG (decoders: mjpeg mjpeg_cuvid )
DEV.L. mpeg1video MPEG-1 video (decoders: mpeg1video mpeg1_cuvid )
DEV.L. mpeg2video MPEG-2 video (decoders: mpeg2video mpegvideo mpeg2_cuvid )
DEV.L. mpeg4 MPEG-4 part 2 (decoders: mpeg4 mpeg4_cuvid )
D.V.L. vc1 SMPTE VC-1 (decoders: vc1 vc1_cuvid )
DEV.L. vp8 On2 VP8 (decoders: vp8 libvpx vp8_cuvid ) (encoders: libvpx )
DEV.L. vp9 Google VP9 (decoders: vp9 libvpx-vp9 vp9_cuvid ) (encoders: libvpx-vp9 )
Everything tagged "cuvid" or "nvenc" is a CUDA-provided GPU codec.
As the list shows, we can now GPU-decode h264/hevc/mjpeg/mpeg1/mpeg2/mpeg4/vc1/vp8/vp9 and GPU-encode h264/hevc.
Transcoding with the GPU
The command for GPU transcoding differs from a software transcode. With CPU transcoding we can rely on ffmpeg to detect the input codec and pick a matching decoder, but it only auto-selects CPU decoders. To make ffmpeg use a GPU decoder, you must first identify the input codec with ffprobe and then name the corresponding GPU decoder on the command line (a scripted version of this two-step flow is sketched after the option notes below).
For example, to transcode an h264 source into h264 output at a given size and bitrate:
ffmpeg -hwaccel cuvid -c:v h264_cuvid -i <input> -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y <output>
- -hwaccel cuvid: use cuvid hardware acceleration
- -c:v h264_cuvid (before -i): decode the input with h264_cuvid
- -c:v h264_nvenc (after -i): encode the output with h264_nvenc
- -vf scale_npp=1280:-1: set the output width and height; note that this differs from the -vf scale=x:x used for software transcoding
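Putting the two steps together (probe the input codec, then pick the matching cuvid decoder), a sketch with placeholder filenames; it assumes the input codec has a *_cuvid decoder:
codec=$(ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=nw=1:nk=1 input.mp4)
ffmpeg -hwaccel cuvid -c:v "${codec}_cuvid" -i input.mp4 -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y output.mp4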
Running nvidia-smi during the transcode shows that ffmpeg really is using the GPU:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 62543 C ffmpeg 193MiB |
+-----------------------------------------------------------------------------+
GPU transcoding efficiency test
On a server with two Intel E5-2630 v3 CPUs and two Nvidia Tesla M4 cards, an h264 transcoding test gave the following results:
- GPU transcode, average time: 8 s
- CPU transcode, average time: 25 s
With parallel jobs, software transcoding gets relatively more efficient: with 3 transcodes running in parallel all 32 cores are saturated, and the numbers become:
- GPU transcode, average time: 8 s
- CPU transcode, average time: 18 s
Clearly the GPU transcode does not speed up under parallel load, which suggests one GPU can only run one transcode job at a time. So if the server has several cards, will ffmpeg spread parallel transcodes across multiple GPUs?
Unfortunately, no.
ffmpeg cannot distribute transcode jobs across GPUs by itself, but after some digging it turns out the GPU used by a job can be selected with the -hwaccel_device parameter.
Submitting jobs to different GPUs
ffmpeg -hwaccel cuvid -hwaccel_device 0 -c:v h264_cuvid -i <input> -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y <output>
ffmpeg -hwaccel cuvid -hwaccel_device 1 -c:v h264_cuvid -i <input> -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y <output>
- -hwaccel_device N: run the transcode on GPU number N
Now nvidia-smi shows:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 96931 C ffmpeg 193MiB |
| 1 96930 C ffmpeg 193MiB |
+-----------------------------------------------------------------------------+
Parallel GPU transcoding works!
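A small wrapper can spread a batch of jobs across the two cards; a sketch assuming two GPUs and h264 inputs (paths are placeholders):
i=0
for f in /data/input/*.mp4; do
  gpu=$(( i % 2 ))   # alternate between GPU 0 and GPU 1
  ffmpeg -hwaccel cuvid -hwaccel_device $gpu -c:v h264_cuvid -i "$f" \
         -c:v h264_nvenc -b:v 2048k -vf scale_npp=1280:-1 -y "/data/output/$(basename "$f")" &
  i=$(( i + 1 ))
done
wait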
With the server fully loaded, GPU versus CPU transcoding now looks like this:
- GPU transcode, average time: 4 s
- CPU transcode, average time: 18 s
The GPU is about 4.5 times as efficient as the CPU.
Author: kelgon
Link: https://www.jianshu.com/p/59da3d350488
Source: Jianshu (简书)
Copyright belongs to the author. Commercial reuse requires the author's permission; non-commercial reuse must credit the source.