Reading a Hikvision PS-encapsulated video stream (HEVC/H.265) with FFmpeg

I'm using a Hikvision PTZ camera. The default snapshot API has fairly high latency, while the video stream's latency is much lower, so I wanted to receive the video stream and convert it to images myself.

What follows is just a record of what I ran into while implementing this.

The connection procedure follows the official documentation. I use NET_DVR_RealPlay_V40 to receive the video stream, which the docs say is transported as a PS (Program Stream). If you pass NET_DVR_RealPlay_V40 a callback, the raw stream is delivered to that callback; otherwise the video is rendered directly in the window set via NET_DVR_PREVIEWINFO.hPlayWnd.
Here I process the raw stream myself. The callback is void CALLBACK g_RealDataCallBack_V30(LONG lRealHandle, DWORD dwDataType, BYTE* pBuffer, DWORD dwBufSize, void* dwUser); the official sample handles the data with the PlayM4_ APIs, but to convert it to images I have to parse the stream myself.
The raw data lives in the memory block starting at pBuffer with size dwBufSize.
On the first callback dwDataType is 1, the system header, which is of no use for image conversion. On the second callback dwDataType is 2, which per the macro definitions means video stream data, and it is PS-encapsulated.

Preview callback data types
/******************** preview callback data types *********************/
#define NET_DVR_SYSHEAD            1    // system header data
#define NET_DVR_STREAMDATA        2    // video stream data (composite stream, or the video part of separate audio/video streams)
#define NET_DVR_AUDIOSTREAMDATA    3    // audio stream data
#define NET_DVR_STD_VIDEODATA    4    // standard video stream data
#define NET_DVR_STD_AUDIODATA    5    // standard audio stream data
#define NET_DVR_SDP             6   // SDP info (valid for RTSP transport)
#define NET_DVR_CHANGE_FORWARD  10  // stream switched to forward playback
#define NET_DVR_CHANGE_REVERSE  11  // stream switched to reverse playback
#define NET_DVR_PLAYBACK_ALLFILEEND      12  // end-of-playback-file marker
#define NET_DVR_VOD_DRAW_FRAME      13  // playback frame-extraction stream
#define NET_DVR_VOD_DRAW_DATA       14  // smooth-seek stream
#define NET_DVR_HLS_INDEX_DATA      15  // HLS index data
#define NET_DVR_PLAYBACK_NEW_POS    16  // playback repositioned (after NET_DVR_PLAYSETTIME / NET_DVR_PLAYSETTIME_V50 return success, the operation only counts as complete once this callback type arrives)
#define NET_DVR_METADATA_DATA       107  // metadata
#define NET_DVR_PRIVATE_DATA    112 // private data, including smart-analytics info
The data in the memory `pBuffer` points at is shown below:

Its layout follows the PS stream encapsulation format; the general structure is shown in the figure below:

A PS stream marks each of its structures with a 00 00 01 XX start code. The PS packets I receive begin with 00 00 01 ba, the PSH (Program Stream pack Header). There is no System Header; it jumps straight to the PSM packet, 00 00 01 bc, followed by a 00 00 01 e0 PES packet. That PES packet is what we need: e0 marks a video packet, so only the 00 00 01 e0 PES packets matter. The data inside a PES packet is still encapsulated; to turn it into images we need the raw H.264/H.265 elementary stream. I have the camera configured for H.265 video encoding.

Hikvision's PS transport appears to follow the MPEG-2 PS standard. In my tests, the first packet contains PSH, PSM and a PES packet whose payload is the H.265 VPS (Video Parameter Set).

After that, each NALU is sent as the payload of its own PES packet. The next PES packet starts with 00 00 01 e0; skipping the PES header, its payload begins with 00 00 00 01 42, which per the H.265 NAL structure is the SPS (Sequence Parameter Set).
All subsequent PES packets have the same structure and differ only in payload. The packet after the SPS carries the PPS (Picture Parameter Set), and the one after that the SEI (Supplemental Enhancement Information).
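These payload bytes map to NAL types mechanically: in HEVC, nal_unit_type occupies bits 1..6 of the first byte after the start code. A one-line helper (my own, not from the SDK sample) for checking the bytes seen here:

```cpp
#include <cstdint>

// HEVC NAL unit header: forbidden_zero_bit(1) | nal_unit_type(6) | layer id high bit(1).
// So: 0x40 -> 32 VPS, 0x42 -> 33 SPS, 0x44 -> 34 PPS, 0x4E -> 39 prefix SEI,
//     0x26 -> 19 IDR_W_RADL (I-frame), 0x02 -> 1 TRAIL_R (P-frame slice).
inline int HevcNalType(uint8_t firstByteAfterStartCode) {
    return (firstByteAfterStartCode >> 1) & 0x3F;
}
```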
Then comes the I-frame, whose PES payload begins with 00 00 00 01 26. That packet is 4108 bytes; stripping the 12-byte PES header leaves exactly 4096 bytes of I-frame data.
Since a single PES packet cannot hold an entire I-frame, an I-frame may be split across several PES packets. The payload of the I-frame's first PES packet begins with 00 00 00 01 26; if a subsequent PES packet's payload carries no start code (00 00 01 or 00 00 00 01), its data continues the previous packet, as in the example below.

Once the last chunk of the I-frame has been sent, there may be a packet starting with 00 00 01 bd. That is a Hikvision private packet and is of no use for our video processing.

Also, since the camera's B/P frame interval is configured for plain P-frames (no B-frames), we also receive PS packets carrying P-frames. Such a PS packet has only a PSH (Program Stream pack Header) followed immediately by PES packets, whose payload starts with 00 00 00 01 02, identifying a P-frame.
That is:

  • First packet: PSH + PES (first part of the P-frame data)
  • Second packet: PES (remainder of the P-frame data)
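The packet walk described above can be condensed into a small start-code scanner. This is a sketch with my own names (CollectVideoPES is not part of the SDK or the code below); it assumes the payload offset of 9 + PES_header_data_length noted earlier and trusts each PES packet's length field:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Collect the payloads of all video PES packets (stream id 0xE0) in buf.
// A PES header is: 00 00 01 E0 | PES_packet_length(2) | flags(2) |
// PES_header_data_length(1) | optional header bytes...; the payload starts
// at offset 9 + PES_header_data_length from the start code.
std::vector<uint8_t> CollectVideoPES(const uint8_t* buf, size_t size) {
    std::vector<uint8_t> out;
    for (size_t i = 0; i + 9 <= size; ) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01 && buf[i + 3] == 0xE0) {
            size_t pktLen = (buf[i + 4] << 8) | buf[i + 5]; // bytes after the length field
            size_t hdrLen = buf[i + 8];                     // optional PES header bytes
            size_t payload = i + 9 + hdrLen;
            size_t end = i + 6 + pktLen;                    // end of this PES packet
            if (end > size) end = size;                     // clamp to the buffer
            if (payload < end)
                out.insert(out.end(), buf + payload, buf + end);
            i = end;
        } else {
            ++i;
        }
    }
    return out;
}
```

The FindPESandParseVideo helper in the code below does the same job in simplified form (it takes everything up to the end of the buffer instead of honoring PES_packet_length).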

Using these start codes I separate out the raw H.265 stream (i.e., all PES payloads of one complete PS packet). Parsing that raw H.265 data with av_parser_parse2 succeeds, but packet->size comes back 0. Instead, assigning the raw H.265 data to the packet directly, sending it with avcodec_send_packet and then fetching individual frames with avcodec_receive_frame works.
For P-frames to decode correctly, the decoder must already have seen the I-frame and the SPS, PPS and related NALUs; only then can avcodec_receive_frame reconstruct a complete frame from P-frame data.
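One practical detail when matching these start codes in the callback code later: reading four bytes into a little-endian u_int reverses them, so the bytes 00 00 01 BA compare equal to 0xBA010000 (3120627712), 00 00 01 E0 to 0xE0010000 (3758161920), and 00 00 01 BD to 0xBD010000 (3170959360). A quick sanity check, assuming a little-endian machine (ReadTag is my own helper):

```cpp
#include <cstdint>
#include <cstring>

// Read the first four bytes of a buffer as a native-endian 32-bit tag,
// the same comparison the stream callback performs on start codes.
inline uint32_t ReadTag(const uint8_t* p) {
    uint32_t v;
    std::memcpy(&v, p, sizeof v); // memcpy avoids the alignment issues of *(u_int*)p
    return v;
}
```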

Update 2025-01-10

The original code could only handle I-frames and skipped P-frame data, so I modified it to decode P-frames as well.

The data passed to avcodec_send_packet should ideally be one complete NALU (a NALU may be split across several transport packets). I create a global AVCodecContext* and, each time a complete NALU has been assembled, hand it to avcodec_send_packet for decoding (see the figure below).
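The accumulate-until-the-next-start-code logic described above can be sketched like this (a minimal helper of my own, separate from the videoData handling in the code below): append each PES payload to a pending buffer; when a payload begins with a start code, the pending buffer holds a complete NALU and can be flushed to the decoder.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

static bool HasStartCode(const std::vector<uint8_t>& d) {
    return (d.size() >= 3 && d[0] == 0 && d[1] == 0 && d[2] == 1) ||
           (d.size() >= 4 && d[0] == 0 && d[1] == 0 && d[2] == 0 && d[3] == 1);
}

// Feed PES payload chunks in arrival order; onNalu fires once per complete NALU.
class NaluAssembler {
public:
    explicit NaluAssembler(std::function<void(const std::vector<uint8_t>&)> onNalu)
        : onNalu_(std::move(onNalu)) {}

    void Push(const std::vector<uint8_t>& chunk) {
        if (HasStartCode(chunk) && !pending_.empty()) {
            onNalu_(pending_);   // a new NALU begins, so the previous one is complete
            pending_.clear();
        }
        pending_.insert(pending_.end(), chunk.begin(), chunk.end());
    }

    void Flush() {               // call at end of stream
        if (!pending_.empty()) { onNalu_(pending_); pending_.clear(); }
    }

private:
    std::vector<uint8_t> pending_;
    std::function<void(const std::vector<uint8_t>&)> onNalu_;
};
```

In the real program the onNalu callback would be the place to call avcodec_send_packet / avcodec_receive_frame on the global decoder context.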


Image source: 【FFmpeg实战】H264, H265硬件编解码基础及码流分析

Main.cpp
#include <stdio.h> 
#include <iostream> 
#include <windows.h> 
#include "HCNetSDK.h" 
#include <time.h> 
#include <cstring>
#include <conio.h>
#include <vector>
#include <fstream>
#include <format> // for std::format (C++20)
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <minwindef.h>
}
#include "cpu_h265_decoder.h"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/opencv.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace std;
constexpr float CONTROLSPEED = 1;
enum PTZ{ Pan, Tilt, Zoom };

void H264_Decode(vector<BYTE> videoData)
{
    AVPacket* packet = av_packet_alloc();
    packet->data = NULL;
    packet->size = 0;

    AVFrame* frame;
    frame = av_frame_alloc();
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width = 640;
    frame->height = 480;

    const AVCodec* codec;
    AVCodecContext* codecContext;

    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    codecContext = avcodec_alloc_context3(codec);

    // open the decoder (default parameters)
    avcodec_open2(codecContext, codec, NULL);

    AVCodecParserContext* parserContext;
    parserContext = av_parser_init(AV_CODEC_ID_H264);

    uint8_t* inputData = videoData.data();
    int inputSize = videoData.size();

    while (inputSize > 0) {
        int parsedSize = av_parser_parse2(parserContext, codecContext, &packet->data, &packet->size,
            inputData, inputSize, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);

        inputSize -= parsedSize;
        inputData += parsedSize;

        if (packet->size > 0) {
            int ret = avcodec_send_packet(codecContext, packet);
            if (ret < 0) {
                fprintf(stderr, "Error sending packet for decoding\n");
                av_packet_unref(packet);
                continue;
            }
            // drain decoded frames
            while (ret >= 0) {
                ret = avcodec_receive_frame(codecContext, frame);
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                    break; // need more data, or end of stream
                }
                else if (ret < 0) {
                    fprintf(stderr, "Error while receiving a frame\n");
                    break;
                }

                // release the frame's data
                av_frame_unref(frame);
            }
        }
    }

    // free everything this helper allocated
    av_parser_close(parserContext);
    avcodec_free_context(&codecContext);
    av_frame_free(&frame);
    av_packet_free(&packet);
}


cv::Mat bgr_img;
vector<BYTE> videoData;

// convert a decoded frame to BGR and save it as an image file
void Frame2PNG(AVFrame* frame, int width, int height, int frame_number) {
    // find the PNG encoder
    const AVCodec* png_codec = avcodec_find_encoder(AV_CODEC_ID_PNG);
    if (!png_codec) {
        fprintf(stderr, "PNG encoder not found\n");
        return;
    }

    AVCodecContext* codec_ctx = avcodec_alloc_context3(png_codec);
    if (!codec_ctx) {
        fprintf(stderr, "Failed to allocate codec context\n");
        return;
    }

    codec_ctx->pix_fmt = AV_PIX_FMT_RGB24; // target pixel format: RGB24
    codec_ctx->width = width;
    codec_ctx->height = height;
    codec_ctx->time_base.num = 1;
    codec_ctx->time_base.den = 25;

    if (avcodec_open2(codec_ctx, png_codec, NULL) < 0) {
        fprintf(stderr, "Failed to open codec\n");
        avcodec_free_context(&codec_ctx);
        return;
    }

    // allocate the output packet
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        fprintf(stderr, "Failed to allocate packet\n");
        avcodec_free_context(&codec_ctx);
        return;
    }

    // allocate the RGB frame
    AVFrame* rgb_frame = av_frame_alloc();
    rgb_frame->format = codec_ctx->pix_fmt;
    rgb_frame->width = codec_ctx->width;
    rgb_frame->height = codec_ctx->height;
    av_frame_get_buffer(rgb_frame, 0);

    // create a scaling / pixel-format conversion context
    struct SwsContext* sws_ctx = sws_getContext(
        frame->width, frame->height, static_cast<AVPixelFormat>(frame->format),
        codec_ctx->width, codec_ctx->height, static_cast<AVPixelFormat>(codec_ctx->pix_fmt),
        SWS_BILINEAR, NULL, NULL, NULL);


    sws_scale(sws_ctx, (const uint8_t* const*)frame->data, frame->linesize,
        0, frame->height, rgb_frame->data, rgb_frame->linesize);

    // bytes needed for one frame at the given pixel format and resolution
    auto rgb_out_data_size = av_image_get_buffer_size(static_cast<AVPixelFormat>(rgb_frame->format), rgb_frame->width, rgb_frame->height, 1);

    uint8_t* rgb_out_data_ = nullptr;
    rgb_out_data_ = static_cast<uint8_t*>(av_malloc(rgb_out_data_size));
    for (int y = 0; y < rgb_frame->height; y++) {
        memcpy(rgb_out_data_ + y * rgb_frame->width * 3, rgb_frame->data[0] + y * rgb_frame->linesize[0], rgb_frame->width * 3);
    }

    cv::Mat img(rgb_frame->height, rgb_frame->width, CV_8UC3, rgb_out_data_);
    
    cv::cvtColor(img, bgr_img, cv::COLOR_RGB2BGR);
    cv::imwrite(std::format("C:/Users/PC/Desktop/HEVC_DecodePicture/Frame{0}_{1}.jpg", frame_number, videoData.size()), bgr_img);
    //cv::imshow("Live Feed", bgr_img);
    //cv::waitKey(1);

    // clean up (including the av_malloc'd RGB buffer)
    av_free(rgb_out_data_);
    av_packet_free(&packet);
    av_frame_free(&rgb_frame);
    avcodec_free_context(&codec_ctx);
    sws_freeContext(sws_ctx);
}

AVFrame* frame;
const AVCodec* codec;
AVCodecContext* codecContext;
int frame_number;

void HEVC_Decode(vector<BYTE> videoData)
{
    // packet holding one NALU's worth of data
    AVPacket* packet = av_packet_alloc();

    //AVCodecParserContext* parserContext;
    //parserContext = av_parser_init(AV_CODEC_ID_HEVC);

    uint8_t* inputData = videoData.data();
    int inputSize = videoData.size();

    if (inputSize > 0) {
        
        packet->data = inputData;
        packet->size = inputSize;
        int ret = avcodec_send_packet(codecContext, packet);
        if (ret < 0) {
            fprintf(stderr, "Error sending packet for decoding\n");
            av_packet_unref(packet);
            return;
        }

        // drain decoded frames
        while (ret >= 0) {
            ret = avcodec_receive_frame(codecContext, frame);
            if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN)) {
                break; // need more data, or end of stream
            }
            else if (ret < 0) {
                fprintf(stderr, "Error while receiving a frame\n");
                break;
            }
            Frame2PNG(frame, frame->width, frame->height, frame_number);
            frame_number++;
            // release the frame's data
            av_frame_unref(frame);
        }
    }
}

struct DEC_PTZPOS
{
    float wPanPos;
    float wTiltPos;
    float wZoomPos;
};

void CALLBACK g_ExceptionCallBack(DWORD dwType, LONG lUserID, LONG lHandle, void* pUser)
{
    char tempbuf[256] = { 0 };
    switch (dwType)
    {
    case EXCEPTION_RECONNECT:    // reconnecting during preview 
        printf("reconnect.\n");
        break;
    default:
        break;
    }
}

LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    switch (uMsg) {
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    default:
        return DefWindowProc(hwnd, uMsg, wParam, lParam);
    }
}

static float rate[] = { 0.1f, 1.0f, 10.0f, 100.0f };
static WORD mask = 0x000f;

// convert a displayed (hex/BCD) value to its actual decimal value
float HEC2DEC(WORD hec) {
    float dec = 0;
    for (int i = 0; i < 4; ++i) {
        WORD temp = hec & mask;
        dec += temp * rate[i];
        hec >>= 4;
    }
    return dec;
}

WORD DEC2HEC(float dec) {
    int int_dec = round(dec * 10.0);
    WORD hec = 0x0000;
    int getHighestBit = 1000;
    for (int i = 0; i < 4; ++i) {
        int highestBit = int_dec / getHighestBit;
        int_dec = int_dec % getHighestBit;
        getHighestBit /= 10;
        hec += highestBit;
        if(i != 3) hec <<= 4;
    }
    return hec;
}

float clamp(float value, float min, float max) {
    if (value < min) {
        return min;
    }
    else if (value > max) {
        return max;
    }
    else {
        return value;
    }
}

float CheakScope(enum PTZ ptz, float dec, NET_DVR_PTZSCOPE m_ptzscope) {
    float res = 0;
    switch (ptz) {
    case Pan:
        res = clamp(dec, HEC2DEC(m_ptzscope.wPanPosMin), HEC2DEC(m_ptzscope.wPanPosMax));
        break;
    case Tilt:
        res = clamp(dec, HEC2DEC(m_ptzscope.wTiltPosMin), HEC2DEC(m_ptzscope.wTiltPosMax));
        break;
    case Zoom:
        res = clamp(dec, HEC2DEC(m_ptzscope.wZoomPosMin), HEC2DEC(m_ptzscope.wZoomPosMax));
        break;
    default:
        break;
    }
    return res;
}

// scan for the 00 00 01 E0 video PES start code and return everything
// after the PES header (the payload runs to the end of the buffer)
vector<BYTE> FindPESandParseVideo(BYTE* pBuffer, DWORD dwBufSize) {
    vector<BYTE> res;
    while (dwBufSize >= 4) {
        u_int value = *(u_int*)pBuffer;
        if (value == 3758161920) { // 0xE0010000: bytes 00 00 01 E0 read little-endian
            char headerlength = (char)*(pBuffer + 8); // PES_header_data_length
            int offset = 8 + (int)headerlength + 1;   // payload starts at 9 + header length
            res.insert(res.end(), pBuffer + offset, pBuffer + dwBufSize);
            break;
        }
        pBuffer++;
        dwBufSize--;
    }
    return res;
}

//0x1b:H.264  0x24:H.265
void CALLBACK g_RealDataCallBack_V30(LONG lRealHandle, DWORD dwDataType, BYTE* pBuffer, DWORD dwBufSize, void* dwUser)
{
    vector<BYTE> temp;

    switch (dwDataType)
    {
    case NET_DVR_SYSHEAD:
        break;
    case NET_DVR_STREAMDATA:
        if (dwBufSize > 0) {
            u_int value = *(u_int*)pBuffer;
            switch (value) {
            case 3120627712: // 0xBA010000: PS pack header, bytes 00 00 01 BA
                // if this PS packet carries a P-frame, the earlier I-frame/P-frame data must not be discarded
                temp = FindPESandParseVideo(pBuffer, dwBufSize);

                if (temp.size() < 5) {
                    std::cout << std::format("temp size is less than 5, size: {0}", temp.size()) << std::endl;
                    break;
                }

                if (temp[4] == 64 || temp[4] == 66 || temp[4] == 68 || temp[4] == 78) { // 0x40 VPS, 0x42 SPS, 0x44 PPS, 0x4E SEI
                    if (!videoData.empty()) {
                        HEVC_Decode(videoData);
                        videoData.clear();
                    }
                    HEVC_Decode(temp);
                }
                // frame data (I-frame or P-frame slices)
                else {
                    if (!videoData.empty()) {
                        HEVC_Decode(videoData);
                        videoData.clear();
                    }
                    videoData.insert(videoData.end(), temp.begin(), temp.end());
                }
                //videoData.clear();
                break;
            case 3758161920: // 0xE0010000: video PES packet, bytes 00 00 01 E0
                temp = FindPESandParseVideo(pBuffer, dwBufSize);

                if (temp.size() < 5) {
                    std::cout << std::format("temp size is less than 5, size: {0}", temp.size()) << std::endl;
                    break;
                }

                if (temp[4] == 64 || temp[4] == 66 || temp[4] == 68 || temp[4] == 78) { // 0x40 VPS, 0x42 SPS, 0x44 PPS, 0x4E SEI
                    HEVC_Decode(temp);
                }
                else{
                    videoData.insert(videoData.end(), temp.begin(), temp.end());
                }

                //videoData.insert(videoData.end(), temp.begin(), temp.end());
                //HEVC_Decode(videoData);
                //videoData.clear();
                break;
            case 3170959360: // 0xBD010000: Hikvision private packet (00 00 01 BD), discard
                break;
            default:
                break;
            }
        }
        break;
    default:
        break;
    }
}

int main() {
    //--------------------------------------- 
    // initialize the HEVC decoder
    frame = av_frame_alloc();
    codec = avcodec_find_decoder(AV_CODEC_ID_HEVC);
    codecContext = avcodec_alloc_context3(codec);
    codecContext->pix_fmt = AV_PIX_FMT_YUV420P; // set before opening; the decoder overrides it from the stream anyway
    // open the decoder context
    avcodec_open2(codecContext, codec, NULL);
    frame_number = 0;
    //--------------------------------------- 

    //--------------------------------------- 
    // initialize the SDK
    NET_DVR_Init();
    // set the connect timeout and reconnect interval
    NET_DVR_SetConnectTime(2000, 1);
    NET_DVR_SetReconnect(10000, true);

    //--------------------------------------- 
    // set the exception callback
    NET_DVR_SetExceptionCallBack_V30(0, NULL, g_ExceptionCallBack, NULL);

    //--------------------------------------- 
    // log in to the device
    LONG lUserID;

    NET_DVR_USER_LOGIN_INFO struLoginInfo = { 0 };
    NET_DVR_DEVICEINFO_V40 struDeviceInfo = { 0 };

    strcpy_s(struLoginInfo.sDeviceAddress, sizeof(struLoginInfo.sDeviceAddress), "192.168.1.68"); // device IP: .66 is the modified PTZ, .68 the regular PTZ
    strcpy_s(struLoginInfo.sUserName, sizeof(struLoginInfo.sUserName), "admin");  // device login user name 
    strcpy_s(struLoginInfo.sPassword, sizeof(struLoginInfo.sPassword), "Aa123456");  // device login password
    struLoginInfo.wPort = 8000;
    struLoginInfo.bUseAsynLogin = 0; // synchronous login: the call returning success means login succeeded 

    lUserID = NET_DVR_Login_V40(&struLoginInfo, &struDeviceInfo);
    if (lUserID < 0)
    {
        printf("NET_DVR_Login_V40 failed, error code: %d\n", NET_DVR_GetLastError());
        NET_DVR_Cleanup();
        return 0;
    }

    //--------------------------------------- 
    // start the live preview
    LONG lRealPlayHandle;

    // register the window class
    HINSTANCE hInstance = GetModuleHandle(NULL);
    WNDCLASS wc = {};
    wc.lpfnWndProc = WindowProc;
    wc.hInstance = hInstance;
    wc.lpszClassName = "VideoWindowClass";

    if (!RegisterClass(&wc)) {
        MessageBox(NULL, "Failed to register window class", "Error", MB_ICONERROR);
    }

    HWND hWnd = CreateWindowEx(
        0,
        "VideoWindowClass",   // 使用自定义类
        "Video Stream",
        WS_OVERLAPPEDWINDOW | WS_VISIBLE,
        100, 400, 640, 480,
        NULL,
        NULL,
        hInstance,
        NULL
    );
    if (hWnd == NULL) {
        printf("Failed to create video window.\n");
    }

    NET_DVR_PREVIEWINFO struPlayInfo = { 0 };
    struPlayInfo.hPlayWnd = NULL;      // set a valid handle for SDK-side decoding; NULL when only grabbing the stream 
    struPlayInfo.lChannel = 1;       // preview channel number 
    struPlayInfo.dwStreamType = 0;      // 0 - main stream, 1 - sub stream, 2 - stream 3, 3 - stream 4, and so on 
    struPlayInfo.dwLinkMode = 0;      // 0 - TCP, 1 - UDP, 2 - multicast, 3 - RTP, 4 - RTP/RTSP, 5 - RTSP/HTTP 
    struPlayInfo.bBlocked = 1;      // 0 - non-blocking stream fetch, 1 - blocking 

    lRealPlayHandle = NET_DVR_RealPlay_V40(lUserID, &struPlayInfo, g_RealDataCallBack_V30, NULL);
    if (lRealPlayHandle < 0)
    {
        printf("NET_DVR_RealPlay_V40 failed, error code: %d\n", NET_DVR_GetLastError());
        NET_DVR_Logout(lUserID);
        NET_DVR_Cleanup();
        return 0;
    }

    HWND hwndConsole = GetConsoleWindow();

    // PTZ position handling
    NET_DVR_PTZPOS m_ptzPos;
    DEC_PTZPOS m_dec_ptzPos;
    NET_DVR_PTZSCOPE m_ptzscope;
    DWORD tmp = 0; // lpBytesReturned, [out]: actual number of bytes NET_DVR_GetDVRConfig received; must not be NULL

    if (!NET_DVR_GetDVRConfig(0, NET_DVR_GET_PTZPOS, 0, &m_ptzPos, sizeof(NET_DVR_PTZPOS), &tmp)) {
        printf("NET_DVR_GetDVRConfig failed, error code: %d\n", NET_DVR_GetLastError());
        NET_DVR_Cleanup();
        return 0;
    }
    if (!NET_DVR_GetDVRConfig(0, NET_DVR_GET_PTZSCOPE, 0, &m_ptzscope, sizeof(NET_DVR_PTZSCOPE), &tmp)) {
        printf("NET_DVR_GetDVRConfig failed, error code: %d\n", NET_DVR_GetLastError());
        NET_DVR_Cleanup();
        return 0;
    }

    m_dec_ptzPos.wPanPos = HEC2DEC(m_ptzPos.wPanPos);
    m_dec_ptzPos.wTiltPos = HEC2DEC(m_ptzPos.wTiltPos);
    m_dec_ptzPos.wZoomPos = HEC2DEC(m_ptzPos.wZoomPos);
    cout << "m_dec_ptzPos.wPanPos:" << m_dec_ptzPos.wPanPos << "\n"
        << "m_dec_ptzPos.wTiltPos:" << m_dec_ptzPos.wTiltPos << "\n"
        << "m_dec_ptzPos.wZoomPos:" << m_dec_ptzPos.wZoomPos << "\n" << endl;

    cout << "m_ptzscope.wPanPos:" << HEC2DEC(m_ptzscope.wPanPosMin) << " " << HEC2DEC(m_ptzscope.wPanPosMax) << "\n"
        << "m_ptzscope.wTiltPos:" << HEC2DEC(m_ptzscope.wTiltPosMin) << " " << HEC2DEC(m_ptzscope.wTiltPosMax) << "\n"
        << "m_ptzscope.wZoomPos:" << HEC2DEC(m_ptzscope.wZoomPosMin) << " " << HEC2DEC(m_ptzscope.wZoomPosMax) << "\n" <<  endl;

    MSG msg = {};
    char ch; // last key pressed
    time_t lastFramTime = time(nullptr);
    while (true) {
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);

            // quit message received?
            if (msg.message == WM_QUIT)
                break;
        }

        float fixedTime = time(nullptr) - lastFramTime;
        lastFramTime = time(nullptr);

        if (_kbhit()) {
            ch = _getch(); // read the key 
            if (ch == 'q') {
                std::cout << "Exit." << std::endl;
                break;
            }
            if (ch == -32) {
                // special keys (arrows, function keys) first return 0 or 224, then a second code follows
                ch = _getch();  // read the arrow key's actual code
                cout << (int)ch << endl;
                switch (ch)
                {
                case 72://UP
                    m_dec_ptzPos.wPanPos = CheakScope(Pan, m_dec_ptzPos.wPanPos + max(CONTROLSPEED * fixedTime, 10.0f), m_ptzscope);
                    break;
                case 80: //DOWN
                    m_dec_ptzPos.wPanPos = CheakScope(Pan, m_dec_ptzPos.wPanPos - max(CONTROLSPEED * fixedTime, 10.0f), m_ptzscope);
                    break;
                case 75: //LEFT
                    m_dec_ptzPos.wTiltPos = CheakScope(Tilt, m_dec_ptzPos.wTiltPos + max(CONTROLSPEED * fixedTime, 10.0f), m_ptzscope);
                    break;
                case 77: //RIGHT
                    m_dec_ptzPos.wTiltPos = CheakScope(Tilt, m_dec_ptzPos.wTiltPos - max(CONTROLSPEED * fixedTime, 10.0f), m_ptzscope);
                    break;
                default:
                    break;
                }
                m_ptzPos.wPanPos = DEC2HEC(m_dec_ptzPos.wPanPos);
                m_ptzPos.wTiltPos = DEC2HEC(m_dec_ptzPos.wTiltPos);
                m_ptzPos.wZoomPos = DEC2HEC(m_dec_ptzPos.wZoomPos);
                cout << "m_dec_ptzPos.wPanPos:" << m_dec_ptzPos.wPanPos << "\n"
                    << "m_dec_ptzPos.wTiltPos:" << m_dec_ptzPos.wTiltPos << "\n"
                    << "m_dec_ptzPos.wZoomPos:" << m_dec_ptzPos.wZoomPos << "\n" << endl;
                if (!NET_DVR_SetDVRConfig(0, NET_DVR_SET_PTZPOS, 0, &m_ptzPos, sizeof(NET_DVR_PTZPOS))) {
                    printf("NET_DVR_SetDVRConfig failed, error code: %d\n", NET_DVR_GetLastError());
                    NET_DVR_Cleanup();
                    return 0;
                }
            }
        }
    }

    //---------------------------------------
    // stop the preview 
    NET_DVR_StopRealPlay(lRealPlayHandle);
    // log out 
    NET_DVR_Logout(lUserID);
    // release SDK resources 
    NET_DVR_Cleanup();
    return 0;
}
cpu_h265_decoder.cpp
// cpu_h265_decoder.cpp
#include "cpu_h265_decoder.h"

CPUDecodeH265::~CPUDecodeH265() {
    avcodec_free_context(&codec_context_ptr_); // frees the context and nulls the pointer
    av_frame_free(&frame_ptr_);
    av_frame_free(&rgb_frame_ptr_);
    av_packet_free(&pkt_ptr_);
    sws_freeContext(sws_ctx);
    if (rgb_out_data_) {
        av_free(rgb_out_data_);
        rgb_out_data_ = nullptr;
    }
}

bool CPUDecodeH265::init_decoder() {
    // find the HEVC decoder
    codec_ptr_ = avcodec_find_decoder(AV_CODEC_ID_HEVC);
    if (!codec_ptr_) {
        std::cout << "Codec not found" << std::endl;
        return false;
    }

    // allocate a decoder context
    codec_context_ptr_ = avcodec_alloc_context3(codec_ptr_);
    if (!codec_context_ptr_) {
        std::cout << "Could not allocate video codec context" << std::endl;
        return false;
    }

    //parser_ptr_ = av_parser_init(codec_ptr_->id);
    //if (!parser_ptr_)
    //{
    //    std::cout << "h265 parser not found. " << std::endl;
    //    return false;
    //}

    // packet used to wrap the input data
    pkt_ptr_ = av_packet_alloc();
    if (!pkt_ptr_) {
        std::cout << "could not allocate a AVPacket. " << std::endl;
        return false;
    }


    // open the decoder
    auto ret = avcodec_open2(codec_context_ptr_, codec_ptr_, NULL);
    if (ret < 0) {
        std::cout << "could not open codec. " << std::endl;
        return false;
    }

    // frame that receives the decoded output
    frame_ptr_ = av_frame_alloc();
    if (!frame_ptr_) {
        std::cout << "could not allocate video frame. " << std::endl;
        return false;
    }

    rgb_frame_ptr_ = av_frame_alloc();
    if (!rgb_frame_ptr_) {
        std::cout << "could not allocate video rgb frame. " << std::endl;
        return false;
    }
    rgb_frame_ptr_->format = AV_PIX_FMT_BGR24; // the sws_scale conversion below produces BGR24
    return true;
}


bool CPUDecodeH265::cpu_h265_decode_process_bgr(const std::vector<uint8_t>& in_data, std::vector<uint8_t>& out_data,
    uint32_t& out_width, uint32_t& out_height, int64_t* timestamp) {
    auto src_data = in_data;
    pkt_ptr_->data = src_data.data();
    pkt_ptr_->size = src_data.size();
    int count = 0;
    if (pkt_ptr_->size) {
        std::cout << "pkt_ptr_->size = " << pkt_ptr_->size << std::endl;
        int ret = avcodec_send_packet(codec_context_ptr_, pkt_ptr_);
        std::cout << "count:" << count << ", avcodec_send_packet  ret=" << ret << std::endl;
        if (ret < 0) {
            std::cout << "Error sending a packet for decoding, the res of avcodec_send_packet=" << ret << std::endl;
            return false;
        }
        auto frame_ret = avcodec_receive_frame(codec_context_ptr_, frame_ptr_);
        std::cout << "count:" << count << ", avcodec_receive_frame  frame_ret=" << frame_ret << std::endl;
        if (frame_ret == AVERROR(EAGAIN) || frame_ret == AVERROR_EOF) {
            return false; // the decoder needs more input, or has been fully drained
        }
        else if (frame_ret < 0) {
            std::cout << "Error during decoding. " << std::endl;
            return false;
        }

        // lazily create the scaling / pixel-format conversion context and allocate the rgb frame
        if (sws_ctx == nullptr) {
            sws_ctx = sws_getContext(frame_ptr_->width, frame_ptr_->height, static_cast<AVPixelFormat>(frame_ptr_->format), // source width, height, pixel format
                frame_ptr_->width, frame_ptr_->height, AV_PIX_FMT_BGR24, // destination width, height, pixel format
                SWS_BILINEAR, nullptr, nullptr, nullptr); // remaining options left at defaults
            rgb_frame_ptr_->height = frame_ptr_->height;
            rgb_frame_ptr_->width = frame_ptr_->width;
            av_frame_get_buffer(rgb_frame_ptr_, 0);
        }

        sws_scale(sws_ctx, frame_ptr_->data, frame_ptr_->linesize, 0, frame_ptr_->height, rgb_frame_ptr_->data, rgb_frame_ptr_->linesize);
        // bytes needed for one frame at this pixel format and resolution
        auto rgb_out_data_size = av_image_get_buffer_size(static_cast<AVPixelFormat>(rgb_frame_ptr_->format), rgb_frame_ptr_->width, rgb_frame_ptr_->height, 1);
        if (rgb_out_data_ == nullptr) {
            rgb_out_data_ = static_cast<uint8_t*>(av_malloc(rgb_out_data_size));
        }
        for (int y = 0; y < rgb_frame_ptr_->height; y++) {
            memcpy(rgb_out_data_ + y * rgb_frame_ptr_->width * 3, rgb_frame_ptr_->data[0] + y * rgb_frame_ptr_->linesize[0], rgb_frame_ptr_->width * 3);
        }
        out_height = rgb_frame_ptr_->height;
        out_width = rgb_frame_ptr_->width;
        out_data.assign(rgb_out_data_, rgb_out_data_ + rgb_out_data_size);
        memset(rgb_out_data_, 0, rgb_out_data_size);
        if (!out_data.size()) {
            return false;
        }
    }
    av_packet_unref(pkt_ptr_);
    return true;
}

References:

  • 音视频基础:H265/HEVC&码流结构
  • GB28181的PS流完全分析(封装 / 分包发送 / 接收组包 / 解析)
  • 海康PS转H264的编码思想(带图码流解释)
  • Gb28181之Ps流解析H264
  • 海康摄像头PS流格式解析(RTP/PS/H264)
  • 【FFmpeg实战】H264, H265硬件编解码基础及码流分析
  • Android音视频【五】H265/HEVC&码流结构
  • H.265/HEVC 帧内编码详解:CU/TU层次结构、预测、变换、量化、编码、编码端整体流程

posted @ 溪溯P