A Real-Time Video Surveillance System on the FL2440: V4L2 Capture + H.264 Encoding + LIVE555 Streaming
(This article is the author's own work and is provided for reference; if you repost it, please credit the source!)
(All the code is posted here. It is not the final version :) — some debugging traces are still in place — but it is guaranteed to work.)
As of now, the project is essentially complete. I am writing this up partly to capture a bit of the excitement of the moment, and partly to leave footprints: something for beginners even greener than me to learn from, and for my future self and the experts to smile at...
Enough daydreaming. Since the project covers quite a lot of ground, I plan to polish this write-up over several passes, focusing first on the application code and gradually filling in the environment setup later.
Step 1: Preparation.
1. The hardware platform:
1) Development board: FL2440;
2) USB camera: Logitech C270;
3) Laptop: Acer 4752G;
4) Operating system: CentOS;
2. Development tools:
1) Cross compiler: the 4.3.2 toolchain from the FL2440 companion CD (the CD contents can be downloaded from the Forlinx board vendor's website);
2) Code reading: Eclipse CDT (convenient for navigating the more complex sources);
3. Source code to prepare:
1) Kernel source, linux-2.6.35 (www.kernel.org);
2) The 2.6.28 root-filesystem source from the FL2440 CD, plus the matching image tool mkyaffs2image;
3) The latest source of the H.264 encoder (x264; from its official site);
4) The latest live555 streaming-media source (from its official site);
Step 2: Porting the Linux operating system. See http://blog.csdn.net/yming0221/article/details/6604616 for a fuller walkthrough.
0. Install the cross compiler, following the tutorial on the FL2440 CD; a sketch of what this typically amounts to is shown below.
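For reference, installing the toolchain usually boils down to unpacking it and putting it on the PATH. A rough sketch — the archive name is an assumption (use whatever your CD ships), and the /usr/local/arm/4.3.2 path matches the one used later in this article:

# Unpack the toolchain so that it ends up under /usr/local/arm/4.3.2
# (the Forlinx archive is typically packed with that prefix; adjust -C if yours differs):
tar xzf arm-linux-gcc-4.3.2.tgz -C /
# Put the cross tools on the PATH (append to ~/.bashrc to make it stick):
export PATH=$PATH:/usr/local/arm/4.3.2/bin
# Sanity check:
arm-none-linux-gnueabi-gcc -v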
1. In a terminal, cd into the kernel source tree and run make menuconfig (a maximized terminal is easier on the eyes);
2. The screen fills with configuration options. To configure them we need to know two things:
1) Which operating-system features we will actually use;
For this project they are roughly the following:
a. The V4L2 interface, used to capture data from the camera;
b. The UVC camera driver;
c. The DM9000 Ethernet controller driver;
d. USB support for the camera (host controller and so on);
2) Find the corresponding configuration options and decide whether to enable them.
Bear in mind that these options do not cover every driver in existence — only the relatively common ones — and the set of options changes from one kernel version to the next. So enable whatever you can find, and be prepared to patch the kernel source yourself for whatever you cannot.
In the 2.6.35 menuconfig, a / b / d can all be found; for c, see the articles online (http://blog.csdn.net/yming0221/article/details/6604616). A quick way to double-check the result is sketched after the selection notes below.
How to select: the up/down arrow keys move the cursor, and left/right switch between the menu actions (select/exit/help). With select active, pressing the space bar cycles the marker in the < > to the left of each option:
< > — not selected; <*> — built into the kernel; <M> — built as a module. <*> is all we need here.
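As a quick sanity check after saving the configuration, you can grep the generated .config for the features above. The option names below are what I believe they are called in 2.6.35 — treat them as assumptions and verify against your own tree:

# V4L2 core, the UVC class driver, the DM9000 NIC, and the S3C2440's OHCI USB host:
grep -E "CONFIG_VIDEO_V4L2|CONFIG_USB_VIDEO_CLASS|CONFIG_DM9000|CONFIG_USB_OHCI_HCD" .config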
3. Edit the Makefile, then build the kernel.
1) There are plenty of tutorials online for this (essentially setting ARCH to arm and CROSS_COMPILE to the 4.3.2 toolchain prefix), so I will not repeat them here;
2) Run make zImage to build the kernel.
4. Once the kernel builds, download it to the board and boot it; if the log prints something like "Freeing init memory: 124K", you should be in good shape.
5. Building the root filesystem
We start from the 2.6.28 filesystem source package on the FL2440 CD, which saves a lot of trouble, although a few changes are still needed.
1) Copy some libraries from the cross-compiler directory into /lib of our filesystem, overwriting files of the same name (see the command sketch after this list):
all the files under /usr/local/arm/4.3.2/arm-none-linux-gnueabi/libc/armv4t/lib, and
/usr/local/arm/4.3.2/arm-none-linux-gnueabi/libc/armv4t/usr/lib/libstdc++.so.6
Note: my cross compiler is installed under /usr/local/arm.
2) Cross-compile the x264 encoder as a static build and copy all the resulting .a libraries into /lib of our filesystem;
3) Cross-compile the live555 sources and copy all the resulting .a libraries into /lib of our filesystem;
Note: steps 1), 2), 3) guarantee that any program we develop later, statically or dynamically linked, will run. The price is a larger filesystem image; if your board's FLASH has bad blocks, burning may fail, in which case trim the filesystem and try again — for instance, postpone 2) and 3) until after a successful burn and copy the libraries over from the board's console instead.
4) Open /etc/init.d/rcS in our filesystem. You will see two IP addresses; the eth0 one is the board's IP. Comment out the last line — it initializes a mouse and is of no use here.
5) Build the image with the FL2440 tool mkyaffs2image. In a terminal run: mkyaffs2image qte_yaffs mfs.yaffs, where qte_yaffs is the path to our filesystem tree. The resulting mfs.yaffs is about 56.4 MB.
6) Burn mfs.yaffs to the board.
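For reference, steps 1)-3) might look roughly like the following, run from a directory holding qte_yaffs, x264 and live side by side. The configure options are from memory and differ between source snapshots, so treat this as a sketch, not a recipe:

# 1) Copy the toolchain's runtime libraries into the filesystem tree:
cp -a /usr/local/arm/4.3.2/arm-none-linux-gnueabi/libc/armv4t/lib/* qte_yaffs/lib/
cp /usr/local/arm/4.3.2/arm-none-linux-gnueabi/libc/armv4t/usr/lib/libstdc++.so.6 qte_yaffs/lib/

# 2) Cross-compile x264 as a static library (the ARM920T has no SIMD, so disable asm):
cd x264
./configure --host=arm-linux --cross-prefix=arm-none-linux-gnueabi- --disable-asm
make && cp libx264.a ../qte_yaffs/lib/ && cd ..

# 3) Cross-compile live555: first point CROSS_COMPILE= in config.armlinux at
#    arm-none-linux-gnueabi-, then:
cd live
./genMakefiles armlinux && make
find . -name "*.a" -exec cp {} ../qte_yaffs/lib/ \; && cd ..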
6. Testing whether the port succeeded
1) Reboot the board. If it reaches the shell, the link LED lights when a network cable is plugged in, and pinging the laptop's IP works, the Ethernet driver port succeeded;
2) Plug in the camera. On successful detection, messages like the following are printed:
usb 1-1.2: new full speed USB device using s3c2410-ohci and address 3
uvcvideo: Found UVC 1.00 device <unnamed> (046d:0825)
input: UVC Camera (046d:0825) as /class/input/input0
Keep the camera plugged in, go to the board's /dev directory, and run ls video*. With luck a video device node is already there; if not, create the device file by hand (see http://www.tldp.org/HOWTO/Webcam-HOWTO/hardware.html and the sketch below).
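A minimal sketch of adding the node by hand, assuming this is the first video device on the board — V4L2 video devices use character major 81, and the minor selects the device number:

mknod /dev/video0 c 81 0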
Step 3: Writing the application. (What follows is the overall flow; some algorithmic details are left to the posted code.)
1. V4L2 programming: capturing video;
Put simply, this stage is a sequence of read/write and ioctl operations on the camera device file under /dev: open the device, set the capture format (VIDIOC_S_FMT), request and mmap kernel buffers (VIDIOC_REQBUFS / VIDIOC_QUERYBUF), start streaming (VIDIOC_STREAMON), then loop dequeuing and re-queuing buffers (VIDIOC_DQBUF / VIDIOC_QBUF) to pull out frame data.
Introductions to this process are easy to find online (Baidu Baike has one, for example).
#ifndef _VIDEO_CAPTURE_H
#define _VIDEO_CAPTURE_H

#include <linux/videodev2.h>
#include <pthread.h>

#define PIC_WIDTH 544
#define PIC_HEIGHT 288
#define BUF_SIZE PIC_WIDTH * PIC_HEIGHT * 2 * 2
// C270 YUV 4:2:2 frame size (bytes)

struct frame_O {
    unsigned char data[546000];
    int size;                 // number of valid bytes
    pthread_rwlock_t rwlock;  // read-write lock
};

struct buffer {
    u_int8_t *start;
    u_int64_t length;
};

struct cam_data {
    unsigned char cam_mbuf[BUF_SIZE];
    int wpos;
    int rpos;
    pthread_cond_t captureOK;
    pthread_cond_t encodeOK;
    pthread_mutex_t lock;
};

struct camera {
    char *device_name;
    int fd;
    int width;
    int height;
    int display_depth;
    int image_size;
    int frame_number;
    struct v4l2_capability v4l2_cap;
    struct v4l2_cropcap v4l2_cropcap;
    struct v4l2_format v4l2_fmt;
    struct v4l2_crop crop;
    struct buffer *buffers;
};

void errno_exit(const char *s);

int xioctl(int fd, int request, void *arg);

void open_camera(struct camera *cam);
void close_camera(struct camera *cam);

void encode_frame(unsigned char *yuv_frame, int *wfd, struct frame_O *frameout);

int buffOneFrame(struct cam_data *tmp, struct camera *cam);

int read_and_encode_frame(struct camera *cam);

void start_capturing(struct camera *cam);
void stop_capturing(struct camera *cam);

void init_camera(struct camera *cam);
void uninit_camera(struct camera *cam);

void init_mmap(struct camera *cam);

void v4l2_init(struct camera *cam);
void v4l2_close(struct camera *cam);

#endif
#include <asm/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <malloc.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <time.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <linux/videodev2.h>
#include <dirent.h>
#include "video_capture.h"
#include "h264encoder.h"

#define CLEAR(x) memset(&(x), 0, sizeof(x))

typedef unsigned char uint8_t;

char h264_file_name[100] = "01.264\0";
FILE *h264_fp;
uint8_t *h264_buf;

unsigned int n_buffers = 0;
DIR *dirp;
Encoder en;

int cnt = 0;

void errno_exit(const char *s) {
    fprintf(stderr, "%s error %d, %s\n", s, errno, strerror(errno));
    exit(EXIT_FAILURE);
}

int xioctl(int fd, int request, void *arg) {
    int r = 0;
    do {
        r = ioctl(fd, request, arg);
    } while (-1 == r && EINTR == errno);

    return r;
}

void open_camera(struct camera *cam) {
    struct stat st;

    if (-1 == stat(cam->device_name, &st)) {
        fprintf(stderr, "Cannot identify '%s': %d, %s\n", cam->device_name,
                errno, strerror(errno));
        exit(EXIT_FAILURE);
    }

    if (!S_ISCHR(st.st_mode)) {
        fprintf(stderr, "%s is no device\n", cam->device_name);
        exit(EXIT_FAILURE);
    }

    cam->fd = open(cam->device_name, O_RDWR, 0); // | O_NONBLOCK

    if (-1 == cam->fd) {
        fprintf(stderr, "Cannot open '%s': %d, %s\n", cam->device_name, errno,
                strerror(errno));
        exit(EXIT_FAILURE);
    }
}

void close_camera(struct camera *cam) {
    if (-1 == close(cam->fd))
        errno_exit("close");

    cam->fd = -1;
}

void init_file() {
    h264_fp = fopen(h264_file_name, "wa+");
}

void close_file() {
    fclose(h264_fp);
}

void init_encoder(struct camera *cam) {
    compress_begin(&en, cam->width, cam->height);
    h264_buf = (uint8_t *) malloc(
            sizeof(uint8_t) * cam->width * cam->height * 3); // encoder output buffer
}

void close_encoder() {
    compress_end(&en);
    free(h264_buf);
}

void encode_frame(unsigned char *yuv_frame, int *wfd, struct frame_O *frameout) {
    int h264_length = 0;
    int tt[2];
    static int count = 0;

    h264_length = compress_frame(&en, -1, yuv_frame, h264_buf);

    if (h264_length > 0) {
        pthread_rwlock_wrlock(&(frameout->rwlock)); // take the write lock

        memcpy(frameout->data, h264_buf, h264_length);
        frameout->size = h264_length;

        tt[0] = h264_length;
        tt[1] = count++; // count: max = 2 147 483 647

        // Write to the pipe to notify the streaming thread.
        write(wfd[1], tt, 8);

        pthread_rwlock_unlock(&(frameout->rwlock)); // release the lock
    }
}

int buffOneFrame(struct cam_data *tmp, struct camera *cam) {
    unsigned char *data;
    int len;
    struct v4l2_buffer buf;

    CLEAR(buf);

    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    // This call fills in buf.index (0 <= buf.index <= 3).
    if (-1 == ioctl(cam->fd, VIDIOC_DQBUF, &buf)) {
        switch (errno) {
        case EAGAIN:
            return 0;
        case EIO:
            /* fall through: EIO could be ignored and retried */
        default:
            errno_exit("VIDIOC_DQBUF");
        }
    }

    data = (unsigned char *) (cam->buffers[buf.index].start); // start of the current frame
    len = (size_t) buf.bytesused;                             // length of the current frame

    if (tmp->wpos + len <= BUF_SIZE) { // enough room left in the buffer for this frame
        memcpy(tmp->cam_mbuf + tmp->wpos, data, len); // copy one frame into the buffer
        tmp->wpos += len;
    }

    if (-1 == ioctl(cam->fd, VIDIOC_QBUF, &buf))
        errno_exit("VIDIOC_QBUF");

    if (tmp->wpos + len > BUF_SIZE) { // not enough room for the next frame: switch buffers
        return 1;
    }

    return 0;
}

void start_capturing(struct camera *cam) {
    unsigned int i;
    enum v4l2_buf_type type;

    // Queue all mmap'ed buffers, then start streaming.
    for (i = 0; i < n_buffers; ++i) {
        struct v4l2_buffer buf;

        CLEAR(buf);

        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;

        if (-1 == xioctl(cam->fd, VIDIOC_QBUF, &buf))
            errno_exit("VIDIOC_QBUF");
    }

    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    if (-1 == xioctl(cam->fd, VIDIOC_STREAMON, &type))
        errno_exit("VIDIOC_STREAMON");
}

void stop_capturing(struct camera *cam) {
    enum v4l2_buf_type type;

    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    if (-1 == xioctl(cam->fd, VIDIOC_STREAMOFF, &type))
        errno_exit("VIDIOC_STREAMOFF");
}

void uninit_camera(struct camera *cam) {
    unsigned int i;

    for (i = 0; i < n_buffers; ++i)
        if (-1 == munmap(cam->buffers[i].start, cam->buffers[i].length))
            errno_exit("munmap");

    free(cam->buffers);
}

void init_mmap(struct camera *cam) {
    struct v4l2_requestbuffers req;

    CLEAR(req);

    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;

    // Ask the driver to allocate capture buffers.
    if (-1 == xioctl(cam->fd, VIDIOC_REQBUFS, &req)) {
        if (EINVAL == errno) {
            fprintf(stderr, "%s does not support memory mapping\n",
                    cam->device_name);
            exit(EXIT_FAILURE);
        } else {
            errno_exit("VIDIOC_REQBUFS");
        }
    }

    if (req.count < 2) {
        fprintf(stderr, "Insufficient buffer memory on %s\n", cam->device_name);
        exit(EXIT_FAILURE);
    }

    cam->buffers = calloc(req.count, sizeof(*(cam->buffers)));

    if (!cam->buffers) {
        fprintf(stderr, "Out of memory\n");
        exit(EXIT_FAILURE);
    }

    // Query each buffer allocated by VIDIOC_REQBUFS and map it into user space.
    for (n_buffers = 0; n_buffers < req.count; ++n_buffers) {
        struct v4l2_buffer buf;

        CLEAR(buf);

        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = n_buffers;

        if (-1 == xioctl(cam->fd, VIDIOC_QUERYBUF, &buf))
            errno_exit("VIDIOC_QUERYBUF");

        cam->buffers[n_buffers].length = buf.length;
        cam->buffers[n_buffers].start = mmap(NULL,
                buf.length, PROT_READ | PROT_WRITE,
                MAP_SHARED, cam->fd, buf.m.offset);

        if (MAP_FAILED == cam->buffers[n_buffers].start)
            errno_exit("mmap");
    }
}

void init_camera(struct camera *cam) {
    struct v4l2_format *fmt = &(cam->v4l2_fmt);

    CLEAR(*fmt);

    fmt->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt->fmt.pix.width = cam->width;
    fmt->fmt.pix.height = cam->height;
    fmt->fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV; // YUV 4:2:2
    fmt->fmt.pix.field = V4L2_FIELD_INTERLACED;

    if (-1 == xioctl(cam->fd, VIDIOC_S_FMT, fmt))
        errno_exit("VIDIOC_S_FMT");

    init_mmap(cam);
}

void v4l2_init(struct camera *cam) {
    open_camera(cam);
    init_camera(cam);
    start_capturing(cam);
    init_encoder(cam);
    init_file();
}

void v4l2_close(struct camera *cam) {
    stop_capturing(cam);
    uninit_camera(cam);
    close_camera(cam);
    free(cam);
    close_file();
    close_encoder();
}
2. Real-time H.264 encoding;
Using the H.264 encoder takes roughly these steps:
1) Create the parameter structure used to configure the encoder;
2) Set the encoder parameters;
No performance tuning has been done on the encoder here — and be aware that encoder throughput is one of the bottlenecks of the whole system.
To get encoder performance that truly matches the hardware, you would need to study H.264 further, trim the encoder, and hand-tune the parameters.
First of all, set the width and height of the picture to be encoded.
Here is a foolproof configuration approach I can recommend: use the parameter-setting functions that ship with the source.
Look at common.c under common/ in the x264 tree — read it carefully and you will find it is full of functions for configuring the encoder! Convenient, isn't it? It took me a good deal of stumbling before I discovered this shortcut.
Here we use x264_param_default_preset(en->param, "ultrafast", "zerolatency"). A brief explanation:
Argument 1: our parameter structure;
Argument 2: the speed preset — the slower the preset, the cleaner the picture; the faster, the coarser. Given the S3C2440's compute power, ultrafast, the fastest preset, is the sensible choice; for better quality, tune the parameters yourself following the structure of that function.
Argument 3: this selects real-time (zero-latency) encoding. Real time is exactly what we are after, so this argument is unquestionably the most important.
3) Call the encoder on each frame:
x264_encoder_encode();
#ifndef _H264ENCODER_H
#define _H264ENCODER_H

#include <stdint.h>
#include <stdio.h>
#include "include/x264.h"

typedef unsigned char uint8_t;

typedef struct {
    x264_param_t *param;
    x264_t *handle;
    x264_picture_t *picture;
    x264_nal_t *nal;
} Encoder;

void compress_begin(Encoder *en, int width, int height);

int compress_frame(Encoder *en, int type, uint8_t *in, uint8_t *out);

void compress_end(Encoder *en);

#endif
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "h264encoder.h"

void compress_begin(Encoder *en, int width, int height) {
    int m_frameRate = 25; // frame rate
    int m_bitRate = 1000; // bit rate (not applied below)

    en->param = (x264_param_t *) malloc(sizeof(x264_param_t));
    en->picture = (x264_picture_t *) malloc(sizeof(x264_picture_t));
    // Foolproof configuration: fastest preset plus real-time (zero-latency) tuning.
    x264_param_default_preset(en->param, "ultrafast", "zerolatency");

    en->param->i_width = width;       // picture width
    en->param->i_height = height;     // picture height
    en->param->b_repeat_headers = 1;  // repeat SPS/PPS in front of every keyframe
    en->param->b_cabac = 1;
    en->param->i_threads = 1;
    en->param->i_fps_num = (int) m_frameRate;
    en->param->i_fps_den = 1;
    en->param->i_keyint_max = 1;
    en->param->i_log_level = X264_LOG_NONE; // suppress encoder log output

    if ((en->handle = x264_encoder_open(en->param)) == 0) {
        return;
    }

    x264_picture_alloc(en->picture, X264_CSP_I420, en->param->i_width,
            en->param->i_height);
    en->picture->img.i_csp = X264_CSP_I420;
    en->picture->img.i_plane = 3;
}

int compress_frame(Encoder *en, int type, uint8_t *in, uint8_t *out) {
    x264_picture_t pic_out;
    int nNal = 0;
    int result = 0;
    int i = 0, j = 0;
    uint8_t *p_out = out;
    uint8_t *p422;

    en->nal = NULL;

    char *y = (char *) en->picture->img.plane[0];
    char *u = (char *) en->picture->img.plane[1];
    char *v = (char *) en->picture->img.plane[2];

    /* ---------------- YUV 4:2:2 (YUYV) to YUV 4:2:0 (I420) conversion ---------------- */
    int widthStep422 = en->param->i_width * 2;

    for (i = 0; i < en->param->i_height; i += 2) {
        p422 = in + i * widthStep422;

        for (j = 0; j < widthStep422; j += 4) {
            *(y++) = p422[j];
            *(u++) = p422[j + 1];
            *(y++) = p422[j + 2];
        }

        p422 += widthStep422;

        for (j = 0; j < widthStep422; j += 4) {
            *(y++) = p422[j];
            *(v++) = p422[j + 3];
            *(y++) = p422[j + 2];
        }
    }
    /* ---------------------------------------------------------------------------------- */

    switch (type) {
    case 0:
        en->picture->i_type = X264_TYPE_P;
        break;
    case 1:
        en->picture->i_type = X264_TYPE_IDR;
        break;
    case 2:
        en->picture->i_type = X264_TYPE_I;
        break;
    default:
        en->picture->i_type = X264_TYPE_AUTO;
        break;
    }

    if (x264_encoder_encode(en->handle, &(en->nal), &nNal, en->picture,
            &pic_out) < 0) {
        return -1;
    }
    en->picture->i_pts++;

    // Concatenate all NAL units of this frame into the output buffer.
    for (i = 0; i < nNal; i++) {
        memcpy(p_out, en->nal[i].p_payload, en->nal[i].i_payload);
        p_out += en->nal[i].i_payload;
        result += en->nal[i].i_payload;
    }

    return result;
}

void compress_end(Encoder *en) {
    if (en->picture) {
        x264_picture_clean(en->picture);
        free(en->picture);
        en->picture = 0;
    }
    if (en->param) {
        free(en->param);
        en->param = 0;
    }
    if (en->handle) {
        x264_encoder_close(en->handle);
    }
    // Note: the Encoder object itself is owned by the caller (a global in
    // video_capture.c), so it must not be free()d here; the original free(en)
    // passed the address of a global to free() and has been removed.
}
3. Publishing the H.264 stream with live555;
We need to write our own H.264 source class, H264LiveVideoSource, which can inherit from FramedSource and implement doGetNextFrame(), the function that fetches one frame from the in-memory buffer (for a concrete model, see ByteStreamMemoryBufferSource in liveMedia/ByteStreamMemoryBufferSource.cpp).
We also need our own media subsession class, H264LiveVideoServerMediaSubsession, which can inherit from OnDemandServerMediaSubsession and implement createNewStreamSource() and createNewRTPSink(). The former creates our custom H.264 source, hands it to an H264VideoStreamFramer, and returns the result as a FramedSource*; the latter creates the RTPSink — the consumer of the data — through which the data is sent out as an RTP stream. See H264VideoFileServerMediaSubsession (liveMedia/H264VideoFileServerMediaSubsession.cpp) for reference.
Once the source and the sink are in place, a few more steps create the server that publishes the H.264 stream; see the H.264 part of testProgs/testOnDemandRTSPServer.cpp.
/*
 * H264_Live_Video_Stream.hh
 *
 * Created on: 2013-10-23
 * Author: root
 */

#ifndef _FRAMED_FILTER_HH
#include "FramedSource.hh"
#include "H264VideoRTPSink.hh"
#include "H264VideoStreamFramer.hh"
#include "ByteStreamMemoryBufferSource.hh"
#include "ByteStreamFileSource.hh"
#include <pthread.h>
#endif

//*********************************************************************
struct frame_I {
    unsigned char data[546000];
    int size;
    pthread_rwlock_t rwlock; // read-write lock
};
//*********************************************************************

class H264LiveVideoSource: public FramedSource {
public:
    static H264LiveVideoSource* createNew(UsageEnvironment& env,
            frame_I* frame,
            Boolean deleteBufferOnClose = True,
            unsigned preferredFrameSize = 0,
            unsigned playTimePerFrame = 0, int fd_pipe[2] = NULL);
    // "preferredFrameSize" == 0 means 'no preference'
    // "playTimePerFrame" is in microseconds

    u_int64_t bufferSize() const { return fBuffer->size; }

    void seekToByteAbsolute(u_int64_t byteNumber, u_int64_t numBytesToStream = 0);
    // if "numBytesToStream" is >0, then we limit the stream to that number of bytes, before treating it as EOF
    void seekToByteRelative(int64_t offset);

protected:
    H264LiveVideoSource(UsageEnvironment& env,
            frame_I* frame,
            Boolean deleteBufferOnClose,
            unsigned preferredFrameSize,
            unsigned playTimePerFrame, int pipe[2]);
    // called only by createNew()

    virtual ~H264LiveVideoSource();

private:
    // redefined virtual functions:
    virtual void doGetNextFrame();

private:
    int fd[2];
    frame_I* fBuffer;
    Boolean fDeleteBufferOnClose;
    unsigned fPreferredFrameSize;
    unsigned fPlayTimePerFrame;
    unsigned fLastPlayTime;
    Boolean fLimitNumBytesToStream;
    u_int64_t fNumBytesToStream; // used iff "fLimitNumBytesToStream" is True
};

//////////////////////////////////////////////////////////////////////////////////////
//////////////////////// H264LiveVideoServerMediaSubsession ///////////////////////////
//////////////////////////////////////////////////////////////////////////////////////

#ifndef H264_LIVE_VIDEO_STREAM_HH_
#include "H264VideoFileServerMediaSubsession.hh"
#define H264_LIVE_VIDEO_STREAM_HH_

class H264LiveVideoServerMediaSubsession: public OnDemandServerMediaSubsession {
public:
    static H264LiveVideoServerMediaSubsession*
    createNew(UsageEnvironment& env, Boolean reuseFirstSource, frame_I* frame, int fifo[2]);
    void checkForAuxSDPLine1();
    void afterPlayingDummy1();
private:
    H264LiveVideoServerMediaSubsession(
            UsageEnvironment& env, Boolean reuseFirstSource,
            frame_I* frame, int fifo[2]);
    virtual ~H264LiveVideoServerMediaSubsession();

    void setDoneFlag() { fDoneFlag = ~0; }

private: // redefined virtual functions
    virtual FramedSource* createNewStreamSource(unsigned clientSessionId,
            unsigned& estBitrate);
    virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
            unsigned char rtpPayloadTypeIfDynamic,
            FramedSource* inputSource);
    virtual char const* getAuxSDPLine(RTPSink* rtpSink,
            FramedSource* inputSource);

private:
    frame_I *pNal;
    int m_fifo[2];
    char* fAuxSDPLine;
    char fDoneFlag;         // used when setting up "fAuxSDPLine"
    RTPSink* fDummyRTPSink; // ditto
};

#endif /* H264_LIVE_VIDEO_STREAM_HH_ */
#include <GroupsockHelper.hh>
#include "H264_Live_Video_Stream.hh"

////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////// H264LiveVideoSource //////////////////////////////
////////////////////////////////////////////////////////////////////////////////////

H264LiveVideoSource*
H264LiveVideoSource::createNew(UsageEnvironment& env,
        frame_I* frame,
        Boolean deleteBufferOnClose,
        unsigned preferredFrameSize,
        unsigned playTimePerFrame,
        int fd_pipe[2]) {

    if (frame == NULL) return NULL;

    H264LiveVideoSource* videosource = new H264LiveVideoSource(env,
            frame, deleteBufferOnClose,
            preferredFrameSize, playTimePerFrame, fd_pipe);

    return videosource;
}

H264LiveVideoSource::H264LiveVideoSource(UsageEnvironment& env,
        frame_I* frame,
        Boolean deleteBufferOnClose,
        unsigned preferredFrameSize,
        unsigned playTimePerFrame, int fd_pipe[2])
    : FramedSource(env), fDeleteBufferOnClose(deleteBufferOnClose),
      fPreferredFrameSize(preferredFrameSize), fPlayTimePerFrame(playTimePerFrame),
      fLastPlayTime(0) {

    // Keep a pointer to the caller-owned frame buffer.
    // (The original also malloc'ed sizeof(struct frame_I*) here and immediately
    // overwrote the pointer, leaking it; that has been removed.)
    fBuffer = frame;
    fd[0] = fd_pipe[0];
    fd[1] = fd_pipe[1];
}

H264LiveVideoSource::~H264LiveVideoSource() {
    if (fDeleteBufferOnClose) delete[] fBuffer;
}

void H264LiveVideoSource::seekToByteAbsolute(u_int64_t byteNumber, u_int64_t numBytesToStream) {
    // not applicable to a live source
}

void H264LiveVideoSource::seekToByteRelative(int64_t offset) {
    // not applicable to a live source
}

void H264LiveVideoSource::doGetNextFrame() {
    int ss[2];

    // Block on the pipe until the encoder signals a new frame:
    // ss[0] is the number of valid bytes in the buffer, ss[1] a frame counter.
    read(fd[0], ss, 8);

    pthread_rwlock_rdlock(&(fBuffer->rwlock)); // take the read lock
    fFrameSize = ss[0];

    if (fFrameSize > fMaxSize) {
        // Any excess is simply dropped; fNumTruncatedBytes reports it upstream.
        fNumTruncatedBytes = fFrameSize - fMaxSize;
        fFrameSize = fMaxSize;
    } else {
        fNumTruncatedBytes = 0;
    }

    memmove(fTo, fBuffer->data, fFrameSize);

    pthread_rwlock_unlock(&(fBuffer->rwlock)); // release the lock

    if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
        if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
            // This is the first frame, so use the current time:
            gettimeofday(&fPresentationTime, NULL);
        } else {
            // Increment by the play time of the previous data:
            unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
            fPresentationTime.tv_sec += uSeconds / 1000000;
            fPresentationTime.tv_usec = uSeconds % 1000000;
        }

        // Remember the play time of this data:
        fLastPlayTime = (fPlayTimePerFrame * fFrameSize) / fPreferredFrameSize;
        fDurationInMicroseconds = fLastPlayTime;
    } else {
        // We don't know a specific play time duration for this data,
        // so just record the current time as being the 'presentation time':
        gettimeofday(&fPresentationTime, NULL);
    }

    FramedSource::afterGetting(this);
}

//////////////////////////////////////////////////////////////////////////////////////
//////////////////////// H264LiveVideoServerMediaSubsession ///////////////////////////
//////////////////////////////////////////////////////////////////////////////////////

H264LiveVideoServerMediaSubsession*
H264LiveVideoServerMediaSubsession::createNew(
        UsageEnvironment& env, Boolean reuseFirstSource,
        frame_I* frame, int fifo[2]) {

    return new H264LiveVideoServerMediaSubsession(env, reuseFirstSource, frame, fifo);
}

static void checkForAuxSDPLine(void* clientData) {
    H264LiveVideoServerMediaSubsession* subsess = (H264LiveVideoServerMediaSubsession*) clientData;
    subsess->checkForAuxSDPLine1();
}

void H264LiveVideoServerMediaSubsession::checkForAuxSDPLine1() {
    char const* dasl;

    if (fAuxSDPLine != NULL) {
        // Signal the event loop that we're done:
        setDoneFlag();
    } else if (fDummyRTPSink != NULL && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
        fAuxSDPLine = strDup(dasl);
        fDummyRTPSink = NULL;

        // Signal the event loop that we're done:
        setDoneFlag();
    } else {
        // try again after a brief delay:
        int uSecsToDelay = 100000; // 100 ms
        nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay,
                (TaskFunc*) checkForAuxSDPLine, this);
    }
}

void H264LiveVideoServerMediaSubsession::afterPlayingDummy1() {
    // Unschedule any pending 'checking' task:
    envir().taskScheduler().unscheduleDelayedTask(nextTask());
    // Signal the event loop that we're done:
    setDoneFlag();
}

H264LiveVideoServerMediaSubsession::H264LiveVideoServerMediaSubsession(
        UsageEnvironment& env, Boolean reuseFirstSource,
        frame_I* frame, int fifo[2]) :
        OnDemandServerMediaSubsession(env, reuseFirstSource),
        fAuxSDPLine(NULL), fDoneFlag(0), fDummyRTPSink(NULL) {

    // Keep a pointer to the caller-owned frame buffer (the leaky malloc that
    // used to precede this assignment has been removed).
    pNal = frame;
    m_fifo[0] = fifo[0];
    m_fifo[1] = fifo[1];
}

H264LiveVideoServerMediaSubsession::~H264LiveVideoServerMediaSubsession() {
    delete[] fAuxSDPLine;
    // pNal points at a buffer owned by main(), so it is not freed here.
}

static void afterPlayingDummy(void* clientData) {
    H264LiveVideoServerMediaSubsession* subsess = (H264LiveVideoServerMediaSubsession*) clientData;
    subsess->afterPlayingDummy1();
}

char const* H264LiveVideoServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
    if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)

    if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
        // Note: For H264 video files, the 'config' information ("profile-level-id" and
        // "sprop-parameter-sets") isn't known until we start reading the file. This means
        // that "rtpSink"s "auxSDPLine()" will be NULL initially, and we need to start
        // reading data from our file until this changes.
        fDummyRTPSink = rtpSink;

        // Start reading the file:
        fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

        // Check whether the sink's 'auxSDPLine()' is ready:
        checkForAuxSDPLine(this);
    }

    envir().taskScheduler().doEventLoop(&fDoneFlag);

    return fAuxSDPLine;
}

FramedSource* H264LiveVideoServerMediaSubsession::createNewStreamSource(
        unsigned /*clientSessionId*/, unsigned& estBitrate) {

    estBitrate = 500;
    H264LiveVideoSource *buffsource = H264LiveVideoSource::createNew(envir(), pNal, false, 15000, 40, m_fifo);

    if (buffsource == NULL) return NULL;

    FramedSource* videoES = buffsource;

    // Wrap our raw source in a framer that parses the H.264 NAL stream.
    H264VideoStreamFramer* videoSource = H264VideoStreamFramer::createNew(envir(), videoES);

    return videoSource;
}

// Create the RTP sink (this also determines the SDP media description).
RTPSink* H264LiveVideoServerMediaSubsession::createNewRTPSink(
        Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
        FramedSource* /*inputSource*/) {

    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}
The header option.h gathers the more important parameter settings.
#ifndef OPTION_H
#define OPTION_H

#define PIC_WIDTH 320                              // picture width
#define PIC_HEIGHT 240                             // picture height
#define YUV_FRAME_SIZE PIC_WIDTH * PIC_HEIGHT * 2  // size of one YUV422 frame
#define BUF_SIZE YUV_FRAME_SIZE * 4                // the buffer holds 4 frames
#define ENCODE_SIZE (PIC_WIDTH * PIC_HEIGHT * 2)   // buffer for one encoded frame
#define CAM_NAME "/dev/video1"                     // camera device file
#define DelayTime 40*1000                          // 40 us * 1000 = 0.04 s -> 25 fps
#define NOCONTROL // network control disabled
//#define CONTROL // network control enabled
/*
 All resolutions supported by the C270:
 160*120; 176*144; 320*176; 320*240; 352*288; 432*240; 544*288;
 640*360; 640*480; 752*416; 800*448; 800*600; 864*480; 960*544;
 960*720; 1024*576; 1184*656; 1280*720; 1280*960
*/
#endif
Bear in mind that once a streaming session is established, doGetNextFrame() is called in a loop, so we have to synchronize it with the encoder thread and keep the buffer reads and writes safe. Here that is done with the pipe — the blocking read at the top of doGetNextFrame() sleeps until encode_frame() writes the new frame's length into it — together with the read-write lock around the shared frame buffer.
Here is main():
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>
#include <netdb.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <signal.h>
#include <sys/time.h>
#include <stddef.h>
#include <time.h>

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"
#include "GroupsockHelper.hh"
#include "H264_Live_Video_Stream.hh"
extern "C" {
#include "video_capture.h"
#include "h264encoder.h"
}
#include "version.hh"

struct cam_data Buff[2];   // capture-thread double buffer
pthread_t thread[3];       // three threads: capture, encode, streaming
int volatile flag[2];      // buffer-state flags
int framelength = 0;
struct frame_O tmp[1];     // holds the most recent encoded frame
int pipefd[2];

struct camera *cam;

int init();
static void initBuff(struct cam_data *c);
static void initBuff1(struct frame_O *c);
void afterPlaying();
void play();
void *video_Capture_Thread(void *);
void *video_Encode_Thread(void *);
void *live555_trans_Thread(void *);
void thread_create(void);
void thread_wait(void);
static void announceStream(RTSPServer *rtspServer, ServerMediaSession *sms,
        char const *streamName, char const *inputFileName); // fwd

UsageEnvironment *env;
ByteStreamMemoryBufferSource *videoSource;
FramedSource *inputSource;
RTPSink *videoSink;
Boolean reuseFirstSource = true; // all clients read the same in-memory source

#ifdef NOCONTROL
int main(int argc, char **argv) {
    init();
    return 0;
}
#endif

static void announceStream(RTSPServer *rtspServer, ServerMediaSession *sms,
        char const *streamName, char const *inputFileName) {
    char *url = rtspServer->rtspURL(sms);
    UsageEnvironment &env = rtspServer->envir();

    env << "Play this stream using the URL \"" << url << "\"\n";
    delete[] url;
}

///////////////////////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////////////////////

int init() {
    cam = (struct camera *) malloc(sizeof(struct camera));

    if (!cam) {
        printf("malloc camera failure!\n");
        exit(1);
    }

    cam->device_name = (char *) CAM_NAME;
    cam->buffers = NULL;
    cam->width = PIC_WIDTH;
    cam->height = PIC_HEIGHT;

    framelength = YUV_FRAME_SIZE;

    v4l2_init(cam);

    // Initialize the buffers.
    {
        initBuff(Buff);
        initBuff1(tmp);
        pipe(pipefd); // pipe used for signaling between the encoder and live555 threads
    }

    thread_create(); // create the threads
    thread_wait();   // wait for the threads to finish

    v4l2_close(cam); // shut down capture and encoder

    return 0;
}

// Initialize both capture buffers in place.
// (The original malloc'ed a fresh cam_data and initialized that instead, leaking it
// and leaving the global Buff[] merely zero-initialized; fixed here.)
static void initBuff(struct cam_data *c) {
    int i;

    flag[0] = flag[1] = 0;

    for (i = 0; i < 2; i++) {
        pthread_mutex_init(&c[i].lock, NULL);
        pthread_cond_init(&c[i].captureOK, NULL);
        pthread_cond_init(&c[i].encodeOK, NULL);
        c[i].rpos = 0;
        c[i].wpos = 0;
    }
}

// Initialize the shared output frame in place (same fix as initBuff).
static void initBuff1(struct frame_O *c) {
    c->size = 0;
    pthread_rwlock_init(&c->rwlock, NULL);
}

void *video_Capture_Thread(void *) {
    int i = 0;
    int len = framelength;
    struct timeval now;
    struct timespec outtime;

    while (1) {
        usleep(DelayTime);

        gettimeofday(&now, NULL);
        outtime.tv_sec = now.tv_sec;
        outtime.tv_nsec = DelayTime * 1000;

        pthread_mutex_lock(&(Buff[i].lock));

        // Wait while the buffer is full and the encoder has not caught up.
        while ((Buff[i].wpos + len) % BUF_SIZE == Buff[i].rpos && Buff[i].rpos != 0) {
            pthread_cond_timedwait(&(Buff[i].encodeOK), &(Buff[i].lock), &outtime);
        }

        if (buffOneFrame(&Buff[i], cam)) {
            // Current buffer is full: hand it to the encoder and switch buffers.
            pthread_cond_signal(&(Buff[i].captureOK));
            pthread_mutex_unlock(&(Buff[i].lock));

            flag[i] = 1;
            Buff[i].rpos = 0;
            i = !i;
            Buff[i].wpos = 0;
            flag[i] = 0;
        }

        pthread_cond_signal(&(Buff[i].captureOK));
        pthread_mutex_unlock(&(Buff[i].lock));
    }
    return 0;
}

void *video_Encode_Thread(void *) {
    int i = -1;

    while (1) {
        usleep(1);

        // Guard i >= 0: the original evaluated flag[-1] on the first pass.
        if ((flag[1] == 0 && flag[0] == 0) || (i >= 0 && flag[i] == -1))
            continue;

        if (flag[0] == 1)
            i = 0;
        if (flag[1] == 1)
            i = 1;

        pthread_mutex_lock(&(Buff[i].lock));

        // Encode one frame.
        encode_frame((Buff[i].cam_mbuf + Buff[i].rpos), pipefd, tmp);

        Buff[i].rpos += framelength;

        if (Buff[i].rpos >= BUF_SIZE) {
            Buff[i].rpos = 0;
            Buff[!i].rpos = 0;
            flag[i] = -1;
        }

        pthread_cond_signal(&(Buff[i].encodeOK));
        pthread_mutex_unlock(&(Buff[i].lock));
    }
    return 0;
}

void *live555_trans_Thread(void *) {

    while (tmp != NULL) { // always true for the global array; the body runs once (see break)
        sleep(1);

        // Begin by setting up our usage environment:
        TaskScheduler *scheduler = BasicTaskScheduler::createNew();
        env = BasicUsageEnvironment::createNew(*scheduler);

        UserAuthenticationDatabase *authDB = NULL;
#ifdef ACCESS_CONTROL
        // To implement client access control to the RTSP server, do the following:
        authDB = new UserAuthenticationDatabase;
        authDB->addUserRecord("ls", "123"); // replace these with real strings
        // Repeat the above with each <username>, <password> that you wish to allow
        // access to the server.
#endif

        // Create the RTSP server:
        RTSPServer *rtspServer = RTSPServer::createNew(*env, 8554, authDB);
        if (rtspServer == NULL) {
            *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
            exit(1);
        }

        char const *descriptionString
            = "Session streamed by \"testOnDemandRTSPServer\"";

        // A H.264 video elementary stream:
        {
            char const *streamName = "live";
            char const *inputFileName = "01.264";
            ServerMediaSession *sms
                = ServerMediaSession::createNew(*env, streamName, streamName,
                        descriptionString);

            // Use our own subsession class (frame_I and frame_O share one layout).
            sms->addSubsession(H264LiveVideoServerMediaSubsession
                    ::createNew(*env, reuseFirstSource, (struct frame_I *) tmp, pipefd));

            rtspServer->addServerMediaSession(sms);

            announceStream(rtspServer, sms, streamName, inputFileName);
        }

        break;
    }

    // Enter the event loop (does not return).
    env->taskScheduler().doEventLoop();

    return 0;
}

void thread_create(void) {
    int temp;

    memset(&thread, 0, sizeof(thread));

    if ((temp = pthread_create(&thread[0], NULL, &video_Capture_Thread, NULL)) != 0)
        printf("video_Capture_Thread create fail!\n");

    if ((temp = pthread_create(&thread[1], NULL, &video_Encode_Thread, NULL)) != 0)
        printf("video_Encode_Thread create fail!\n");

    if ((temp = pthread_create(&thread[2], NULL, &live555_trans_Thread, NULL)) != 0)
        printf("live555_trans_Thread create fail!\n");
}

void thread_wait(void) {
    if (thread[0] != 0) {
        pthread_join(thread[0], NULL);
    }
    if (thread[1] != 0) {
        pthread_join(thread[1], NULL);
    }
    if (thread[2] != 0) {
        pthread_join(thread[2], NULL);
    }
}
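Once the program is running on the board, any RTSP client on the LAN can pull the stream. A quick test from the laptop — the IP below is an assumption, so substitute the eth0 address you saw in rcS; the port (8554) and stream name (live) come from the code above:

# live555's own command-line client, or open the same URL in VLC:
openRTSP rtsp://192.168.1.230:8554/live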
To be continued...
posted on 2013-11-26 20:49 by Time Kiler