
ROS OpenCV

2022-04-05 15:41  jym蒟蒻

Vision Node Testing

First, run a few tests and record the data.

In the node graphs, the round shapes are nodes and the square ones are topics.

1.robot_camera.launch

@robot:~$ roslaunch robot_vision robot_camera.launch
... logging to /home/jym/.ros/log/bff715b6-9201-11ec-b271-ac8247315e93/roslaunch-robot-8830.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:39005/

SUMMARY
========

PARAMETERS
 * /rosdistro: melodic
 * /rosversion: 1.14.10
 * /uvc_camera/camera_info_url: file:///home/jym...
 * /uvc_camera/device: /dev/video0
 * /uvc_camera/fps: 30
 * /uvc_camera/frame_id: /base_camera_link
 * /uvc_camera/image_height: 480
 * /uvc_camera/image_width: 640
 * /uvc_camera/io_method: mmap
 * /uvc_camera/pixel_format: jpeg

NODES
  /
    uvc_camera (uvc_camera/uvc_camera_node)

auto-starting new master
process[master]: started with pid [8840]
ROS_MASTER_URI=http://192.168.0.110:11311

setting /run_id to bff715b6-9201-11ec-b271-ac8247315e93
process[rosout-1]: started with pid [8852]
started core service [/rosout]
process[uvc_camera-2]: started with pid [8856]
[ INFO] [1645329679.219295497]: camera calibration URL: file:///home/jym/catkin_ws/src/robot_vision/config/astrapro.yaml
opening /dev/video0
pixfmt 0 = 'MJPG' desc = 'Motion-JPEG'
  discrete: 640x480:   1/30 
  discrete: 1920x1080:   1/30 
  discrete: 1280x720:   1/30 
  discrete: 320x240:   1/30 
  discrete: 640x480:   1/30 
pixfmt 1 = 'YUYV' desc = 'YUYV 4:2:2'
  discrete: 640x480:   1/30 
  discrete: 1280x720:   1/10 
  discrete: 320x240:   1/30 
  discrete: 640x480:   1/30 
  int (Brightness, 0, id = 980900): -64 to 64 (1)
  int (Contrast, 0, id = 980901): 0 to 64 (1)
  int (Saturation, 0, id = 980902): 0 to 128 (1)
  int (Hue, 0, id = 980903): -40 to 40 (1)
  bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
  int (Gamma, 0, id = 980910): 72 to 500 (1)
  int (Gain, 0, id = 980913): 0 to 100 (1)
  menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
    0: Disabled
    1: 50 Hz
    2: 60 Hz
  int (White Balance Temperature, 16, id = 98091a): 2800 to 6500 (1)
  int (Sharpness, 0, id = 98091b): 0 to 6 (1)
  int (Backlight Compensation, 0, id = 98091c): 0 to 2 (1)
  menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
  int (Exposure (Absolute), 16, id = 9a0902): 1 to 5000 (1)
  bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)

@robot:~$ rostopic list
/camera_info
/image_raw
/image_raw/compressed
/image_raw/compressed/parameter_descriptions
/image_raw/compressed/parameter_updates
/image_raw/compressedDepth
/image_raw/compressedDepth/parameter_descriptions
/image_raw/compressedDepth/parameter_updates
/image_raw/theora
/image_raw/theora/parameter_descriptions
/image_raw/theora/parameter_updates
/rosout
/rosout_agg
@robot:~$ rostopic info /camera_info
Type: sensor_msgs/CameraInfo

Publishers: 
 * /uvc_camera (http://192.168.0.110:43741/)

Subscribers: None


@robot:~$ rostopic echo /camera_info
header: 
  seq: 8269
  stamp: 
    secs: 1645330011
    nsecs: 812194467
  frame_id: "/base_camera_link"
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.155778, -0.139286, 0.023517, 0.023536, 0.0]
K: [588.973899, 0.0, 340.56508, 0.0, 588.74004, 276.946637, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [616.194702, 0.0, 352.115424, 0.0, 0.0, 618.615906, 285.316735, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
header: 
  seq: 8270
  stamp: 
    secs: 1645330011
    nsecs: 851798254
  frame_id: "/base_camera_link"
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.155778, -0.139286, 0.023517, 0.023536, 0.0]
K: [588.973899, 0.0, 340.56508, 0.0, 588.74004, 276.946637, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [616.194702, 0.0, 352.115424, 0.0, 0.0, 618.615906, 285.316735, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
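The K row above is the 3x3 intrinsic matrix in row-major order: [fx, 0, cx, 0, fy, cy, 0, 0, 1]. As a minimal sketch (the script and node name are mine, not part of the package), a rospy subscriber that extracts the intrinsics:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import CameraInfo

def callback(msg):
    # K is a row-major 3x3 matrix: [fx, 0, cx, 0, fy, cy, 0, 0, 1]
    fx, cx = msg.K[0], msg.K[2]
    fy, cy = msg.K[4], msg.K[5]
    rospy.loginfo("fx=%.2f fy=%.2f cx=%.2f cy=%.2f", fx, fy, cx, cy)

if __name__ == '__main__':
    rospy.init_node("camera_info_listener")
    rospy.Subscriber("/camera_info", CameraInfo, callback)
    rospy.spin()

With the values echoed above, this would log fx=588.97, fy=588.74, cx=340.57, cy=276.95.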


2.face_recognition.launch

@robot:~$ roslaunch robot_vision face_recognition.launch
... logging to /home/jym/.ros/log/1a209a74-9204-11ec-9c2b-ac8247315e93/roslaunch-robot-16110.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:35649/

SUMMARY
========

PARAMETERS
 * /face_detection/debug_view: False
 * /face_detection/eyes_cascade_name: /usr/share/opencv...
 * /face_detection/face_cascade_name: /usr/share/opencv...
 * /face_detection/queue_size: 3
 * /face_detection/use_camera_info: False
 * /face_recognition/data_dir: /home/jym/catk...
 * /face_recognition/queue_size: 100
 * /rosdistro: melodic
 * /rosversion: 1.14.10

NODES
  /
    face_detection (opencv_apps/face_detection)
    face_recognition (opencv_apps/face_recognition)

ROS_MASTER_URI=http://192.168.0.110:11311

process[face_detection-1]: started with pid [16141]
process[face_recognition-2]: started with pid [16142]
[ INFO] [1645330916.918482639]: Initializing nodelet with 1 worker threads.
[ INFO] [1645330917.362862805]: training with 1 images
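Both nodes publish their results as opencv_apps/FaceArrayStamped messages. A minimal listener sketch (the script is mine; /face_detection/faces assumes the default topic name, and the label field is filled in by face_recognition):

#!/usr/bin/env python
import rospy
from opencv_apps.msg import FaceArrayStamped

def callback(msg):
    for f in msg.faces:
        # each face carries a bounding rect; face_recognition also fills in a label
        rospy.loginfo("face at (%.0f, %.0f) size %.0fx%.0f label=%s",
                      f.face.x, f.face.y, f.face.width, f.face.height, f.label)

if __name__ == '__main__':
    rospy.init_node("face_listener")
    rospy.Subscriber("/face_detection/faces", FaceArrayStamped, callback)
    rospy.spin()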


3.face_recognition_train.launch

@robot:~$ roslaunch robot_vision face_recognition_train.launch
... logging to /home/jym/.ros/log/1a209a74-9204-11ec-9c2b-ac8247315e93/roslaunch-robot-16483.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:42635/

SUMMARY
========

PARAMETERS
 * /rosdistro: melodic
 * /rosversion: 1.14.10

NODES
  /
    face_recognition_trainer (opencv_apps/face_recognition_trainer.py)

ROS_MASTER_URI=http://192.168.0.110:11311

process[face_recognition_trainer-1]: started with pid [16519]
Please input your name and press Enter: joys 
Your name is joys. Correct? [y/n]: y
Please stand at the center of the camera and press Enter: 
taking picture...
One more picture? [y/n]: n
sending to trainer...
OK. Trained successfully!
[face_recognition_trainer-1] process has finished cleanly
log file: /home/jym/.ros/log/1a209a74-9204-11ec-9c2b-ac8247315e93/face_recognition_trainer-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done


4.hough_lines.launch

@robot:~$ roslaunch robot_vision hough_lines.launch
... logging to /home/jym/.ros/log/bab0a44c-9205-11ec-b79c-ac8247315e93/roslaunch-robot-19161.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:33599/

SUMMARY
========

PARAMETERS
 * /hough_lines/debug_view: False
 * /hough_lines/hough_type: 0
 * /hough_lines/maxLineGrap: 10
 * /hough_lines/minLineLength: 30
 * /hough_lines/queue_size: 3
 * /hough_lines/rho: 1.0
 * /hough_lines/theta: 1.0
 * /hough_lines/threshold: 125
 * /hough_lines/use_camera_info: False
 * /rosdistro: melodic
 * /rosversion: 1.14.10

NODES
  /
    hough_lines (opencv_apps/hough_lines)

ROS_MASTER_URI=http://192.168.0.110:11311

process[hough_lines-1]: started with pid [19199]
[ WARN] [1645331407.509654448]: '/hough_lines' subscribes topics only with child subscribers.
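This warning is opencv_apps' lazy subscription at work: the node does not subscribe to its input image until something subscribes to one of its outputs, so nothing is processed yet. Subscribing to a result topic on the PC, for example in rqt_image_view (the annotated image should be on /hough_lines/image and the detected lines on /hough_lines/lines, assuming default topic names), starts the pipeline and makes the warning go away.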


Folder Analysis


The package contains four folders: config, data, launch, and scripts.

The config folder

config: stores configuration-related files.

For example, there is a file named AR_track_camera.rviz: start AR tag tracking on the robot side, start RViz on the PC side, then load this config file (File -> Open Config), and RViz shows each tag's position and distance.

Without this config file, RViz opens with its default layout.


Now check the path of the current RViz config file: it is the default one.


After loading our own config file, the view changes accordingly; in other words, the rviz config file controls what the rviz tool displays.


The config folder also contains line_hsv.cfg, a configuration file for dynamic reconfigure.

It declares which parameters are tunable and what their default values are.

#!/usr/bin/env python
PACKAGE = "robot_vision"
from dynamic_reconfigure.parameter_generator_catkin import *

gen = ParameterGenerator()

# gen.add(name, type, level, description, default, min, max)
gen.add("h_lower", int_t, 0, "HSV color space h_low", 0, 0, 255)
gen.add("s_lower", int_t, 0, "HSV color space s_low", 0, 0, 255)
gen.add("v_lower", int_t, 0, "HSV color space v_low", 0, 0, 255)
gen.add("h_upper", int_t, 0, "HSV color space h_high", 255, 0, 255)
gen.add("s_upper", int_t, 0, "HSV color space s_high", 255, 0, 255)
gen.add("v_upper", int_t, 0, "HSV color space v_high", 255, 0, 255)

exit(gen.generate(PACKAGE, "robot_vision", "line_hsv"))
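Besides rqt_reconfigure, the generated config can also be driven programmatically. A minimal sketch, assuming the line-following node is running under the name linefollow (the name given in line_follow.launch further below) and using the default bounds from that launch file:

#!/usr/bin/env python
import rospy
import dynamic_reconfigure.client

rospy.init_node("hsv_setter")  # a reconfigure client still needs a node
client = dynamic_reconfigure.client.Client("linefollow", timeout=5)
# push a full set of HSV bounds to the running node
client.update_configuration({"h_lower": 0, "s_lower": 125, "v_lower": 125,
                             "h_upper": 30, "s_upper": 255, "v_upper": 200})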

Other files:

data: stores the face classifier data.

CMakeLists.txt:

cmake_minimum_required(VERSION 2.8.3)
project(robot_vision)

# vision processing uses the dynamic reconfigure feature
find_package(catkin REQUIRED
    dynamic_reconfigure # depend on the dynamic_reconfigure package
)
generate_dynamic_reconfigure_options( # pull in the parameter config file
#   cfg/DynReconf1.cfg
  config/line_hsv.cfg
)
Clear the previous build output (deleting the build and devel folders in the workspace has the same effect):
jym@ubuntu:~$ cd ~/catkin_ws
jym@ubuntu:~/catkin_ws$ catkin_make clean
Base path: /home/jym/catkin_ws
Source space: /home/jym/catkin_ws/src
Build space: /home/jym/catkin_ws/build
Devel space: /home/jym/catkin_ws/devel
Install space: /home/jym/catkin_ws/install

You can see that line_hsvConfig.py is generated under devel. The dynamic_reconfigure dependency and the cfg file registered in CMakeLists.txt exist precisely so that catkin_make generates these required files, covering both the Python and the C++ interfaces.

[  2%] Generating dynamic reconfigure files from config/line_hsv.cfg: /home/jym/catkin_ws/devel/include/robot_vision/line_hsvConfig.h /home/jym/catkin_ws/devel/lib/python2.7/dist-packages/robot_vision/cfg/line_hsvConfig.py

package.xml

<?xml version="1.0"?>
<package format="2"><!-- package标签的版本 -->

  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>dynamic_reconfigure</build_depend><!-- 增加动态调参的编译依赖项 -->
  <exec_depend>rospy</exec_depend>
  <exec_depend>sensor_msgs</exec_depend>
  <exec_depend>cv_bridge</exec_depend>
  <exec_depend>opencv_apps</exec_depend><!-- format2使用exec_depend标签 -->
  <exec_depend>uvc_camera</exec_depend>
</package>
jym@ubuntu:~/catkin_ws/src$ rosdep check --from-path robot_vision/
All system dependencies have been satisfied

    
If any dependencies are missing:
rosdep install --from-path robot_vision/

The launch folder

Most of these launch files drive nodes from the opencv_apps package.

You can edit the default values inside the launch files so they match the topic names your robot actually publishes.


Part of one of the launch files:

 <!-- hough_lines.cpp -->
  <node name="$(arg node_name)" pkg="opencv_apps" type="hough_lines" >
@ubuntu:~$ roscd opencv_apps/
@ubuntu:/opt/ros/melodic/share/opencv_apps$ ls
cmake  launch  msg  nodelet_plugins.xml  package.xml  scripts  srv  test
@ubuntu:/opt/ros/melodic/share/opencv_apps$ ls scripts/
face_recognition_trainer.py

The listing above shows that the opencv_apps package ships only the single face-training script, because the apt installation provides prebuilt binaries. To modify the opencv source files, see:

http://wiki.ros.org/opencv_apps

https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html#sobel-derivatives

https://github.com/opencv/opencv/tree/2.4

Switching between the binary package and source:

sudo apt remove ros-melodic-opencv-apps
This removes the apt-installed package.
Then verify it is gone with roscd:
roscd opencv_apps/

Then go into the catkin_ws/src directory:
git clone https://github.com/ros-perception/opencv_apps.git

Then clear the previous build output by deleting the build and devel folders in the workspace.

Then run catkin_make.

To limit the number of parallel build jobs (-jN means N jobs):
catkin_make -j1
A single-job build uses fewer resources, at the cost of a longer build time.

roscd opencv_apps/
ls

ar_track.launch: AR tag tracking

robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision ar_track.launch
PC: rviz + load the config file
pc: rostopic echo /ar_pose_marker
This prints each tag's ID, its distance from the camera origin, and its attitude as a quaternion (a minimal subscriber sketch follows).
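A listener sketch for that topic (ar_track_alvar publishes ar_track_alvar_msgs/AlvarMarkers; the script and node name are mine):

#!/usr/bin/env python
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

def callback(msg):
    for m in msg.markers:
        # each marker carries an id and a PoseStamped relative to the output frame
        p = m.pose.pose.position
        rospy.loginfo("marker %d at x=%.2f y=%.2f z=%.2f", m.id, p.x, p.y, p.z)

if __name__ == '__main__':
    rospy.init_node("ar_listener")
    rospy.Subscriber("/ar_pose_marker", AlvarMarkers, callback)
    rospy.spin()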

About running ar_track.launch on the PC side, which causes stuttering:
stop ar_track.launch on the robot
PC: roslaunch robot_vision ar_track.launch
PC: rviz + load the config file
The video now stutters noticeably, because:
AR tag tracking has to subscribe to the image_raw topic.
When ar_track runs on the PC, the raw image_raw stream must first be transferred to the PC.
image_raw takes a lot of bandwidth, so data is easily dropped and the view stutters.
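One workaround sketch (untested here): have the PC decompress the compressed stream locally with image_transport's republish tool, then point ar_track at the relayed topic through the cam_image_topic argument (/image_raw_relay is an arbitrary name of mine):

pc: rosrun image_transport republish compressed in:=/image_raw raw out:=/image_raw_relay
pc: roslaunch robot_vision ar_track.launch cam_image_topic:=/image_raw_relay

The ar_track.launch file itself: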
<launch>
    <!-- support multi robot -->
    <!-- <arg name="robot_name" default=""/>   
    <include file="$(find robot_vision)/launch/robot_camera.launch" >
            <arg name="robot_name"            value="$(arg robot_name)"/>
    </include> -->
    <arg name="camera_frame" default="base_camera_link"/> 
	<!-- start a static transform publisher from this launch file -->
    <!-- the first three args are x y z, the next three are the rotation as Euler angles -->
    <node pkg="tf" type="static_transform_publisher" name="world_to_cam" 
          args="0 0 0.5 1.57 3.14 1.57 world $(arg camera_frame) 10" />
        
    <arg name="marker_size" default="5" />
    <arg name="max_new_marker_error" default="0.08" />
    <arg name="max_track_error" default="0.2" />
    <arg name="cam_image_topic" default="/image_raw" />
    <arg name="cam_info_topic" default="/camera_info" />
    <arg name="output_frame" default="/$(arg camera_frame)" />
        
    <!-- ar_track_alvar is also a third-party package -->
    <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkersNoKinect" respawn="false" output="screen">
        <param name="marker_size"           type="double" value="$(arg marker_size)" />
        <param name="max_new_marker_error"  type="double" value="$(arg max_new_marker_error)" />
        <param name="max_track_error"       type="double" value="$(arg max_track_error)" />
        <param name="output_frame"          type="string" value="$(arg output_frame)" />
        
		<!-- the remap tag renames a topic -->
		<!-- when the topic name in use differs from the one the package expects, remap converts it -->
        <!-- map the topic names it expects onto the topics we actually publish -->
        <remap from="camera_image"  to="$(arg cam_image_topic)" />
        <remap from="camera_info"   to="$(arg cam_info_topic)" />
    </node>

    <!-- rviz view /-->
    <!--node pkg="rviz" type="rviz" name="rviz" args="-d $(find robot_vision)/config/ar_track_camera.rviz"/-->

</launch>

fake_camera.launch: used for image overlay.

robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision fake_camera.launch
PC: roslaunch robot_vision adding_images.launch
pc: two rqt_image_view windows, subscribed to /image_static and /adding_images/image/compressed
<launch>
    <node pkg="robot_vision" name="fake_camera" type="fake_camera.py" output="screen">
        <!-- the node reads the file given by the image_path parameter: an image in the data folder -->
        <param name="image_path" value="$(find robot_vision)/data/jym.jpeg" />
    </node>
</launch>

line_follow.launch: used for vision-based line following.

robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision line_follow.launch
PC: rqt_image_view
subscribe to /image_raw/compressed for the normal image
subscribe to /result_image/compressed to see the line center point the robot locates
subscribe to /mask_image/compressed for the black-and-white mask (white marks the blue line)
robot: roslaunch base_control base_control.launch to start following the line

Tuning for the line color:
robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision line_follow.launch test_mode:=true
pc: rqt_image_view
subscribe to /mask_image/compressed: black-and-white mask, white marks the blue line
subscribe to /result_image/compressed: a red crosshair marks the image center
test_mode parameter: continuously prints the HSV value at the red crosshair.
Move the robot so the crosshair sits on the red line; the printed info becomes the red line's HSV value.
PC: rosrun rqt_reconfigure rqt_reconfigure (the dynamic reconfigure tool)
Adjust the HSV values there. The tool exposes upper and lower HSV bounds; the recipe (see the worked example after this list) is:
clamp the upper bounds tightly to the detected values,
and relax the lower bounds somewhat.
Values tuned this way apply only to the current run.
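For example (a hypothetical reading): if the crosshair keeps printing HSV values around (20, 200, 160) on the line, a reasonable starting range is h 0-30, s 125-255, v 125-200: tight above the measured hue, looser below on saturation and value. These happen to be the defaults in line_follow.launch below.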

To make tuned values permanent:
robot:
roscd robot_vision/launch
vim line_follow.launch
    and edit the HSV default values inside.
<launch>

    <arg name="h_lower" default="0"/>  
    <arg name="s_lower" default="125"/>  
    <arg name="v_lower" default="125"/> 

    <arg name="h_upper" default="30"/>  
    <arg name="s_upper" default="255"/>  
    <arg name="v_upper" default="200"/>  
    <arg name="test_mode" default="False"/>   <!-- node work in test mode or normal mode -->

    <node pkg="robot_vision" name="linefollow" type="line_detector.py" output="screen">
        <param name="h_lower"            value="$(arg h_lower)"/>
        <param name="s_lower"            value="$(arg s_lower)"/>
        <param name="v_lower"            value="$(arg v_lower)"/>
        <param name="h_upper"            value="$(arg h_upper)"/>
        <param name="s_upper"            value="$(arg s_upper)"/>
        <param name="v_upper"            value="$(arg v_upper)"/>
        <param name="test_mode"            value="$(arg test_mode)"/>
        
    </node>
    
    <!-- republish the raw image topics as compressed topics to save bandwidth -->
    <node name="republish_mask" pkg="image_transport" type="republish" args="raw in:=mask_image compressed out:=mask_image" />
    <node name="republish_result" pkg="image_transport" type="republish" args="raw in:=result_image compressed out:=result_image" />
    

</launch>
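Since the HSV bounds above are ordinary &lt;arg&gt; defaults, a tuned set can also be passed on the roslaunch command line for a single run, instead of editing the file:

robot: roslaunch robot_vision line_follow.launch h_lower:=0 s_lower:=125 v_lower:=125 h_upper:=30 s_upper:=255 v_upper:=200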

robot_camera.launch

<launch>
  <!-- support multi robot -->
  <!-- arguments passed in -->
  <arg name="robot_name" default=""/> 
  <arg name="base_type"       default="$(env BASE_TYPE)" /> 
  <arg name="camera_type"       default="$(env CAMERA_TYPE)" />  
  <arg name="camera_tf"       default="false" />    
  <arg name="base_frame"       default="base_footprint" />    
  <arg name="camera_frame" default="base_camera_link"/>  
  <arg name="device" default="video0"/>  
  <arg name="base_id"    value="$(arg robot_name)/$(arg base_frame)"/>        <!-- base_link name -->
  <arg name="camera_id"    value="$(arg robot_name)/$(arg camera_frame)"/>  <!-- camera link name -->
  <!--NanoRobot camera image need flip-->
  <!-- two groups are used -->
  <!-- whether robot_name is empty decides which group runs -->
  <group if="$(eval robot_name == '')">
    <group if="$(eval base_type == 'NanoRobot')">
        <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
        <param name="device" value="/dev/$(arg device)" />
        <param name="fps" value="30" />
        <param name="image_width" value="640" />
        <param name="image_height" value="480" />
        <param name="pixel_format" value="jpeg" />
        <param name="frame_id" value="$(arg camera_id)" />
        <param name="io_method" value="mmap"/>
        <param name="horizontal_flip" value="true" />
        <param name="vertical_flip" value="true" />  
        <param name="camera_info_url" type="string" value="file://$(find robot_vision)/config/$(arg camera_type).yaml"/>
        </node>
    </group>

    <group unless="$(eval base_type == 'NanoRobot')">
        <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
        <param name="device" value="/dev/$(arg device)" />
        <param name="fps" value="30" />
        <param name="image_width" value="640" />
        <param name="image_height" value="480" />
        <param name="pixel_format" value="jpeg" />
        <param name="frame_id" value="$(arg camera_id)" />
        <param name="io_method" value="mmap"/>
        <param name="camera_info_url" type="string" value="file://$(find robot_vision)/config/$(arg camera_type).yaml"/>
        </node>
    </group>
    <!--Pub camera tf info-->
    <group if="$(arg camera_tf)">
        <group if="$(eval base_type == 'NanoRobot')">
            <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.02225 0.0 0.10 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
            </node>
        </group>
        <group if="$(eval base_type == '4WD')">
            <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
            </node>
        </group>
        <group if="$(eval base_type == '4WD_OMNI')">
            <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
            </node>
        </group>
        <group if="$(eval base_type == 'NanoCar')">
            <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.1024 0.0 0.1 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
            </node>
        </group>
        <group if="$(eval base_type == 'NanoRobot_Pro')">
            <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.0015 0.023 0.140 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
            </node>
        </group>
        <group if="$(eval base_type == 'NanoCar_Pro')">
            <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.140 0.023 0.093 0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
            </node>
        </group>
    </group> 
  </group>

  <!-- robot_name is non-empty: run the second group under that namespace -->
  <group unless="$(eval robot_name == '')">
    <group ns="$(arg robot_name)">
        <!--NanoRobot camera image need flip-->
        <!-- handle each chassis type differently -->
        <group if="$(eval base_type == 'NanoRobot')">
            <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
            <param name="device" value="/dev/$(arg device)" />
            <param name="fps" value="30" />
            <param name="image_width" value="640" />
            <param name="image_height" value="480" />
            <param name="pixel_format" value="jpeg" />
            <param name="frame_id" value="$(arg camera_id)" />
            <param name="io_method" value="mmap"/>
            <param name="horizontal_flip" value="true" /><!--NanoRobot类型的摄像头反装,将horizontal_flip设置为true-->
            <param name="vertical_flip" value="true" />  
            </node>
        </group>
        <group unless="$(eval base_type == 'NanoRobot')">
            <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
            <param name="device" value="/dev/$(arg device)" />
            <param name="fps" value="30" />
            <param name="image_width" value="640" />
            <param name="image_height" value="480" />
            <param name="pixel_format" value="jpeg" />
            <param name="frame_id" value="$(arg camera_id)" />
            <param name="io_method" value="mmap"/>
            </node>
        </group>
        <!-- publish camera tf info: a static transform per chassis type -->
        <group if="$(arg camera_tf)">
            <group if="$(eval base_type == 'NanoRobot')">
                <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                    args="0.02225 0.0 0.10 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
                </node>
            </group>
            <group if="$(eval base_type == '4WD')">
                <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                    args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
                </node>
            </group>
            <group if="$(eval base_type == '4WD_OMNI')">
                <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                    args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
                </node>
            </group>
            <group if="$(eval base_type == 'NanoCar')">
                <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                    args="0.1024 0.0 0.1 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
                </node>
            </group>
            <group if="$(eval base_type == 'NanoRobot_Pro')">
                <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                    args="0.0015 0.023 0.140 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
                </node>
            </group>
            <group if="$(eval base_type == 'NanoCar_Pro')">
                <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                    args="0.140 0.023 0.093 0 0.0 0.0 $(arg base_id) $(arg camera_id) 20">
                </node>
            </group>
        </group> 
    </group>
  </group>


</launch>
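Note that this launch file reads the robot model from the environment: $(env BASE_TYPE) and $(env CAMERA_TYPE) fail if those variables are unset. They therefore have to be exported before launching, for example (values are examples; astrapro matches the calibration file loaded earlier):

export BASE_TYPE=NanoRobot
export CAMERA_TYPE=astrapro
roslaunch robot_vision robot_camera.launch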

The scripts folder

cv_bridge_test: converts image data between OpenCV and ROS in both directions.

robot: roslaunch robot_vision robot_camera.launch
pc: rosrun robot_vision cv_bridge_test.py
pc: rqt_image_view
    subscribe to /image_raw/compressed
    subscribe to /cv_bridge_image: a dot appears on the image


The cv_bridge_test node subscribes to the image_raw topic, converts the messages to OpenCV data, draws a dot on the image, and finally converts the result back and publishes it as a ROS topic.


#!/usr/bin/env python
# -*- coding: utf-8 -*-

import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

class image_converter:
    def __init__(self):    
        # create the cv_bridge and declare the image publisher and subscriber
        self.image_pub = rospy.Publisher("cv_bridge_image", Image, queue_size=1)
        self.bridge = CvBridge()
        # the node first subscribes to the image topic
        self.image_sub = rospy.Subscriber("/image_raw", Image, self.callback)

    def callback(self,data):
        # use cv_bridge to convert the ROS image message into OpenCV format
        try:
            # imgmsg_to_cv2 converts the subscribed image topic into an OpenCV image
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)

        # draw a circle on the image as a marker
        (rows,cols,channels) = cv_image.shape
        if cols > 60 and rows > 60 :
            cv2.circle(cv_image, (60, 60), 30, (0,0,255), -1)

        # show the OpenCV image (needs a display device)
        # cv2.imshow("Image window", cv_image)
        # cv2.waitKey(3)

        # convert the OpenCV image back into a ROS Image message and publish it
        try:
            # cv2_to_imgmsg converts back into a ROS topic message for publishing
            img_msg = self.bridge.cv2_to_imgmsg(cv_image, "bgr8")
            img_msg.header.stamp = rospy.Time.now()
            self.image_pub.publish(img_msg)
        except CvBridgeError as e:
            print(e)

if __name__ == '__main__':
    try:
        # initialize the ROS node
        rospy.init_node("cv_bridge_test")
        rospy.loginfo("Starting cv_bridge_test node")
        image_converter()
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down cv_bridge_test node.")
        cv2.destroyAllWindows()

fake_camera.py: used for image overlay.

#!/usr/bin/env python

import rospy
import cv2
from std_msgs.msg import String
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

class fake_camera:
    def __init__(self):
        self.image_path = rospy.get_param('~image_path','jym.jpeg')
        self.image_pub = rospy.Publisher("/image_static", Image, queue_size=3)
        self.bridge = CvBridge() 
        self.pub_image()

    def pub_image(self):
        self.rate = rospy.Rate(30)  # publish at 30 Hz
        print(self.image_path)
        self.cv_image = cv2.imread(self.image_path,1)  # read the image to overlay with OpenCV
        rospy.loginfo("Start Publish Fake Camera Image")
        while not rospy.is_shutdown():
            img_msg = self.bridge.cv2_to_imgmsg(self.cv_image, "bgr8")  # convert to a ROS message
            img_msg.header.stamp = rospy.Time.now()
            self.image_pub.publish(img_msg)  # publish it
            self.rate.sleep()  # sleep to keep the configured rate

if __name__ == '__main__':
    try:
        rospy.init_node("fake_camera",anonymous=False)
        fake_camera()
    except rospy.ROSInterruptException:
        pass

line_detector.py: used for line following.

#!/usr/bin/env python

import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image
import numpy as np
from dynamic_reconfigure.server import Server
from robot_vision.cfg import line_hsvConfig

from geometry_msgs.msg import Twist

class line_follow:
    def __init__(self):    
        # define topic publishers and subscribers, and the CvBridge instance
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/image_raw", Image, self.callback)
        self.mask_pub = rospy.Publisher("/mask_image", Image, queue_size=1)
        self.result_pub = rospy.Publisher("/result_image", Image, queue_size=1)
        self.pub_cmd = rospy.Publisher('cmd_vel', Twist, queue_size=5)
        self.srv = Server(line_hsvConfig,self.dynamic_reconfigure_callback)
        # get params from the launch file
        self.test_mode = bool(rospy.get_param('~test_mode',False))
        self.h_lower = int(rospy.get_param('~h_lower',110))
        self.s_lower = int(rospy.get_param('~s_lower',50))
        self.v_lower = int(rospy.get_param('~v_lower',50))

        self.h_upper = int(rospy.get_param('~h_upper',130))
        self.s_upper = int(rospy.get_param('~s_upper',255))
        self.v_upper = int(rospy.get_param('~v_upper',255))
        # line center point X axis coordinate
        self.center_point = 0

    # called whenever a value is changed in the rqt_reconfigure tool
    def dynamic_reconfigure_callback(self,config,level):
        # update config params: copy the values set in rqt_reconfigure into our variables
        self.h_lower = config.h_lower
        self.s_lower = config.s_lower
        self.v_lower = config.v_lower
        self.h_upper = config.h_upper
        self.s_upper = config.s_upper
        self.v_upper = config.v_upper
        return config

    # extract the line's color, position, and center point from the image
    def callback(self,data):
        # convert the ROS topic to CV image format
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
        # convert the image to HSV (note the input is BGR, so COLOR_RGB2HSV
        # swaps red and blue; the HSV defaults were tuned with this conversion)
        hsv_image = cv2.cvtColor(cv_image,cv2.COLOR_RGB2HSV)
        # set color mask min and max values
        line_lower = np.array([self.h_lower,self.s_lower,self.v_lower])
        line_upper = np.array([self.h_upper,self.s_upper,self.v_upper])
        # get mask from color
        mask = cv2.inRange(hsv_image,line_lower,line_upper)
        # close operation to fill small holes
        kernel = np.ones((9,9),np.uint8)
        mask = cv2.morphologyEx(mask,cv2.MORPH_CLOSE,kernel)
        # in test mode, output the center point HSV value
        res = cv_image
        if self.test_mode:
            cv2.circle(res, (hsv_image.shape[1]/2,hsv_image.shape[0]/2), 5, (0,0,255), 1)
            cv2.line(res,(hsv_image.shape[1]/2-10, hsv_image.shape[0]/2), (hsv_image.shape[1]/2+10,hsv_image.shape[0]/2), (0,0,255), 1)
            cv2.line(res,(hsv_image.shape[1]/2, hsv_image.shape[0]/2-10), (hsv_image.shape[1]/2, hsv_image.shape[0]/2+10), (0,0,255), 1)
            rospy.loginfo("Point HSV Value is %s"%hsv_image[hsv_image.shape[0]/2,hsv_image.shape[1]/2])            
        else:
            # in normal mode, find the line center on a few rows of the mask
            # res = cv2.bitwise_and(cv_image,cv_image,mask=mask)
            for i in range(-60,100,20):  # scan rows around the image center
                point = np.nonzero(mask[mask.shape[0]/2 + i])             
                if len(point[0]) > 10:
                    self.center_point = int(np.mean(point))
                    cv2.circle(res, (self.center_point,hsv_image.shape[0]/2+i), 5, (0,0,255), 5)
                    break
        if self.center_point:
            self.twist_calculate(hsv_image.shape[1]/2,self.center_point)
        self.center_point = 0


        # show CV images in debug mode (needs a display device)
        # cv2.imshow("Image window", res)
        # cv2.imshow("Mask window", mask)
        # cv2.waitKey(3)

        # convert CV images to ROS topics and publish
        try:
            img_msg = self.bridge.cv2_to_imgmsg(res, encoding="bgr8")
            img_msg.header.stamp = rospy.Time.now()
            self.result_pub.publish(img_msg)
            img_msg = self.bridge.cv2_to_imgmsg(mask, encoding="passthrough")
            img_msg.header.stamp = rospy.Time.now()
            self.mask_pub.publish(img_msg)
            
        except CvBridgeError as e:
            print(e)
            
    # compute the velocity command to publish from the position of the line center
    def twist_calculate(self,width,center):
        center = float(center)
        self.twist = Twist()
        self.twist.linear.x = 0
        self.twist.linear.y = 0
        self.twist.linear.z = 0
        self.twist.angular.x = 0
        self.twist.angular.y = 0
        self.twist.angular.z = 0
        if center/width > 0.95 and center/width < 1.05:
            self.twist.linear.x = 0.2
        else:
            self.twist.angular.z = ((width - center) / width) / 2.0
            if abs(self.twist.angular.z) < 0.2:
                self.twist.linear.x = 0.2 - self.twist.angular.z/2.0
            else:
                self.twist.linear.x = 0.1
        self.pub_cmd.publish(self.twist)



if __name__ == '__main__':
    try:
        # init ROS node 
        rospy.init_node("line_follow")
        rospy.loginfo("Starting Line Follow node")
        line_follow()
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down line_follow node.")
        cv2.destroyAllWindows()
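A worked example for twist_calculate: with a 640-pixel-wide image, width is 320 (half the image width) and center is the x coordinate of the detected line center. If the line sits at x = 240, then center/width = 0.75, outside the 0.95 to 1.05 dead band, so angular.z = ((320 - 240) / 320) / 2.0 = 0.125. Since |0.125| < 0.2, linear.x = 0.2 - 0.125/2.0 = 0.1375: the robot drives forward while turning toward the line. Once the line is far enough off center that |angular.z| >= 0.2, the forward speed is capped at 0.1.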