Playing with OpenVINO, Part 2: Test-Running mask_rcnn_demo
Debugging open_model_zoo/mask_rcnn_demo
Following the previous post, in which we built the demos, we continue playing with OpenVINO by trying out a model from open_model_zoo.
Suppose you want to debug one of the open_model_zoo models (note: in the commands below, substitute the model you actually want to use; I usually convert several at once and then test them in no particular order).
The first step is the Model Optimizer work: convert the model to obtain the IR files. The command is as follows:
python mo_tf.py ^
    --input_model E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/frozen_inference_graph.pb ^
    --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json ^
    --tensorflow_object_detection_api_pipeline_config E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/pipeline.config
If you like debugging with VS Code, here is the launch.json I use:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": false,
            "args": [
                "--input_model", "E:\\mask_rcnn_resnet50_atrous_coco_2018_01_28\\frozen_inference_graph.pb",
                "--tensorflow_use_custom_operations_config", "D:/devOpenVino/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json",
                "--tensorflow_object_detection_api_pipeline_config", "E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/pipeline.config"
            ]
        }
    ]
}
If, like me, you do not specify an output name, every conversion produces files named frozen_inference_graph.bin and frozen_inference_graph.xml. Remember to rename them after the model they came from, or you will mix them up once you have converted several.
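Rather than renaming by hand, Model Optimizer's --model_name (short form -n) flag sets the base name of the generated .xml/.bin pair at conversion time. A sketch using the same paths as above:

```shell
rem Convert and name the IR files after the model in one step
python mo_tf.py ^
    --input_model E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/frozen_inference_graph.pb ^
    --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json ^
    --tensorflow_object_detection_api_pipeline_config E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/pipeline.config ^
    --model_name mask_rcnn_resnet50_atrous_coco
```

The IR then comes out as mask_rcnn_resnet50_atrous_coco.xml/.bin; --output_dir can additionally send each model's files to its own folder.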
Next, we test these models with the C++ demos in VS2019. We already built them in the previous post, 《玩转OpenVINO_cpp samples的编译》 (on compiling the C++ samples); now we put them to use.
Adding paths, part 1
If you use the Debug build, add the following to the PATH environment variable:
C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\bin\intel64\Debug
and copy opencv_world430d.dll into that folder (if you would rather not copy it, adding its own folder to PATH works too; the point is simply that the program must be able to find this DLL).
If you use the Release build, add
C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\bin\intel64\Release
instead, and copy opencv_world430.dll into that folder.
In short, these folders hold a number of DLLs that the Intel Inference Engine needs.
Adding paths, part 2
A few more paths must be added as well:
C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\external\tbb\bin
C:\IntelSWTools\openvino_2020.3.194\deployment_tools\ngraph\lib
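If you prefer not to touch the system-wide PATH at all, the same folders can be prepended for the current command-prompt session only (Release case shown; adjust the install path to yours):

```shell
rem Prepend the Inference Engine, TBB and nGraph folders for this session only
set "PATH=C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\bin\intel64\Release;%PATH%"
set "PATH=C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\external\tbb\bin;%PATH%"
set "PATH=C:\IntelSWTools\openvino_2020.3.194\deployment_tools\ngraph\lib;%PATH%"
```

The toolkit also ships a setupvars.bat under the install directory's bin folder that sets up the environment in one step, if you would rather not list the folders yourself.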
Debugging and running
The project to run is mask_rcnn_demo.
For details, see: https://docs.openvinotoolkit.org/latest/_demos_mask_rcnn_demo_README.html
An excerpt from that page follows (note: it uses the Linux command format; my walkthrough below uses the Windows format, so the invocations differ slightly).
./mask_rcnn_demo -h
InferenceEngine:
API version ............ <version>
Build .................. <number>
mask_rcnn_demo [OPTION]
Options:
-h Print a usage message.
-i "<path>" Required. Path to a .bmp image.
-m "<path>" Required. Path to an .xml file with a trained model.
-l "<absolute_path>" Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.
Or
-c "<absolute_path>" Required for GPU custom kernels. Absolute path to the .xml file with the kernels descriptions.
-d "<device>" Optional. Specify the target device to infer on (the list of available devices is shown below). Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. The demo will look for a suitable plugin for a specified device (CPU by default)
-detection_output_name "<string>" Optional. The name of detection output layer. Default value is "reshape_do_2d"
-masks_name "<string>" Optional. The name of masks layer. Default value is "masks"
Viewing the help text: mask_rcnn_demo --h
C:\IntelSWTools\openvino_2020.3.194\deployment_tools\open_model_zoo\demos\dev\intel64\Debug>mask_rcnn_demo --h
InferenceEngine: 00007FFCC7C49BC8
mask_rcnn_demo [OPTION]
Options:
-h Print a usage message.
-i "<path>" Required. Path to a .bmp image.
-m "<path>" Required. Path to an .xml file with a trained model.
-l "<absolute_path>" Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.
Or
-c "<absolute_path>" Required for GPU custom kernels. Absolute path to the .xml file with the kernels descriptions.
-d "<device>" Optional. Specify the target device to infer on (the list of available devices is shown below). Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. The demo will look for a suitable plugin for a specified device (CPU by default)
-detection_output_name "<string>" Optional. The name of detection output layer. Default value is "reshape_do_2d"
-masks_name "<string>" Optional. The name of masks layer. Default value is "masks"
Available target devices: CPU GNA
Note that the input image must be in BMP format.
How do we pass the image path? The official command is as follows:
./mask_rcnn_demo -i <path_to_image>/inputImage.bmp -m <path_to_model>/mask_rcnn_inception_resnet_v2_atrous_coco.xml
In fact, command-line input in OpenVINO is handled by a file called args_helper.hpp; the relevant snippet is:
/**
 * @brief This function finds the -i/--images key in the input args.
 * It is necessary for processing multiple values for a single key.
 * @return files updated vector of verified input files
 */
inline void parseInputFilesArguments(std::vector<std::string> &files) {
    std::vector<std::string> args = gflags::GetArgvs();
    bool readArguments = false;
    for (size_t i = 0; i < args.size(); i++) {
        // Start collecting values once the -i/--images key is seen
        if (args.at(i) == "-i" || args.at(i) == "--images") {
            readArguments = true;
            continue;
        }
        if (!readArguments) {
            continue;
        }
        // Stop at the next option (anything starting with '-')
        if (args.at(i).c_str()[0] == '-') {
            break;
        }
        readInputFilesArguments(files, args.at(i));
    }
}
In other words, either of the following input forms works:
-i xyz.bmp   or   --images <I haven't looked into this closely; perhaps a folder, or xyz.bmp?>
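Judging from the loop above, -i and --images are aliases for the same key, and every argument after the key is collected until the next token beginning with '-', so several images can be passed at once (whether readInputFilesArguments also expands a directory into its files I have not verified). A sketch, with made-up file names:

```shell
rem Multiple values after a single -i key, collected up to the next "-" option
mask_rcnn_demo -i first.bmp second.bmp -m frozen_inference_graph.xml
```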
To debug and run in VS2019, just set the project's debugging command arguments to that same format, for example:
-i J:\BigData\default.bmp -m E:\mask_rcnn_resnet50_atrous_coco_2018_01_28\frozen_inference_graph.xml
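Outside the IDE, the equivalent Windows command line would be something like the following (the Release output folder here is my guess, based on the Debug path shown in the help transcript earlier):

```shell
cd C:\IntelSWTools\openvino_2020.3.194\deployment_tools\open_model_zoo\demos\dev\intel64\Release
mask_rcnn_demo -i J:\BigData\default.bmp -m E:\mask_rcnn_resnet50_atrous_coco_2018_01_28\frozen_inference_graph.xml -d CPU
```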
I tried this Debug build on pure CPU: extremely slow! In Release mode, a random test image still took several seconds, and I could not really tell where the speedup was.
Of course, there is plenty left to dig into; we will skip those details for now and just get things running first.