SegNet compilation and testing
References: http://sunxg13.github.io/2015/09/10/caffe/
http://m.blog.csdn.net/lemianli/article/details/76687508
http://blog.h5min.cn/u010069760/article/details/75258539
(Note: the library change below goes in the Makefile, not Makefile.config)
1. Compile caffe-segnet:
1.1 Download caffe-segnet (the Caffe fork adapted for SegNet, referred to below as scaffe):
git clone https://github.com/alexgkendall/caffe-segnet
1.2 Change some build options:
Makefile.config:
# USE_CUDNN := 1    (leave this commented out: scaffe does not support newer cuDNN versions)
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial
WITH_PYTHON_LAYER := 1    (the Python layer is required)
Edit the Makefile: in
LIBRARIES += glog gflags protobuf leveldb snappy \
lmdb boost_system hdf5_hl hdf5 m \
opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs
append opencv_imgcodecs at the end, because OpenCV 3.0.0 moved the imread-related functions into imgcodecs.lib (they used to be in imgproc.lib).
make -j8
make pycaffe
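Once make pycaffe finishes, it is worth checking that the Python bindings actually import. A minimal sketch, assuming the repository was cloned to /home/yourname/caffe-segnet (the path is an assumption; point it at your own clone):
# pycaffe smoke test: confirm the caffe-segnet Python module loads
import sys
caffe_root = '/home/yourname/caffe-segnet/'   # assumed clone location, adjust as needed
sys.path.insert(0, caffe_root + 'python')
import caffe
caffe.set_mode_cpu()                          # CPU mode is enough for this check
print('pycaffe loaded from', caffe.__file__)
If this prints a path inside caffe-segnet/python, the build is usable from Python.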
1.3 Download SegNet; it is recommended to place it inside the caffe-segnet folder:
git clone https://github.com/alexgkendall/SegNet-Tutorial
The repository is large because it contains some images.
Download the model file: http://mi.eng.cam.ac.uk/~agk34/resources/SegNet/segnet_weights_driving_webdemo.caffemodel
This model is intended for the webcam demo, but it also works on still images; the test script just needs to be modified.
Testing with the images bundled in the repository:
Example_Models contains the corresponding model description files (*.prototxt).
The Scripts folder contains the corresponding test scripts (*.py).
Changing the relevant paths is enough to display a result. Here I modified the test script that uses the webcam model above so it can be used to test a single image:
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import os.path
import scipy
import argparse
import math
import cv2
import sys
import time

sys.path.append('/usr/local/lib/python2.7/site-packages')

# Make sure that caffe is on the python path:
caffe_root = '/media/lbk/娱乐/seg-env/caffe-segnet/'
sys.path.insert(0, caffe_root + 'python')
import caffe

# Import arguments
#deploy='Example_Models/segnet_model_driving_webdemo.prototxt'
#weights='Example_Models/segnet_weights_driving_webdemo.caffemodel'
#colours='Scripts/camvid12.png'
#net = caffe.Net(deploy,weights,caffe.TEST)

# Import arguments
parser = argparse.ArgumentParser()
parser.add_argument('--model', type=str, required=True)
parser.add_argument('--weights', type=str, required=True)
parser.add_argument('--colours', type=str, required=True)
args = parser.parse_args()

net = caffe.Net(args.model, args.weights, caffe.TEST)
#caffe.set_mode_gpu()

input_shape = net.blobs['data'].data.shape
output_shape = net.blobs['argmax'].data.shape

label_colours = cv2.imread(args.colours).astype(np.uint8)

#cv2.namedWindow("Input")
#cv2.namedWindow("SegNet")

cap = cv2.VideoCapture(0) # Change this to your webcam ID, or file name for your video file
rval = True

frame = cv2.imread('/media/lbk/娱乐/seg-env/caffe-segnet/segnet/Example_Models/123.png')
frame = cv2.resize(frame, (input_shape[3],input_shape[2]))
input_image = frame.transpose((2,0,1))
# input_image = input_image[(2,1,0),:,:] # May be required, if you do not open your data with opencv
input_image = np.asarray([input_image])
out = net.forward_all(data=input_image)

segmentation_ind = np.squeeze(net.blobs['argmax'].data)
segmentation_ind_3ch = np.resize(segmentation_ind,(3,input_shape[2],input_shape[3]))
segmentation_ind_3ch = segmentation_ind_3ch.transpose(1,2,0).astype(np.uint8)
segmentation_rgb = np.zeros(segmentation_ind_3ch.shape, dtype=np.uint8)

cv2.LUT(segmentation_ind_3ch,label_colours,segmentation_rgb)
# The values become floats after the next line; OpenCV also renders floats as a heat map, but the saved file still comes out black and white
segmentation_rgb = segmentation_rgb.astype(float)/255

#cv2.imwrite('output.jpg',segmentation_rgb)
#cv2.imshow("Input", frame)
#cv2.imshow("SegNet", segmentation_rgb)
#cv2.imwrite('output.jpg',segmentation_rgb)

# Display and save with matplotlib instead of cv2: it works a bit better and the process does not hang
plt.imshow(segmentation_rgb)
plt.savefig('output.png')
plt.show()
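A note on the saving issue mentioned in the comments above: cv2.imwrite expects 8-bit values in 0-255, so writing the float image scaled to [0, 1] comes out nearly black. A minimal alternative sketch (reusing segmentation_rgb from the script above) is to save the coloured map while it is still uint8, i.e. before the astype(float)/255 line:
# Save the cv2.LUT output while it is still uint8; cv2.imwrite then produces a normally coloured file
cv2.imwrite('output_cv2.png', segmentation_rgb)
# For matplotlib display, note that cv2.imread returns BGR order; converting avoids swapped colours:
# plt.imshow(cv2.cvtColor(segmentation_rgb, cv2.COLOR_BGR2RGB))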
To run it, cd into the SegNet-Tutorial-master folder and execute:
python Scripts/*.py --model Example_Models/segnet_model_driving_webdemo.prototxt --weights Example_Models/segnet_weights_driving_webdemo.caffemodel --colours Scripts/camvid12.png
to get the result.
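To segment a whole folder of images instead of a single file, the forward pass can be wrapped in a loop. A minimal sketch, assuming the net, input_shape and label_colours variables from the script above and a hypothetical images/ directory:
import glob, os

for path in glob.glob('images/*.png'):        # hypothetical input folder
    frame = cv2.imread(path)
    frame = cv2.resize(frame, (input_shape[3], input_shape[2]))
    net.forward_all(data=np.asarray([frame.transpose((2, 0, 1))]))
    ind = np.squeeze(net.blobs['argmax'].data)
    ind_3ch = np.resize(ind, (3, input_shape[2], input_shape[3])).transpose(1, 2, 0).astype(np.uint8)
    rgb = np.zeros(ind_3ch.shape, dtype=np.uint8)
    cv2.LUT(ind_3ch, label_colours, rgb)      # same colour lookup as in the script above
    cv2.imwrite(os.path.splitext(path)[0] + '_segnet.png', rgb)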