139. Deploying Models with TensorFlow Serving (Part 2): A TextCNN Text-Classification Model
Last night I finally got a TensorFlow model deployed, using TensorFlow Serving.
1. Use Docker to pull the TensorFlow Serving image. Inside China you will want to point Docker's image repository at an Aliyun mirror address for faster downloads; instructions for that are easy to find on CSDN.
Then start Docker and pull the image.
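Pulling the image is a single command (this is the official image name on Docker Hub):

docker pull tensorflow/serving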
2. Use TensorFlow's SavedModelBuilder to save the computation graph, and define Signatures.
A Signature declares the names and types of the model's input and output tensors.
# Imports needed by this snippet (TF 1.x style)
import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils

builder = saved_model_builder.SavedModelBuilder(export_path)

# Wrap the relevant graph tensors as TensorInfo protos
classification_inputs = utils.build_tensor_info(cnn.input_x)
classification_dropout_keep_prob = utils.build_tensor_info(cnn.dropout_keep_prob)
classification_outputs_classes = utils.build_tensor_info(prediction_classes)
classification_outputs_scores = utils.build_tensor_info(cnn.scores)

# NOTE: both entries in the inputs dict below use the same key, so the second
# one overwrites the first -- which is why the 'serving_default' signature
# dump further down shows dropout_keep_prob as its only input.
classification_signature = signature_def_utils.build_signature_def(
    inputs={signature_constants.CLASSIFY_INPUTS: classification_inputs,
            signature_constants.CLASSIFY_INPUTS: classification_dropout_keep_prob},
    outputs={
        signature_constants.CLASSIFY_OUTPUT_CLASSES: classification_outputs_classes,
        signature_constants.CLASSIFY_OUTPUT_SCORES: classification_outputs_scores
    },
    method_name=signature_constants.CLASSIFY_METHOD_NAME)

# The predict signature: the key names here ('inputX',
# 'input_dropout_keep_prob', 'predictClass') are exactly what the client
# must use when it calls the service.
tensor_info_x = utils.build_tensor_info(cnn.input_x)
tensor_info_y = utils.build_tensor_info(cnn.predictions)
tensor_info_dropout_keep_prob = utils.build_tensor_info(cnn.dropout_keep_prob)

prediction_signature = signature_def_utils.build_signature_def(
    inputs={'inputX': tensor_info_x,
            'input_dropout_keep_prob': tensor_info_dropout_keep_prob},
    outputs={'predictClass': tensor_info_y},
    method_name=signature_constants.PREDICT_METHOD_NAME)

legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

# Add the signatures to the servable
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        'textclassified': prediction_signature,
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: classification_signature,
    },
    legacy_init_op=legacy_init_op)

# Save it!
builder.save(True)
The structure of the saved graph can be seen below. Only the signature portion is shown here, because the signatures define the parameter names and types you will need when you call the serving interface later.
signature_def {
  key: "serving_default"
  value {
    inputs {
      key: "inputs"
      value {
        name: "dropout_keep_prob:0"
        dtype: DT_FLOAT
        tensor_shape {
          unknown_rank: true
        }
      }
    }
    outputs {
      key: "classes"
      value {
        name: "index_to_string_Lookup:0"
        dtype: DT_STRING
        tensor_shape {
          dim { size: 1 }
        }
      }
    }
    outputs {
      key: "scores"
      value {
        name: "output/scores:0"
        dtype: DT_FLOAT
        tensor_shape {
          dim { size: -1 }
          dim { size: 2 }
        }
      }
    }
    method_name: "tensorflow/serving/classify"
  }
}
signature_def {
  key: "textclassified"
  value {
    inputs {
      key: "inputX"
      value {
        name: "input_x:0"
        dtype: DT_INT32
        tensor_shape {
          dim { size: -1 }
          dim { size: 40 }
        }
      }
    }
    inputs {
      key: "input_dropout_keep_prob"
      value {
        name: "dropout_keep_prob:0"
        dtype: DT_FLOAT
        tensor_shape {
          unknown_rank: true
        }
      }
    }
    outputs {
      key: "predictClass"
      value {
        name: "output/predictions:0"
        dtype: DT_INT64
        tensor_shape {
          dim { size: -1 }
        }
      }
    }
    method_name: "tensorflow/serving/predict"
  }
}
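(A dump like this can be printed with the saved_model_cli tool that ships with TensorFlow; the directory is whatever path you passed to SavedModelBuilder above:)

saved_model_cli show --dir <export_path> --all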
From the signature definitions above you can see that calling the predict interface later requires two parameters:
an int32 parameter named inputX
a float parameter named input_dropout_keep_prob
In the protocol buffer above, textclassified is the name given to the predict signature backed by the TextCNN network; it is the signature name the client will reference when it calls the service.
3. Deploy the model to TensorFlow Serving
First transfer the saved model to the Docker host (with scp or any similar tool), then wrap it in an outer directory whose name is the model name:
text_classified_model
The resulting directory layout looks like the sketch below.
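As a sketch of what TensorFlow Serving expects (the version directory 1 here is an assumption; Serving wants at least one numeric version directory under the model directory, and the variables filenames are the standard SavedModel output):

text_classified_model/
    1/
        saved_model.pb
        variables/
            variables.data-00000-of-00001
            variables.index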
Then run tensorflow/serving with the following command:
docker run -p 8500:8500 \
  --mount type=bind,source=/home/docker/model/text_classified_model,target=/models/text_classified_model \
  -e MODEL_NAME=text_classified_model \
  -t tensorflow/serving
Here source is the path of the model on the Docker host,
and target is the path the model is mounted at inside the tensorflow/serving container.
The startup log then shows that two interfaces are running:
a REST API on port 8501, and
a gRPC interface on port 8500.
(Note that the docker run command above only publishes port 8500; add -p 8501:8501 as well if you want to reach the REST API from outside the container.)
gRPC speaks HTTP/2 while the REST API speaks HTTP/1.1; one practical difference is that gRPC sends every call as a POST request, whereas the REST API uses the full set of HTTP verbs such as GET, POST, PUT, and DELETE.
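As a side note, once port 8501 is published you could also call the REST endpoint. Here is a minimal, untested sketch using the requests library; the host is the same Docker Toolbox IP used later in this post, and the 40 zero word ids are dummy placeholders for a real padded sentence:

import requests

# Hypothetical host/port; adjust to wherever your container is reachable.
url = "http://192.168.99.100:8501/v1/models/text_classified_model:predict"

payload = {
    "signature_name": "textclassified",
    "inputs": {
        "inputX": [[0] * 40],             # one sentence, padded to 40 word ids
        "input_dropout_keep_prob": [1.0]  # no dropout at inference time
    }
}

response = requests.post(url, json=payload)
print(response.json())  # the predicted class ids come back under the "outputs" key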
4. Call the gRPC interface from a client
Two parameters need to be passed:
one is inputX,
the other is input_dropout_keep_prob.
'''
Created on 2018-10-17
@author: 95890
'''
"""Send text to TensorFlow Serving and get the result back."""

from grpc.beta import implementations
import numpy as np
import tensorflow as tf
from tensorflow.contrib import learn
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

import data_helpers

tf.flags.DEFINE_string("positive_data_file", "./data/rt-polaritydata/rt-polarity.pos",
                       "Data source for the positive data.")
tf.flags.DEFINE_string("negative_data_file", "./data/rt-polaritydata/rt-polarity.neg",
                       "Data source for the negative data.")
tf.flags.DEFINE_string('server', '192.168.99.100:8500',
                       'PredictionService host:port')
FLAGS = tf.flags.FLAGS


def main(_):
    testStr = ["wisegirls is its low-key quality and genuine"]

    # Rebuild the vocabulary exactly as it was built at training time,
    # then map the test sentence to word ids.
    x_text, y = data_helpers.load_data_and_labels(FLAGS.positive_data_file,
                                                  FLAGS.negative_data_file)
    max_document_length = max([len(x.split(" ")) for x in x_text])
    vocab_processor = learn.preprocessing.VocabularyProcessor(max_document_length)
    vocab_processor.fit(x_text)
    # Use transform (not fit_transform) so the vocabulary is not refitted
    # on the test sentence.
    x = np.array(list(vocab_processor.transform(testStr)))

    host, port = FLAGS.server.split(':')
    channel = implementations.insecure_channel(host, int(port))
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "text_classified_model"
    request.model_spec.signature_name = 'textclassified'

    # The signature expects DT_FLOAT, so use float32 here.
    dropout_keep_prob = np.float32(1.0)
    request.inputs['inputX'].CopyFrom(
        tf.contrib.util.make_tensor_proto(x, shape=[1, 40], dtype=np.int32))
    request.inputs['input_dropout_keep_prob'].CopyFrom(
        tf.contrib.util.make_tensor_proto(dropout_keep_prob, shape=[1],
                                          dtype=np.float32))

    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result)


if __name__ == '__main__':
    tf.app.run()
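(To run this client you need the tensorflow-serving-api package, e.g. pip install tensorflow-serving-api. The grpc.beta API used here matches the TF 1.x era of this post; newer versions of the package would use grpc.insecure_channel together with prediction_service_pb2_grpc.PredictionServiceStub instead.)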
The result of the call is shown below:
outputs {
  key: "predictClass"
  value {
    dtype: DT_INT64
    tensor_shape {
      dim { size: 1 }
    }
    int64_val: 1
  }
}
model_spec {
  name: "text_classified_model"
  version {
    value: 1
  }
  signature_name: "textclassified"
}
From the result above you can see that we passed in the sentence
wisegirls is its low-key quality and genuine
and the classification result predictClass came back as int64_val: 1,
i.e. the sentence was put into class 1.
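In code, you can pull that predicted class straight out of the response protobuf, for example:

predicted_class = result.outputs['predictClass'].int64_val[0]
print(predicted_class)  # 1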
Now this is what deploying a neural network looks like!
TensorFlow really is impressive: from the browser down to the phone, train once, export once, run everywhere.
Nothing you dare not think of, only things you dare not do.
The full version can be found here:
https://github.com/weizhenzhao/TextCNN_Tensorflow_Serving/tree/master
Thanks
WeiZhen