TensorFlow high-level libraries

 

1. tf.app.flags

      TensorFlow provides tf.app.flags for accepting command-line arguments, much like parsing argv. Each tf.app.flags.DEFINE_xxx() call adds one optional command-line argument, and tf.app.flags.FLAGS retrieves the parsed values.

import tensorflow as tf

# first argument: flag name; second: default value; third: description
tf.app.flags.DEFINE_float('float_name', 0.01, 'input a float')
tf.app.flags.DEFINE_string('str_name', 'def_v_1', "descrip1")
tf.app.flags.DEFINE_integer('int_name', 10, "descript2")
tf.app.flags.DEFINE_boolean('bool_name', False, "descript3")

FLAGS = tf.app.flags.FLAGS


# main must take one argument (tf.app.run passes argv in), otherwise: 'TypeError: main() takes no arguments (1 given)'; the parameter name itself is arbitrary
def main(_):
	print(FLAGS.float_name)
	print(FLAGS.str_name)
	print(FLAGS.int_name)
	print(FLAGS.bool_name)


if __name__ == '__main__':
	tf.app.run()  # parses the flags, then calls main()

  

Run it:

 

(tf_learn) [@l_106 ~/ssd-balancap]$ python exc2.py 
0.01
def_v_1
10
False
(tf_learn) [@l_106 ~/ssd-balancap]$ python exc2.py --float_name 0.6 --str_name test_str --int_name 99 --bool_name True
0.6
test_str
99
True
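For readers without TensorFlow at hand, tf.app.flags behaves much like the standard argparse module: a sketch of the same script using only argparse (flag names mirror the example above; boolean flags are omitted because argparse handles them differently):

```python
import argparse

# Each tf.app.flags.DEFINE_xxx call corresponds to one optional
# argument with a type, a default value, and a help string.
parser = argparse.ArgumentParser()
parser.add_argument('--float_name', type=float, default=0.01, help='input a float')
parser.add_argument('--str_name', type=str, default='def_v_1', help='descrip1')
parser.add_argument('--int_name', type=int, default=10, help='descript2')

# An explicit argv list keeps the example self-contained;
# parse_args() with no arguments would read sys.argv instead.
FLAGS = parser.parse_args(['--float_name', '0.6', '--int_name', '99'])

print(FLAGS.float_name)  # 0.6 (overridden on the command line)
print(FLAGS.str_name)    # def_v_1 (default kept)
print(FLAGS.int_name)    # 99
```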

 

2. slim

 

Import

import tensorflow.contrib.slim as slim

arg_scope: sets default hyperparameters for every layer it covers.

 

Defining variables

      Variables fall into two categories: model variables and local variables. Local variables are not treated as model parameters, while model variables are written out when the model is saved; anyone who has used TensorFlow will recognize this, e.g. global_step is a local variable. With slim you can specify the device a variable lives on, as well as its regularizer and initializer. The variable-retrieval functions also deserve attention: get_variables returns all variables, while get_model_variables returns only the model variables.

Defining a convolution layer:

# assume a batch of 224x224 RGB images as input
inputs = tf.placeholder(tf.float32, [1, 224, 224, 3])
# plain TensorFlow
with tf.name_scope('conv1_1') as scope:
  kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 128], dtype=tf.float32,
                                           stddev=1e-1), name='weights')
  conv = tf.nn.conv2d(inputs, kernel, [1, 1, 1, 1], padding='SAME')
  biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                       trainable=True, name='biases')
  bias = tf.nn.bias_add(conv, biases)
  conv1 = tf.nn.relu(bias, name=scope)
# slim
net = slim.conv2d(inputs, 128, [3, 3], scope='conv1_1')

The repeat operation:

repeat cuts down code when the same layer is applied several times in a row.

# assume net is the output of the previous layer
# plain version
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
# simplified with repeat
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
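What slim.repeat does can be pictured with a tiny pure-Python stand-in (a hypothetical helper, not slim's actual implementation): it calls the same layer function n times, threading the output through and auto-numbering the scopes:

```python
def repeat(inputs, repetitions, layer_fn, *args, **kwargs):
    """Apply layer_fn `repetitions` times in a row, appending
    _1, _2, ... to the base scope name."""
    scope = kwargs.pop('scope', 'repeat')
    outputs = inputs
    for i in range(repetitions):
        outputs = layer_fn(outputs, *args, scope='%s_%d' % (scope, i + 1))
    return outputs

# Toy layer: record the scope it was called with and pretend to
# transform the input.
trace = []
def fake_conv(x, num_outputs, kernel, scope=None):
    trace.append(scope)
    return x + 1

out = repeat(0, 3, fake_conv, 256, [3, 3], scope='conv3')
print(out)    # 3
print(trace)  # ['conv3_1', 'conv3_2', 'conv3_3']
```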

The stack operation:

stack handles the case where the kernel size or the number of outputs differs from layer to layer.

# plain version
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')
# simplified with stack
x = slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

# plain version:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')
# simplified with stack:
x = slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')
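stack generalizes repeat: each entry in the list supplies the arguments for one call. A hypothetical pure-Python stand-in for its behavior (not slim's actual source):

```python
def stack(inputs, layer_fn, stack_args, scope='stack'):
    """Apply layer_fn once per entry in stack_args; a tuple entry is
    unpacked into positional arguments, anything else is one argument."""
    outputs = inputs
    for i, args in enumerate(stack_args):
        if not isinstance(args, tuple):
            args = (args,)
        outputs = layer_fn(outputs, *args, scope='%s_%d' % (scope, i + 1))
    return outputs

# Toy layer: record the arguments of each call.
calls = []
def fake_conv(x, num_outputs, kernel=None, scope=None):
    calls.append((num_outputs, kernel, scope))
    return x

# Mixed per-call arguments, as in the conv example above.
stack(None, fake_conv, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3])], scope='core')
print(calls)
# [(32, [3, 3], 'core_1'), (32, [1, 1], 'core_2'), (64, [3, 3], 'core_3')]
```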

arg_scope:

# plain version
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')
# simplified with arg_scope
with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], 4, scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')

Within an arg_scope, the listed layers receive the specified default arguments; to override a default for a particular layer, simply pass the argument explicitly (the explicit value takes precedence), as in the second-to-last line of the code above. What if the network contains layers other than convolutions? Then define it as follows:

 

posted on 2017-11-09 11:48 by 执剑长老