Usage of slim.arg_scope()

[Reposted from https://blog.csdn.net/u013921430]

     slim is a lightweight library built on top of TensorFlow that makes building, training, and testing models simpler. It defines many commonly used functions, and slim.arg_scope() is one of the most frequently used. The function is defined as follows:

    @tf_contextlib.contextmanager
    def arg_scope(list_ops_or_scope, **kwargs):
      """Stores the default arguments for the given set of list_ops.
      For usage, please see examples at top of the file.
      Args:
        list_ops_or_scope: List or tuple of operations to set argument scope for or
          a dictionary containing the current scope. When list_ops_or_scope is a
          dict, kwargs must be empty. When list_ops_or_scope is a list or tuple,
          then every op in it need to be decorated with @add_arg_scope to work.
        **kwargs: keyword=value that will define the defaults for each op in
          list_ops. All the ops need to accept the given set of arguments.
      Yields:
        the current_scope, which is a dictionary of {op: {arg: value}}
      Raises:
        TypeError: if list_ops is not a list or a tuple.
        ValueError: if any op in list_ops has not be decorated with @add_arg_scope.
      """
      if isinstance(list_ops_or_scope, dict):
        # Assumes that list_ops_or_scope is a scope that is being reused.
        if kwargs:
          raise ValueError('When attempting to re-use a scope by suppling a'
                           'dictionary, kwargs must be empty.')
        current_scope = list_ops_or_scope.copy()
        try:
          _get_arg_stack().append(current_scope)
          yield current_scope
        finally:
          _get_arg_stack().pop()
      else:
        # Assumes that list_ops_or_scope is a list/tuple of ops with kwargs.
        if not isinstance(list_ops_or_scope, (list, tuple)):
          raise TypeError('list_ops_or_scope must either be a list/tuple or reused'
                          'scope (i.e. dict)')
        try:
          current_scope = current_arg_scope().copy()
          for op in list_ops_or_scope:
            key_op = _key_op(op)
            if not has_arg_scope(op):
              raise ValueError('%s is not decorated with @add_arg_scope',
                               _name_op(op))
            if key_op in current_scope:
              current_kwargs = current_scope[key_op].copy()
              current_kwargs.update(kwargs)
              current_scope[key_op] = current_kwargs
            else:
              current_scope[key_op] = kwargs.copy()
          _get_arg_stack().append(current_scope)
          yield current_scope
        finally:
          _get_arg_stack().pop()
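
     Stripped of the library machinery, the code above boils down to a stack of dictionaries plus a decorator that merges the top of the stack into each call. The following simplified, self-contained sketch illustrates the same idea; it is not the real slim implementation, and the names _arg_stack, add_defaults, defaults_scope, and toy_fun are made up for illustration.

    import contextlib
    import functools

    # Toy re-implementation of the arg_scope idea; not the real slim code.
    _arg_stack = [{}]                      # stack of {function name: {arg: value}}

    def add_defaults(func):                # plays the role of @add_arg_scope
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            merged = dict(_arg_stack[-1].get(func.__name__, {}))   # scoped defaults...
            merged.update(kwargs)                                  # ...call-site kwargs win
            return func(*args, **merged)
        return wrapper

    @contextlib.contextmanager
    def defaults_scope(funcs, **kwargs):   # plays the role of arg_scope
        scope = _arg_stack[-1].copy()
        for f in funcs:
            merged = scope.get(f.__name__, {}).copy()
            merged.update(kwargs)
            scope[f.__name__] = merged
        _arg_stack.append(scope)
        try:
            yield scope
        finally:
            _arg_stack.pop()

    @add_defaults
    def toy_fun(a=0, b=0):
        return a + b

    with defaults_scope([toy_fun], a=10):
        print(toy_fun(b=30))               # 40: a=10 is injected from the scope
    print(toy_fun(b=30))                   # 30: outside the scope, a falls back to 0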

     As the arg_scope docstring says, the function stores default argument values for the ops in list_ops, but every member of list_ops must be decorated with @add_arg_scope for this to work. Using slim.arg_scope() therefore involves two steps:

  1. Decorate the target function with @slim.add_arg_scope.
  2. Use slim.arg_scope() to set default arguments for the target function.

     For example, in the code below the target function fun1() is first decorated with @slim.add_arg_scope, and slim.arg_scope() is then used to set a default value for its argument a:

    import tensorflow as tf

    slim = tf.contrib.slim

    @slim.add_arg_scope
    def fun1(a=0, b=0):
        return a + b

    with slim.arg_scope([fun1], a=10):
        x = fun1(b=30)
        print(x)

     The output is:

40
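
     The dict branch in the source above exists so that a scope, once built, can be captured and reused: the context manager yields the current scope dictionary, and that dictionary can later be passed back to slim.arg_scope() on its own. A short sketch continuing the fun1 example:

    with slim.arg_scope([fun1], a=10) as sc:
        print(fun1(b=30))    # 40

    # Later, reuse the captured scope dictionary (the dict branch of arg_scope).
    with slim.arg_scope(sc):
        print(fun1(b=5))     # 15: a=10 still applies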

    Commonly used functions such as slim.conv2d(), slim.fully_connected(), and slim.max_pool2d() were already decorated with @add_arg_scope when they were defined. Take slim.conv2d() as an example:

    @add_arg_scope
    def convolution(inputs,
                    num_outputs,
                    kernel_size,
                    stride=1,
                    padding='SAME',
                    data_format=None,
                    rate=1,
                    activation_fn=nn.relu,
                    normalizer_fn=None,
                    normalizer_params=None,
                    weights_initializer=initializers.xavier_initializer(),
                    weights_regularizer=None,
                    biases_initializer=init_ops.zeros_initializer(),
                    biases_regularizer=None,
                    reuse=None,
                    variables_collections=None,
                    outputs_collections=None,
                    trainable=True,
                    scope=None):

     Default arguments can therefore be set for slim.conv2d() and the other layer functions directly with slim.arg_scope(). For example, in the code below, unless stated otherwise, slim.conv2d, slim.max_pool2d, and slim.avg_pool2d all default to a stride of 1 and 'VALID' padding, but these values can still be overridden in individual calls. Setting defaults this way saves time when building network models, especially deeper ones.

    with slim.arg_scope(
            [slim.conv2d, slim.max_pool2d, slim.avg_pool2d], stride=1, padding='VALID'):
        net = slim.conv2d(inputs, 32, [3, 3], stride=2, scope='Conv2d_1a_3x3')
        net = slim.conv2d(net, 32, [3, 3], scope='Conv2d_2a_3x3')
        net = slim.conv2d(net, 64, [3, 3], padding='SAME', scope='Conv2d_2b_3x3')
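
     The snippet already shows that call-site arguments beat the scoped defaults (stride = 2 and padding = 'SAME' above). To make the precedence explicit outside of a network, here is a small check in the same spirit; conv_like is a made-up stand-in decorated with @slim.add_arg_scope, not a slim layer:

    @slim.add_arg_scope
    def conv_like(x, stride=2, padding='SAME'):
        # Stand-in that just reports which arguments it actually received.
        return stride, padding

    with slim.arg_scope([conv_like], stride=1, padding='VALID'):
        print(conv_like(0))                  # (1, 'VALID'): scope defaults apply
        print(conv_like(0, stride=2))        # (2, 'VALID'): call-site stride wins
        print(conv_like(0, padding='SAME'))  # (1, 'SAME'): like the 'SAME' line above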

The @ decorator

     This is in fact a standard Python idiom. The @ decorator is placed above a function definition; it takes the function being defined as an argument and rebinds the same name to whatever the decorator returns. The form looks like this:

    @fun_a            # equivalent to fun_b = fun_a(fun_b)
    def fun_b():
        pass

      In essence this is no different from calling fun_a on fun_b directly, but it is often convenient. For example, if time information needs to be printed before the decorated function is called, we can put the printing statement inside the decorator function; from then on, decorating a function is all that is needed.

    def funs(fun, factor=20):
        x = fun()
        print(factor * x)

    @funs             # equivalent to add = funs(add); prints 600 when add is defined
    def add(a=10, b=20):
        return a + b
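
     For the timing use case mentioned above, a sketch could look as follows; the names timed and slow_add are made up. Unlike funs, the decorator returns a wrapper function, so slow_add remains callable and still returns its result:

    import functools
    import time

    def timed(fun):
        @functools.wraps(fun)
        def wrapper(*args, **kwargs):
            start = time.time()                    # record the time before the call
            result = fun(*args, **kwargs)
            print('%s took %.6f s' % (fun.__name__, time.time() - start))
            return result
        return wrapper

    @timed                                         # equivalent to slow_add = timed(slow_add)
    def slow_add(a=10, b=20):
        time.sleep(0.1)
        return a + b

    print(slow_add())                              # prints the timing line, then 30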

 

posted @ 2019-12-13 09:39  悦悦的小屋