tensorflow sess.run & execution flow (my understanding)

1、My understanding:

 1.1、In TensorFlow, building the graph, defining ops, and so on only pre-declares operations / placeholders; no code is actually running. Nothing executes until session.run is called, and only then is the computation actually carried out (a minimal sketch follows this list).

 1.2、Of all the graph nodes / ops we pre-define, not all of them necessarily execute; only those that session.run fetches (directly or as a dependency) are run. Everything else remains an isolated node in the TensorFlow graph with no data flowing through it.

 1.3、sess.run can execute a single op (or fetch a variable's value), and it can also execute the op returned by a function call.
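
A minimal sketch of points 1.1 and 1.2 (my own example, assuming TensorFlow 1.x graph mode):

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.add(a, b)       # only builds an op in the graph; nothing is computed yet
d = tf.multiply(a, b)  # never fetched below, so it never executes

print(c)               # prints the Tensor object, e.g. Tensor("Add:0", ...), not 5.0

with tf.Session() as sess:
    print(sess.run(c))  # 5.0 -- the computation actually happens here; d is never run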

 

2、References:

 2.1、Baidu search: "sess.run"

  tensorflow学习笔记(1):sess.run()_站在巨人的肩膀上coding-CSDN博客.html (https://blog.csdn.net/LOVE1055259415/article/details/80011094)

  sess.run 会调用哪些方法_百度知道.html (https://zhidao.baidu.com/question/1051057979950110419.html)

 2.2、Baidu searches: "tensor tf.print", "tensor tf.print return value"

  tensorflow Debugger教程(二)——tf.Print()与tf.print()函数_MIss-Y的博客-CSDN博客.html (https://blog.csdn.net/qq_27825451/article/details/96100496)

   ZC: the traditional approach (print the return value of sess.run(...)) + tf.Print() + tf.print()

  tensorflow在函数中用tf.Print输出中间值的方法_sjtuxx_lee的博客-CSDN博客.html (https://blog.csdn.net/sjtuxx_lee/article/details/84571377)

   ZC: tf.Print(): "if no data flows through it, it will not be executed"

  tensorflow笔记 tf.Print()_thormas1996的博客-CSDN博客.html (https://blog.csdn.net/thormas1996/article/details/81224405)

   ZC: "Note that tf.Print() only builds an op; nothing is printed until it has been run."
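
A small sketch of the two debug-print APIs described in the notes above (my own example; assumes TF 1.x graph mode, with tf.print available in late 1.x releases):

import sys
import tensorflow as tf

x = tf.placeholder(tf.float32, [None], name="x")

# tf.Print returns a tensor identical to its input; the printing is a side
# effect that fires only when data actually flows through this node.
x_dbg = tf.Print(x, ['x: ', x], message='debug ')
y = x_dbg * 2.0        # y depends on x_dbg, so running y triggers the print

# tf.print returns an op; in graph mode it must be fetched (or attached as a
# control dependency) before anything is printed.
print_op = tf.print("x:", x, output_stream=sys.stderr)

with tf.Session() as sess:
    sess.run(y, {x: [1.0, 2.0]})          # prints, via tf.Print
    sess.run(x, {x: [1.0, 2.0]})          # no print: x_dbg is not on this path
    sess.run(print_op, {x: [1.0, 2.0]})   # prints, via tf.print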

 

3、Test code:

'''
# Test code (1)

import tensorflow as tf
state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)
init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
with tf.Session() as sess:
    sess.run(init)
    for _ in range(10):
        u, s = sess.run([update, state])  # fetch order between update and state is not guaranteed
        print(s)
'''
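
Note on (1): in the fetched pair, u is always the post-assign value (tf.assign returns a tensor holding the newly assigned value), while the s printed above may be the pre- or the post-increment value, because TensorFlow 1.x does not guarantee an evaluation order between fetches inside a single run call.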

### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###

'''
# Test code (2)

import tensorflow as tf
state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)  # returns a tensor whose value is new_val
update2 = tf.assign(state, 10000)   # never fetched, so it never executes
init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        print(sess.run(update))
'''
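
Expected output of (2): 1.0, 2.0, 3.0. update2 is never part of any fetch, so state never jumps to 10000; this is the concrete demonstration of point 1.2 above.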

### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###

# Test code (3)

import sys
import numpy
# batches = numpy.zeros((32,1))
batches = numpy.zeros((12,1))
# print(batches)
# print(type(batches))
batches[0][0] = 1
# print(batches)
print(type(batches))
print("batches.shape : ", batches.shape)
print("batches[0][0].shape : ", batches[0][0].shape)
# sys.exit()
print("\n\n\n")


import tensorflow as tf

# tf.enable_eager_execution()

# RNN size (dimension of the hidden state)
rnn_size = 512

tf.reset_default_graph()
train_graph = tf.Graph()


with train_graph.as_default():
    input_text = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    lr = tf.placeholder(tf.float32)
    #    tf.print(targets,[targets])

    input_data_shape = tf.shape(input_text)
    #    tf.print(input_data_shape)
    # Build the RNN cell and initialize it.
    # Stack one or more BasicLSTMCells into a MultiRNNCell; here we use a 2-layer LSTM.
    cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(2)])
    initial_state = cell.zero_state(input_data_shape[0], tf.float32)
    #    print("type(initial_state) : ", type(initial_state))
    initial_state = tf.identity(initial_state, name="initial_state")


    # tf.enable_eager_execution()  # ZC: it seems that whenever tf.Print/tf.print is printing a placeholder's info, putting this line before them raises "AttributeError: 'Tensor' object has no attribute '_datatype_enum'". And whether it is tf.Print or tf.print (their return values differ), printing a placeholder's info always requires sess.run ! !
    # op = tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)
    op = tf.Print(input_text, ['--> input_text: ', input_text])
    # tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)

    tf.enable_eager_execution()  # ZC: this line is OK here, inside the `with`; placed outside the `with`, it complains that it must be called at the start of the program
    x = tf.constant([2,3,4,5])
    y = tf.constant([20,30,40,50])
    z = tf.add(x,y)

    tf.print("x:",x, "y:",y,"z:",z, output_stream=sys.stderr)


with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)

    print("type(batches) : ", type(batches))
    print("batches.shape : ", batches.shape)

    print()

#    state = sess.run(initial_state, {input_text: batches[0][0]})
#    state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches[0][0]})
#    state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches})
    state, inputDataShape, op = sess.run([initial_state, input_data_shape, op], feed_dict={input_text: batches})  # ZC: with or without the feed_dict keyword, the effect here is the same

    print(">>> >>> >>> >>> >>> after sess.run(...) <<< <<< <<< <<< <<<\n")
    print("op : ", op)
    print("state.shape : ", state.shape)
#    print("state[0][0] : ")
#    print(state[0][0])
    print()

    print("inputDataShape : ", inputDataShape)
    print("type(inputDataShape) : ", type(inputDataShape))
    print("len(inputDataShape) : ", len(inputDataShape))
    print("inputDataShape.shape : ", inputDataShape.shape)
    print("inputDataShape[0] : ", inputDataShape[0])


    print()
    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)
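
What test (3) probes: before sess.run, printing input_data_shape / input_text only shows Tensor objects, not values; after the run, inputDataShape comes back as the concrete array [12 1], and the fetched op comes back as the fed batch itself, because tf.Print returns its first argument unchanged and emits the debug text to the console as a side effect.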

 

 

4、

5、
