TensorFlow test code
1. Personal understanding:
1.1. In TensorFlow, building the graph, defining operations, and so on only pre-declares operations and placeholders; nothing actually runs until session.run is called, at which point the requested parts of the graph execute.
1.2. Of all the graph nodes and operations we pre-define, not everything necessarily executes: only what session.run fetches (plus its dependencies) runs. Everything else remains an isolated node in the graph with no data flowing through it.
1.3. sess.run can execute a single operation (or variable), and can also execute the op returned by a function call.
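The deferred-execution idea in 1.1-1.3 can be mimicked without TensorFlow. The sketch below uses hypothetical `Node`/`constant`/`add`/`run` names (not any TensorFlow API) to show that building a graph runs nothing, and that only fetched nodes and their inputs execute:

```python
# A minimal pure-Python sketch of the build-then-run model: constructing a
# node only records a deferred computation; nothing executes until run() is
# called, and only the fetched node (plus its transitive inputs) runs.

class Node:
    def __init__(self, fn, inputs=()):
        self.fn = fn          # the deferred computation
        self.inputs = inputs  # upstream nodes this node depends on

def constant(value):
    return Node(lambda: value)

evals = []  # records which add nodes actually execute

def add(a, b):
    def fn(x, y):
        evals.append("add")   # side effect proves this addition really ran
        return x + y
    return Node(fn, (a, b))

def run(fetch):
    """Evaluate only `fetch` and its transitive inputs, like sess.run."""
    args = [run(n) for n in fetch.inputs]
    return fetch.fn(*args)

# Building the graph executes nothing yet:
one = constant(1.0)
two = constant(2.0)
total = add(one, two)
unused = add(total, two)  # never fetched -> its fn never runs

print(run(total))  # only now does the addition happen -> 3.0
print(evals)       # exactly one "add": `unused` stayed an isolated node
```

This mirrors note 1.2: `unused` exists in the graph, but since it is never fetched, no data ever flows through it.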
2. References:
2.1. Baidu search: "sess.run"
tensorflow学习笔记(1):sess.run()_站在巨人的肩膀上coding-CSDN博客.html(https://blog.csdn.net/LOVE1055259415/article/details/80011094)
sess.run 会调用哪些方法_百度知道.html(https://zhidao.baidu.com/question/1051057979950110419.html)
2.2. Baidu search: "tensor tf.print", "tensor tf.print 返回值" (return value)
tensorflow Debugger教程(二)——tf.Print()与tf.print()函数_MIss-Y的博客-CSDN博客.html(https://blog.csdn.net/qq_27825451/article/details/96100496)
ZC: traditional approach (print the return value of sess.run(...)) + tf.Print() + tf.print()
tensorflow在函数中用tf.Print输出中间值的方法_sjtuxx_lee的博客-CSDN博客.html(https://blog.csdn.net/sjtuxx_lee/article/details/84571377)
ZC: tf.Print(): "if no data flows through it, it is not executed"
tensorflow笔记 tf.Print()_thormas1996的博客-CSDN博客.html(https://blog.csdn.net/thormas1996/article/details/81224405)
ZC: "Note that tf.Print() only builds an op; it prints only after being run."
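The last two notes (tf.Print only builds an op, and it fires only when data actually flows through it) can also be sketched in plain Python, again with hypothetical `Node`/`print_op`/`run` names rather than any TensorFlow API:

```python
# A pure-Python sketch of the tf.Print idea: a pass-through (identity) node
# whose printing side effect fires only when data actually flows through it,
# i.e. only when the node is part of what gets run.

printed = []  # records side effects so we can see what actually executed

class Node:
    def __init__(self, fn, inputs=()):
        self.fn = fn
        self.inputs = inputs

def constant(value):
    return Node(lambda: value)

def print_op(node, message):
    """Identity op with a printing side effect, in the spirit of tf.Print."""
    def fn(x):
        printed.append((message, x))  # fires only when the node executes
        return x                      # passes its input through unchanged
    return Node(fn, (node,))

def run(fetch):
    args = [run(n) for n in fetch.inputs]
    return fetch.fn(*args)

x = constant(42)
traced = print_op(x, "x is")  # building the op prints nothing yet
assert printed == []          # no data has flowed through it

value = run(traced)           # running it triggers the side effect
print(value, printed)
```

This matches the ZC notes above: constructing the op does nothing observable; the print happens only once the op is run.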
'''
# Test code (1)
import tensorflow as tf

state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)

init = tf.initialize_all_variables()  # deprecated; newer TF 1.x uses tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for _ in range(10):
        u, s = sess.run([update, state])
        print(s)
'''

### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###

'''
# Test code (2)
import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

state = tf.Variable(0.0, dtype=tf.float32)
one = tf.constant(1.0, dtype=tf.float32)
new_val = tf.add(state, one)
update = tf.assign(state, new_val)  # returns a tensor whose value is new_val
update2 = tf.assign(state, 10000)   # never fetched, so it is never executed

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        print(sess.run(update))
'''

### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ###

# Test code (3)
import sys
import numpy

# batches = numpy.zeros((32, 1))
batches = numpy.zeros((12, 1))
# print(batches)
# print(type(batches))
batches[0][0] = 1
# print(batches)
print(type(batches))
print("batches.shape : ", batches.shape)
print("batches[0][0].shape : ", batches[0][0].shape)
# sys.exit()
print("\n\n\n")

import tensorflow as tf
# tf.enable_eager_execution()

# RNN size (dimension of the hidden state)
rnn_size = 512

tf.reset_default_graph()
train_graph = tf.Graph()
with train_graph.as_default():
    input_text = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    lr = tf.placeholder(tf.float32)
    # tf.print(targets, [targets])

    input_data_shape = tf.shape(input_text)
    # tf.print(input_data_shape)

    # Build and initialize the RNN cell.
    # Stack one or more BasicLSTMCells inside a MultiRNNCell; here we use 2 LSTM layers.
    cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size) for _ in range(2)])
    initial_state = cell.zero_state(input_data_shape[0], tf.float32)
    # print("type(initial_state) : ", type(initial_state))
    initial_state = tf.identity(initial_state, name="initial_state")

    # tf.enable_eager_execution()
    # ZC: It seems that whenever tf.Print/tf.print is printing a placeholder, putting the line above
    #     before them raises "AttributeError: 'Tensor' object has no attribute '_datatype_enum'".
    #     When tf.Print/tf.print (their return values differ) prints a placeholder, both require sess.run!!
    # op = tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)
    op = tf.Print(input_text, ['--> input_text: ', input_text])
    # tf.print("--> --> --> input_text: ", input_text, output_stream=sys.stderr)

    tf.enable_eager_execution()  # ZC: this line is OK here inside the with block; placed outside
                                 #     the with block, TF complains it must be called at program startup
    x = tf.constant([2, 3, 4, 5])
    y = tf.constant([20, 30, 40, 50])
    z = tf.add(x, y)
    tf.print("x:", x, "y:", y, "z:", z, output_stream=sys.stderr)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())

    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)
    print("type(batches) : ", type(batches))
    print("batches.shape : ", batches.shape)
    print()

    # state = sess.run(initial_state, {input_text: batches[0][0]})
    # state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches[0][0]})
    # state, inputDataShape = sess.run([initial_state, input_data_shape], {input_text: batches})
    state, inputDataShape, op = sess.run([initial_state, input_data_shape, op],
                                         feed_dict={input_text: batches})  # ZC: with or without the feed_dict= keyword, the effect is the same
    print(">>> >>> >>> >>> >>> after sess.run(...) <<< <<< <<< <<< <<<\n")
    print("op : ", op)
    print("state.shape : ", state.shape)
    # print("state[0][0] : ")
    # print(state[0][0])
    print()
    print("inputDataShape : ", inputDataShape)
    print("type(inputDataShape) : ", type(inputDataShape))
    print("len(inputDataShape) : ", len(inputDataShape))
    print("inputDataShape.shape : ", inputDataShape.shape)
    print("inputDataShape[0] : ", inputDataShape[0])
    print()
    print("input_data_shape : ", input_data_shape)
    print("input_data_shape[0] : ", input_data_shape[0])
    print("initial_state.shape : ", initial_state.shape)
    print("input_text : ", input_text)