TFboy养成记: TensorBoard

First, a few constructs to introduce:

with tf.name_scope(name="inputs"):

This is used to group nodes into separate regions of the graph, e.g. train, inputs, and so on.

 

xs = tf.placeholder(tf.float32, [None, 1], name="x_input")

The name argument names the individual node.
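For illustration (a minimal sketch, assuming TensorFlow 1.x graph mode): the scope name is prefixed to the node name, and that prefix is exactly what TensorBoard uses to group nodes in the GRAPHS view.

import tensorflow as tf

with tf.name_scope("inputs"):
    xs = tf.placeholder(tf.float32, [None, 1], name="x_input")

# The scope prefixes the node name, so TensorBoard groups this node under "inputs".
print(xs.op.name)   # inputs/x_input
print(xs.name)      # inputs/x_input:0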

merged = tf.summary.merge_all()

Note: a lot of the code here may differ from 莫烦 (Morvan)'s original, mainly because many TensorFlow functions have changed across versions.

This step is very important! If you want to see the loss curve, be sure to add it. You also need:

with tf.name_scope("loss"):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - l2), reduction_indices=[1]))
    tf.summary.scalar("loss", loss)

 

Remember that the curve you plot is drawn point by point, so the training loop at the end also changes: each sampled point has to be written into the summary via add_summary.

 

The corresponding training loop:

 

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        rs = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(rs, i)

As for launching TensorBoard:

First, in cmd or a terminal, switch to the folder where the current script lives, then run:
tensorboard --logdir=logs/ (it seems to work without the trailing slash as well, give it a try). Of course, you can also pass the full path to the log directory here.

At the end it prints an address, something like 0.0.0.0:6006. Many Windows users cannot open that; just replace the IP part with localhost, i.e. http://localhost:6006.

The full code:

# -*- coding: utf-8 -*-
"""
Created on Wed Jun 14 17:26:15 2017

@author: Jarvis
"""

import tensorflow as tf
import numpy as np

def addLayer(inputs, inSize, outSize, level, actv_func=None):
    layername = "layer%s" % (level)
    with tf.name_scope("Layer"):
        with tf.name_scope("Weights"):
            Weights = tf.Variable(tf.random_normal([inSize, outSize]), name="W")
        #    tf.summary.histogram(layername + "/Weights", Weights)
        with tf.name_scope("bias"):
            bias = tf.Variable(tf.zeros([1, outSize]), name="bias")
        #    tf.summary.histogram(layername + "/bias", bias)

        with tf.name_scope("Wx_plus_b"):
            Wx_plus_b = tf.matmul(inputs, Weights) + bias
        #    tf.summary.histogram(layername + "/Wx_plus_b", Wx_plus_b)
        if actv_func is None:
            outputs = Wx_plus_b
        else:
            outputs = actv_func(Wx_plus_b)
        tf.summary.histogram(layername + "/outputs", outputs)
        return outputs

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
y_data = np.square(x_data) + 0.5 + noise

with tf.name_scope("inputs"):
    xs = tf.placeholder(tf.float32, [None, 1], name="x_input")
    ys = tf.placeholder(tf.float32, [None, 1], name="y_input")

l1 = addLayer(xs, 1, 10, level=1, actv_func=tf.nn.relu)
l2 = addLayer(l1, 10, 1, level=2, actv_func=None)

with tf.name_scope("loss"):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - l2), reduction_indices=[1]))
    tf.summary.scalar("loss", loss)

with tf.name_scope("train"):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("logs/", sess.graph)  # crucial: create the writer (with the graph) before running the session
sess.run(tf.global_variables_initializer())

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        rs = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(rs, i)
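A small optional extra, not in the original code, just a sketch: tf.summary.FileWriter buffers events and only flushes them to disk periodically, so if the curve doesn't show up right away (for instance when the Python process keeps running inside an IDE), flushing and closing the writer at the end makes sure the last points land in the event file.

# Optional: flush buffered events to disk and close the event file.
writer.flush()
writer.close()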

Many Spyder users may find the script keeps throwing baffling errors; try restarting the kernel.
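A likely reason (my guess, not stated in the original post): in Spyder the same kernel keeps the previous default graph alive, so re-running the script piles duplicate nodes and summary ops onto it. Besides restarting the kernel, resetting the graph at the top of the script is a common workaround, sketched below.

import tensorflow as tf

# Drop any graph left over from a previous run in the same kernel,
# so node names and tf.summary ops don't accumulate across runs.
tf.reset_default_graph()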
