Deep Learning Notes: TensorFlow (Part 1)

Reference: https://morvanzhou.github.io/tutorials/machine-learning/tensorflow/

1. TensorFlow Basic Architecture

TensorFlow computes with data flow graphs, so we first build a data flow graph and then feed our data (which exists in the form of tensors) into the graph for computation. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays, i.e. tensors, passed between nodes. During training, tensors continually flow from one node of the graph to another, which is where the name TensorFlow comes from.
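A minimal sketch of this idea (assuming TensorFlow 1.x, the version used in the code later in these notes): the graph is built first, and nothing is computed until it is run inside a Session.

import tensorflow as tf

a = tf.constant(3.0)              # node: a constant op
b = tf.constant(4.0)
c = a * b                         # node: multiply; a and b are the tensors flowing into it

with tf.Session() as sess:        # the graph only computes when it is run
    print(sess.run(c))            # 12.0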

 

 

1.6 Activation Functions

 

2. Building a Neural Network

2.1 A function that adds one neural layer

Define a function add_layer() that adds a neural layer. It has four parameters: the inputs, the input size, the output size, and the activation function; the activation function defaults to None.
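A minimal sketch of such a function (the full version used in these notes appears in the lesson 6 code further below):

import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:       # default: a plain linear layer
        return Wx_plus_b
    return activation_function(Wx_plus_b)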

2.2 Build the neural network and visualize training

2.3 Optimizers

TensorFlow provides many different kinds of optimizers. The most basic, and also the most commonly used, is GradientDescentOptimizer. Searching for "tensorflow optimizer" on Google shows the seven optimizers that TensorFlow provides.
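All of them are used the same way: construct the optimizer and call minimize(loss). A toy sketch, with learning rates chosen only for illustration:

import tensorflow as tf

x = tf.Variable(5.0)
loss = tf.square(x - 2.0)                 # toy loss with its minimum at x = 2

train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
# other optimizers from tf.train are swapped in the same way, e.g.:
# train = tf.train.AdamOptimizer(1e-4).minimize(loss)
# train = tf.train.MomentumOptimizer(0.1, momentum=0.9).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train)
    print(sess.run(x))                    # close to 2.0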

 

3. TensorBoard: A Great Visualization Helper

Step 1: Draw the graph layers and their parameters

Put the input layer, the hidden layer, the loss function, and train_step into their own graph layers (name scopes).

A. Two pieces of syntax are mainly used:

  1. Define a layer: with tf.name_scope() (the name goes inside the call, and the code that belongs to the layer is indented below it)
  2. Define parameters: give each parameter Variable a name attribute (see the sketch after this list)
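A minimal sketch of both pieces of syntax (the scope and variable names here follow the pattern used in the lesson 9 code below):

import tensorflow as tf

with tf.name_scope('layer'):                  # 1. everything indented below belongs to the 'layer' box
    with tf.name_scope('weights'):
        Weights = tf.Variable(tf.random_normal([1, 10]), name='W')   # 2. the name shows up in the graph
    with tf.name_scope('biases'):
        biases = tf.Variable(tf.zeros([1, 10]) + 0.1, name='b')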

 

 

B. Save and run the visualization:

1. Save the graph: tf.summary.FileWriter(); run the program to generate the event file

2. Run TensorBoard: in a terminal (CMD), run tensorboard --logdir logs to point it at the log directory

3. Open http://localhost:6006 in Google Chrome
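Putting the three steps together (logs/ is the directory used in the code later in these notes):

import tensorflow as tf

sess = tf.Session()
writer = tf.summary.FileWriter("logs/", sess.graph)   # 1. write the graph to an event file in logs/
# 2. in a terminal:  tensorboard --logdir logs
# 3. open http://localhost:6006 in Google Chrome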

Step 2: Visualize the training process

1. Distributions: tf.summary.histogram()

Create distribution charts showing how the Weights and biases change.

TensorFlow provides the tf.summary.histogram() method for drawing these charts; the first argument is the chart name and the second is the variable to record.
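A sketch of the call, following how it is used inside add_layer() in the lesson 10 code below (layer_name is built from the layer index):

import tensorflow as tf

layer_name = 'layer1'
Weights = tf.Variable(tf.random_normal([1, 10]), name='W')
tf.summary.histogram(layer_name + '/weights', Weights)   # chart name, variable to record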

2. Events: tf.summary.scalar()

The loss chart is set up slightly differently from the above. Loss appears under the TensorBoard Events (SCALARS) tab because we use the tf.summary.scalar() method.

Watching how the loss changes is important: a downward trend means the network is actually learning.
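A sketch; prediction here is only a stand-in for the network output defined earlier:

import tensorflow as tf

ys = tf.placeholder(tf.float32, [None, 1])
prediction = tf.placeholder(tf.float32, [None, 1])   # stands in for the network output
loss = tf.reduce_mean(tf.square(ys - prediction))
tf.summary.scalar('loss', loss)                      # recorded under the Events / SCALARS tab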

Step 3: Merge all the training summaries with tf.summary.merge_all()

Next, merge and package everything: the tf.summary.merge_all() method combines all of our summaries into a single op.
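A minimal sketch (the histogram and scalar summaries stand in for whichever ones were defined earlier):

import tensorflow as tf

w = tf.Variable(tf.zeros([10]))
tf.summary.histogram('w', w)
tf.summary.scalar('w_mean', tf.reduce_mean(w))

merged = tf.summary.merge_all()     # one op that evaluates every summary defined so far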

Step 4: Train with data

The steps above only set up the recording and drawing of the training charts; they do not record any training data yet. To show more intuitively how each parameter changes during training, we record a result every 50 steps. Note also that merged must itself be run before it takes effect.
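The loop below shows the pattern used in the lesson 10 code further down; merged is run like any other op, and its output is written to the event file every 50 steps (sess, writer, train_step and the feed data are the ones defined in that script).

for step in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if step % 50 == 0:
        rs = sess.run(merged, feed_dict={xs: x_data, ys: y_data})   # merged must itself be run
        writer.add_summary(rs, step)                                # record the result at this step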

4. Classification

Step 1: Prepare the dataset (MNIST)

Step 2: Build the network

Step 3: Define the loss function

Step 4: Define the training method

Step 5: Train and print the results

Summary: complete code

5. Dropout: understanding what dropout is

6. What is a CNN (convolutional neural network)?

Define weight_variable(shape), bias_variable(shape), conv2d(x, W), and max_pool_2x2(x).
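A sketch of these helpers (the same definitions appear in the lesson 12 and 13 code below): weights drawn from a truncated normal, small positive biases, stride-1 SAME-padded convolution, and 2x2 max pooling.

import tensorflow as tf

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)    # small random weights
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)             # small positive bias
    return tf.Variable(initial)

def conv2d(x, W):
    # stride 1 in every dimension; SAME padding keeps the spatial size
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 pooling window with stride 2 halves the spatial size
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')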

7. Building the CNN: adding the layers

conv1 layer, conv2 layer, func1 layer, func2 layer

8. Saving and Restoring

Using tf.train.Saver()

The Python code is as follows:

 

####################### lesson 1 start ###########################

#import tensorflow as tf
#import numpy as np
#
#x_data = np.random.rand(100).astype(np.float32)
#y_data = x_data*0.1 + 0.3
#
#Weights = tf.Variable(tf.random_uniform([1],-1.0,1.0))
#biases = tf.Variable(tf.zeros([1]))
#
#y = Weights * x_data + biases
#
#loss = tf.reduce_mean(tf.square(y-y_data))
#
#optimizer = tf.train.GradientDescentOptimizer(0.5)
#train = optimizer.minimize(loss)
#
#init = tf.global_variables_initializer()
#
#sess = tf.Session()
#sess.run(init)
#
#for step in range(200):
#	sess.run(train)
#	if step % 20 == 0:
#		print(step,sess.run(Weights), sess.run(biases))

####################### lesson 1 end ###########################

####################### lesson 2 Session start ###########################

#import tensorflow as tf
#
#matrix1 = tf.constant([[3, 2]])
#matrix2 = tf.constant([[2],
#											[2]])
#											
#product = tf.matmul(matrix1, matrix2)
#
##method 1
#sess = tf.Session()
#result1 = sess.run(product)
#sess.close()
#print(result1)
#
#
##method 2
#with tf.Session() as sess:
#	result2 = sess.run(product)
#print(result2)

####################### lesson 2 Session end ###########################

####################### lesson 3 Variable start ###########################

#import tensorflow as tf
#
#state = tf.Variable(0, name = "counter")
#one = tf.constant(1)
#
#new_value = tf.add(state, one)
#
#update = tf.assign(state, new_value)
#
#init = tf.global_variables_initializer()
#
#with tf.Session() as sess:
#	sess.run(init)
#	for _ in range(4):
#		sess.run(update)
#		print(sess.run(state))
#		
## Note: printing the Variable directly with print(state) does not work; use sess.run(state)!!
##1
##2
##3
##4

####################### lesson 3 Variable end ###########################

####################### lesson 4 placeholder - feed_dict start ###########################

#import tensorflow as tf
#
#input1 = tf.placeholder(tf.float32)
#input2 = tf.placeholder(tf.float32)
#
#output = tf.multiply(input1, input2)
#
#with tf.Session() as sess:
#	print(sess.run(output, feed_dict= {input1:[2.], input2:[4.]}))

####################### lesson 4 placeholder - feed_dict end ###########################

####################### lesson 5 Activation Function start ###########################

#An activation function activates part of the neurons in the network and passes the activated information on to the next layer. In essence, an activation function is a nonlinear equation.
#In TensorFlow, neural networks need an activation function whenever they handle reasonably complex problems.

####################### lesson 5 Activation Function end ###########################

####################### lesson 6 add_layer start ###########################
#import tensorflow as tf
#
#def add_layer(inputs, in_size, out_size, activation_function= None):
#	
#	Weights = tf.Variable(tf.random_normal([in_size, out_size]))
#	baises = tf.Variable(tf.zeros([1, out_size]) + 0.1)
#	
#	Wx_plus_b = tf.matmul(inputs, Weights) + baises
#	
#	if not activation_function:  # if activation_function is None:
#		output = Wx_plus_b
#	else:
#		output = activation_function(Wx_plus_b)
#		
#	return output

####################### lesson 6 add_layer end ###########################

####################### lesson 7 create_NN start ###########################

#import tensorflow as tf
#import numpy as np
#
#def add_layer(inputs, in_size, out_size, activation_function= None):
#	
#	Weights = tf.Variable(tf.random_normal([in_size, out_size]))
#	baises = tf.Variable(tf.zeros([1, out_size]) + 0.1)	
#	Wx_plus_b = tf.matmul(inputs, Weights) + baises	
#	if not activation_function:  # if activation_function is None:
#		output = Wx_plus_b
#	else:
#		output = activation_function(Wx_plus_b)
#		
#	return output
#	
#x_data = np.linspace(-1, 1, 300)[:,np.newaxis]
#noise = np.random.normal(0, 0.05, x_data.shape)
#y_data = np.square(x_data) - 0.5 + noise
#
#xs = tf.placeholder(tf.float32, [None, 1])
#ys = tf.placeholder(tf.float32, [None, 1])
#
#l1 = add_layer(xs, 1, 10, activation_function = tf.nn.relu)
#prediction = add_layer(l1, 10, 1, activation_function = None)
#
#loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
#
#train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
#
#init = tf.global_variables_initializer()
#sess = tf.Session()
#sess.run(init)
#
#for step in range(1000):
#	sess.run(train_step, feed_dict = {xs:x_data, ys:y_data})
#	if step % 50 == 0:
#		print(step, sess.run(loss, feed_dict = {xs:x_data, ys:y_data}))

####################### lesson 7 create_NN end ###########################

####################### lesson 8 create_NN and visualization start ###########################
#
#import tensorflow as tf
#import numpy as np
#import matplotlib.pyplot as plt
#
#def add_layer(inputs, in_size, out_size, activation_function= None):
#	
#	Weights = tf.Variable(tf.random_normal([in_size, out_size]))
#	baises = tf.Variable(tf.zeros([1, out_size]) + 0.1)	
#	Wx_plus_b = tf.matmul(inputs, Weights) + baises	
#	if not activation_function:  # if activation_function is None:
#		output = Wx_plus_b
#	else:
#		output = activation_function(Wx_plus_b)
#		
#	return output
#	
#x_data = np.linspace(-1, 1, 300)[:,np.newaxis]
#noise = np.random.normal(0, 0.05, x_data.shape)
#y_data = np.square(x_data) - 0.5 + noise
#
#xs = tf.placeholder(tf.float32, [None, 1])
#ys = tf.placeholder(tf.float32, [None, 1])
#
#l1 = add_layer(xs, 1, 10, activation_function = tf.nn.relu)
#prediction = add_layer(l1, 10, 1, activation_function = None)
#
#loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
#
#train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
#
#init = tf.global_variables_initializer()
#sess = tf.Session()
#sess.run(init)
#
#
#fig = plt.figure()
#ax = fig.add_subplot(1,1,1)
#ax.scatter(x_data, y_data)
#plt.ion()
##plt.show(block=False)
#
#for step in range(1000):
#	sess.run(train_step, feed_dict = {xs:x_data, ys:y_data})
##	try:
##		ax.lines.remove(lines[0])
##	except Exception:
##			pass
#	if step % 50 == 0:
##		print(step, sessS.run(loss, feed_dict = {xs:x_data}))
#		prediction_value = sess.run(prediction, feed_dict = {xs:x_data})
#		lines = ax.plot(x_data, prediction_value, '-r', lw=3)
#		plt.pause(0.1)
#		ax.lines.remove(lines[0])

####################### lesson 8 create_NN and visualization end ###########################

####################### lesson 9 tensorboard_structure start ###########################
#
#
#import tensorflow as tf
#import numpy as np
#import matplotlib.pyplot as plt
#
#
#def add_layer(inputs, in_size, out_size, activation_function=None):
#    with tf.name_scope('layer'):
#        with tf.name_scope('weights'):
#            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
#        with tf.name_scope('biases'):
#            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
#        with tf.name_scope('Wx_plus_b'):
#            Wx_plus_b = tf.matmul(inputs, Weights) + biases
#            
#    if not activation_function:
#        outputs = Wx_plus_b
#    else:
#        outputs = activation_function(Wx_plus_b)
#    return outputs
#
#
#x_data = np.linspace(-1, 1, 300, dtype=np.float32)[:, np.newaxis]   # [:, np.newaxis] turns it into a column vector
#noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
#y_data = np.square(x_data) - 0.5 + noise
#
#with tf.name_scope('inputs'):
#    xs = tf.placeholder(tf.float32, shape=[None, 1], name = 'x_input')
#    ys = tf.placeholder(tf.float32, shape=[None, 1], name = 'y_input')
#
#l1 = add_layer(xs, 1, 10, activation_function=tf.nn.sigmoid)
#
#prediction = add_layer(l1, 10, 1, activation_function=None)
#
#with tf.name_scope('loss'):
#    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]), name='loss')
#
#with tf.name_scope('train'):
#    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
#
#sess = tf.Session()
#writer = tf.summary.FileWriter("logs/", sess.graph)
## initialize the TensorFlow variables and activate them with the session
#init = tf.global_variables_initializer()
#sess.run(init)

####################### lesson 9 tensorboard_structure end ###########################

####################### lesson 10 tensorboard_training start ###########################
#import tensorflow as tf
#import numpy as np
#import matplotlib.pyplot as plt
#
## define a function that builds one neural layer
#def add_layer(inputs ,
#              in_size,
#              out_size,n_layer,
#              activation_function=None):
#    ## add one more layer and return the output of this layer
#    layer_name='layer%s'%n_layer
#    with tf.name_scope('layer'):
#         with tf.name_scope('weights'):
#              Weights = tf.Variable(tf.random_normal([in_size, out_size]),name='W')
#              # tf.histogram_summary(layer_name+'/weights',Weights)
#              tf.summary.histogram(layer_name + '/weights', Weights) # tensorflow >= 0.12
#
#         with tf.name_scope('biases'):
#              biases = tf.Variable(tf.zeros([1,out_size])+0.1, name='b')
#              # tf.histogram_summary(layer_name+'/biase',biases)
#              tf.summary.histogram(layer_name + '/biases', biases)  # Tensorflow >= 0.12
#
#         with tf.name_scope('Wx_plus_b'):
#              Wx_plus_b = tf.add(tf.matmul(inputs,Weights), biases)
#
#         if activation_function is None:
#            outputs=Wx_plus_b
#         else:
#            outputs= activation_function(Wx_plus_b)
#
#         # tf.histogram_summary(layer_name+'/outputs',outputs)
#         tf.summary.histogram(layer_name + '/outputs', outputs) # Tensorflow >= 0.12
#
#    return outputs
#
## main script
## build the data we need; x_data and y_data are not an exact quadratic relationship, because noise is added to make the data look more realistic
#x_data = np.linspace(-1, 1, 300, dtype=np.float32)[:, np.newaxis]
#noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
#y_data = np.square(x_data) - 0.5 + noise
#
## visualize the data: draw the real data as a scatter plot
#fig = plt.figure()
#ax = fig.add_subplot(1,1,1)     # add_subplot(index) is needed for continuously updated plots
#ax.scatter(x_data, y_data)      # plot the real data points
#plt.ion() # plt.show() would normally block the program; plt.ion() keeps it running
## plt.show(block = False)
#
## define the input placeholders
## tf.placeholder() defines the inputs the network needs; None means any number of samples is accepted, and since there is only one feature the second dimension is 1
#with tf.name_scope('inputs'):
#    xs = tf.placeholder(tf.float32, shape=[None, 1], name= 'x_input')
#    ys = tf.placeholder(tf.float32, shape=[None, 1], name= 'y_input')
#
## one input feature (dimension 1), one hidden layer, one output layer
## l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
#l1 = add_layer(xs, 1, 10, n_layer=1, activation_function = tf.nn.relu)
## add the output layer
#prediction = add_layer(l1, 10, 1,n_layer=2, activation_function = None)
## define the loss function
#with tf.name_scope('loss'):
#    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]), name= 'loss')
#    tf.summary.scalar('loss', loss)	# tensorflow >= 0.12
#
## choose the gradient descent optimizer for training; this is the key step that lets the model improve its accuracy
## the value passed to tf.train.GradientDescentOptimizer() is usually less than 1; 0.1 here means the loss is minimized with a learning rate of 0.1
## optimizer = tf.train.GradientDescentOptimizer(0.1)
## train_step = optimizer.minimize(loss)
#with tf.name_scope('train'):
#    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
#
## create the Session
#sess = tf.Session()
#
## merge all the summaries
#merged = tf.summary.merge_all()
#
#writer = tf.summary.FileWriter('logs/', sess.graph)
#
## initialize the TensorFlow variables and activate them with the session
#init = tf.global_variables_initializer()
#sess.run(init)
#
## start training
#for step in range(1000):
#    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
#    if step % 50 ==0:
#        rs = sess.run(merged, feed_dict= {xs: x_data, ys: y_data})
#        writer.add_summary(rs, step)

####################### lesson 10 tensorboard_training end ###########################

####################### lesson 10 classification start ###########################

#import tensorflow as tf
#from tensorflow.examples.tutorials.mnist import input_data
#
## function that adds a layer
#def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
#    layer_name = 'layer%s' %(n_layer)
#    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
#    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, )
#    Wx_plus_b = tf.matmul(inputs, Weights) + biases
#    if activation_function is None:
#        outputs = Wx_plus_b
#    else:
#        outputs = activation_function(Wx_plus_b)
#    return outputs
#
## function that computes the accuracy
#def compute_accuracy(v_xs, v_ys):
#    global prediction
#    y_pre = sess.run(prediction, feed_dict={xs: v_xs})
#    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
#    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys})
#    return result
#
## load the data
#mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
#
## define the input placeholders
#xs = tf.placeholder(tf.float32, [None, 784])
#ys = tf.placeholder(tf.float32, [None, 10])
#
## network structure: a single layer
#prediction = add_layer(xs, 784, 10, n_layer=1, activation_function=tf.nn.softmax)
#
## loss function
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))
#
## training method: SGD
#train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
#
## create the Session
#sess = tf.Session()
#
## initialize the variables
#init = tf.global_variables_initializer()
#sess.run(init)
#
## start training
#for step in range(1000):
#    batch_xs, batch_ys = mnist.train.next_batch(100)
#    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
#    if step % 50 == 0:
#        print(compute_accuracy(mnist.test.images, mnist.test.labels))
####################### lesson 10 classification end ###########################

####################### lesson 11 dropout start ###########################

#import tensorflow as tf
#from sklearn.datasets import load_digits
#from sklearn.model_selection import train_test_split
#from sklearn.preprocessing import LabelBinarizer
#
## load data
#digits = load_digits()
#X = digits.data  # images of the digits 0 to 9
#y = digits.target
#y = LabelBinarizer().fit_transform(y)       # one-hot (binary) encode y
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)     # split into training and test sets; test_size=.3 means 30% of the data is held out for testing
#
#
#def add_layer(inputs, in_size, out_size, layer_name, activation_function=None, ):
#    # add one more layer and return the output of this layer
#    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
#    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, )
#    Wx_plus_b = tf.matmul(inputs, Weights) + biases
#    Wx_plus_b = tf.nn.dropout(Wx_plus_b, keep_prob)
#    if activation_function is None:
#        outputs = Wx_plus_b
#    else:
#        outputs = activation_function(Wx_plus_b, )
#    tf.summary.histogram(layer_name + '/outputs', outputs)   # record a histogram of the outputs
#    return outputs
#
#
## define placeholder for inputs to network
#keep_prob = tf.placeholder(tf.float32)  # dropout keep probability
#xs = tf.placeholder(tf.float32, [None, 64])  # 8x8 images flattened to 64 features
#ys = tf.placeholder(tf.float32, [None, 10])  # 10 output classes
#
## add output layer
#l1 = add_layer(xs, 64, 100, 'l1', activation_function=tf.nn.tanh)       # first layer (input -> hidden): inputs=xs, in_size=64, out_size=100, tanh activation
#prediction = add_layer(l1, 100, 10, 'l2', activation_function=tf.nn.softmax)    # second layer (hidden -> output): inputs=l1, in_size=100, out_size=10, softmax activation
## the loss between prediction and real data
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
#                                              reduction_indices=[1]))  # loss
#tf.summary.scalar('loss', cross_entropy)      # watch the loss under the TensorBoard Events tab
#train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
#
#sess = tf.Session()
#merged = tf.summary.merge_all()
## summary writer goes in here
#train_writer = tf.summary.FileWriter("logs/train", sess.graph)
#test_writer = tf.summary.FileWriter("logs/test", sess.graph)
#
#sess.run(tf.global_variables_initializer())
#
#for i in range(500):
#    sess.run(train_step, feed_dict={xs: X_train, ys: y_train,  keep_prob: 0.4})
#    if i % 50 == 0:
#        # record loss
#        train_result = sess.run(merged, feed_dict={xs: X_train, ys: y_train,  keep_prob: 1})
#        test_result = sess.run(merged, feed_dict = {xs: X_test, ys: y_test, keep_prob: 1})   # evaluate on the test set
#        train_writer.add_summary(train_result, i)
#        test_writer.add_summary(test_result, i)

####################### lesson 11 dropout end ###########################

####################### lesson 12 tf-18 CNN_1 start ###########################

#import tensorflow as tf
#from tensorflow.examples.tutorials.mnist import input_data
## digits 0 to 9 data
#mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
#
#def compute_accuracy(v_xs, v_ys):
#    global prediction
#    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
#    correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))
#    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
#    return result
#
#def weight_variable(shape):
#    initial = tf.truncated_normal(shape, stddev= 0.1)
#    return tf.Variable(initial)
#
#def bias_variable(shape):    
#    initial = tf.constant(0.1, shape=shape)
#    return tf.Variable(initial)
#
#def conv2d(x, W):
#    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')
#
#def max_pool_2x2(x):
#    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
#
## define placeholder for inputs to network
#xs = tf.placeholder(tf.float32, [None, 784]) # 28x28
#ys = tf.placeholder(tf.float32, [None, 10])
#keep_prob = tf.placeholder(tf.float32)
#
### conv1 layer ##
### conv2 layer ##
### func1 layer ##
### func2 layer ##
#
## the error between prediction and real data
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
#                                              reduction_indices=[1]))       # loss
#train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
#
#sess = tf.Session()
## important step
#sess.run(tf.global_variables_initializer())   # tf.initialize_all_variables() is deprecated
#
#for i in range(1000):
#    batch_xs, batch_ys = mnist.train.next_batch(100)
#    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
#    if i % 50 == 0:
#        print(compute_accuracy(
#            mnist.test.images, mnist.test.labels))

####################### lesson 12 tf-18 CNN_1 end ###########################

####################### lesson 13 tf-19 CNN_2 start ###########################

#import tensorflow as tf
#from tensorflow.examples.tutorials.mnist import input_data
## digits 0 to 9 data
#mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
#
#def compute_accuracy(v_xs, v_ys):
#    global prediction
#    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
#    correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))
#    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
#    return result
#
#def weight_variable(shape):
#    initial = tf.truncated_normal(shape, stddev= 0.1)
#    return tf.Variable(initial)
#
#def bias_variable(shape):    
#    initial = tf.constant(0.1, shape=shape)
#    return tf.Variable(initial)
#
#def conv2d(x, W):
#    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')
#
#def max_pool_2x2(x):
#    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
#
## define placeholder for inputs to network
#xs = tf.placeholder(tf.float32, [None, 784]) # 28x28
#ys = tf.placeholder(tf.float32, [None, 10])
#keep_prob = tf.placeholder(tf.float32)
#x_image = tf.reshape(xs, [-1,28,28,1])
#
### conv1 layer ##
#W_conv1 = weight_variable([5,5, 1,32])
#b_conv1 = bias_variable([32])
#h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
#h_pool1 = max_pool_2x2(h_conv1)
#
### conv2 layer ##
#W_conv2 = weight_variable([5,5, 32,64])
#b_conv2 = bias_variable([64])
#h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
#h_pool2 = max_pool_2x2(h_conv2)
### func1 layer ##
#W_fc1 = weight_variable([7*7*64, 1024])
#b_fc1 = bias_variable([1024])
#h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
#h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
### func2 layer ##
#W_fc2 = weight_variable([1024, 10])
#b_fc2 = bias_variable([10])
#prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
## the error between prediction and real data
#cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
#                                              reduction_indices=[1]))       # loss
#train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
#
#sess = tf.Session()
## important step
#sess.run(tf.global_variables_initializer())
#
#for i in range(1000):
#    batch_xs, batch_ys = mnist.train.next_batch(100)
#    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
#    if i % 50 == 0:
#        print(compute_accuracy(
#            mnist.test.images, mnist.test.labels))
            
####################### lesson 13 tf-19 CNN_2 end ###########################

####################### lesson 14 tensorflow-saver.save start ###########################

#import tensorflow as tf
#
## Save to file
## remember to define the same dtype and shape when restore
#W = tf.Variable([[1,2,3],[3,4,5]], dtype=tf.float32, name="weights")
#b = tf.Variable([[1,2,3]], dtype=tf.float32, name="biases")
#
#init = tf.global_variables_initializer()
#
#saver = tf.train.Saver()
#
#with tf.Session() as sess:
#    sess.run(init)
#    save_path = saver.save(sess, "my_net/save_net.ckpt")
#    print("Save to path:", save_path)

####################### lesson 14 tensorflow-saver.save end ###########################

####################### lesson 15 tensorflow-saver.restore start ###########################

import tensorflow as tf
import numpy as np

# restore variables
# redefine the same shape and same type for your variables

W = tf.Variable(np.arange(6).reshape((2,3)), dtype= tf.float32, name="weights")
b = tf.Variable(np.arange(3).reshape((1,3)), dtype= tf.float32, name="biases")

# not define init

saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess, "my_net/save_net.ckpt")
    print("weights:", sess.run(W))
    print("biases:", sess.run(b))

####################### lesson 15 tensorflow-saver.restore end ###########################

 

  

 

 
