【First Attempt】Training a Simple Deep Neural Network with TensorFlow

REFERENCES:

  1. Hands-On Machine Learning with Scikit-Learn and TensorFlow, Chapter 10: Training a DNN Using Plain TensorFlow

  2. Tensorflow初探之MNIST数据集学习 (a Chinese post on first steps with TensorFlow and the MNIST dataset)

Rambling

Since a project I have been putting off for the better part of a year needs deep neural networks, I have been learning them on and off. At the start of the year I spent two months on the Deep Learning specialization on Coursera and took a pile of notes, mostly conceptual understanding and mathematical derivation, with far too little hands-on practice. So here I am this summer, starting from the basics again. The book I chose is from the reading list a professor gave when I took machine learning at the University of Edinburgh; back then just working through the assigned materials already ate up too much of my time, so naturally I only got through the recommended book's introduction. Even then I had a vague sense that it was a good one, because it was so easy to follow, something I came to appreciate deeply after a full semester there. Since the machine learning fundamentals are not my focus right now, and I have already studied them systematically, I started reading directly from Chapter 9, where the TensorFlow material begins.

My study setup is an iPad Pro 12.9 plus MarginNote 3. Strongly recommended: the app has a bit of a learning curve, but it is absolutely worth it.

Code

The code below is 100% hand-typed from reference 1, in reading order; my personal contribution is the barely coherent comments.

# Construction Phase
import tensorflow as tf
import numpy as np

n_inputs = 28*28
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10

# input layer:
# On shape: the data here is 2-D (image recognition), with rows as instances and columns as features (one pixel = one feature). The number of instances per training batch is not known in advance, only the feature count, hence shape=(None, n_inputs).
X = tf.placeholder(tf.float32, shape=(None,n_inputs), name='X')     # act as the input layer
y = tf.placeholder(tf.int64, shape=(None), name='y')
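# (Added) at this point X.get_shape() prints (?, 784): the batch dimension
# stays unknown until sess.run() receives an actual batch through feed_dict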

# hidden layers:
# create neural network layer:
def neuron_layer(X, n_neurons, name, activation=None):
    # create a name scope with the layer's name, so the graph in TensorBoard stays tidy: nodes belonging to the same layer are grouped together
    with tf.name_scope(name):
        n_inputs = int(X.get_shape()[1])
        # get the number of input units:
        # X.get_shape() returns the static shape: (n_instances, n_inputs)
        # compute W:
        # standard deviation: using this value as the stddev helps the model converge quickly
        stddev = 2 / np.sqrt(n_inputs)
        # randomly initialize the weights from a (truncated) Gaussian distribution:
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
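        # (Added) truncated_normal re-draws any sample that falls more than 2
        # stddevs from the mean, so no weight starts out at an extreme value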
        W = tf.Variable(init, name="weights")
        b = tf.Variable(tf.zeros([n_neurons]), name="biases")
        z = tf.matmul(X, W) + b
        if activation=="relu":
            return tf.nn.relu(z)
        else:
            return z

# build the DNN (the hand-rolled version below is kept commented out):
"""
with tf.name_scope("dnn"):
    hidden1 = neuron_layer(X, n_hidden1, "hidden1", activation="relu")
    hidden2 = neuron_layer(hidden1, n_hidden2, "hidden2", activation="relu")
    logits = neuron_layer(hidden2, n_outputs, "outputs")
"""

# instead of rolling your own, you can build the layers with a function TensorFlow already provides:
# fully_connected creates a fully connected layer, with ReLU as the default activation function
from tensorflow.contrib.layers import fully_connected
with tf.name_scope("dnn"):
    hidden1 = fully_connected(X, n_hidden1, scope="hidden1")
    hidden2 = fully_connected(hidden1, n_hidden2, scope="hidden2")
    logits = fully_connected(hidden2, n_outputs, scope="outputs", activation_fn=None)
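# (Added) note: in later TF 1.x releases tf.contrib.layers.fully_connected was
# superseded by tf.layers.dense; a roughly equivalent layer would be
# tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1")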

# define the cost function (use cross entropy, which measures how close the network's output vector is to the target vector):
with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    # the function above is equivalent to applying the softmax function and then computing the cross entropy,
    # but it is more efficient and takes care of numerical-stability corner cases
    loss = tf.reduce_mean(xentropy, name="loss")
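    # (Added, not from the book) the manual equivalent, spelled out for intuition;
    # the fused op above is preferred because this version is less numerically stable:
    y_proba = tf.nn.softmax(logits)
    xentropy_manual = -tf.reduce_sum(
        tf.one_hot(y, n_outputs) * tf.log(y_proba + 1e-10), axis=1)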

# define a GradientDescentOptimizer:
learning_rate = 0.01
with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    training_op = optimizer.minimize(loss)
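    # (Added) minimize() is shorthand for compute_gradients() followed by
    # apply_gradients() on all trainable variables in the graph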

# evaluation:
# use accuracy as performance measure
with tf.name_scope("evaluation"):
    correct = tf.nn.in_top_k(logits, y, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
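    # (Added) with k=1, in_top_k amounts to comparing the argmax directly:
    correct_alt = tf.equal(tf.argmax(logits, axis=1), y)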

# initialize all variables:
init = tf.global_variables_initializer()
saver = tf.train.Saver()
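# (Added) the Saver adds save/restore ops for every variable in the graph;
# saver.save() at the end of training writes them all to a checkpoint file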

# Execution Phase:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data")

n_epochs = 400
batch_size = 50
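# (Added) each epoch below runs num_examples // batch_size mini-batch gradient
# steps (55000 // 50 = 1100 with this loader's default train split)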

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(training_op, feed_dict={X:X_batch, y:y_batch})
        acc_train = accuracy.eval(feed_dict={X:X_batch, y:y_batch})
        acc_test = accuracy.eval(feed_dict={X:mnist.test.images, y:mnist.test.labels})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
    save_path = saver.save(sess, "./my_model_final.ckpt")


While running this I hit a lot of warnings around importing the MNIST dataset, similar to the ones in reference 2. My first reaction: has the library I am using been deprecated? Following a suggestion in the comments there, I routed the download through a proxy and the dataset came down fairly quickly (though for convenience I also downloaded the compressed data archive directly; the download link is here).
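As an aside, newer TF 1.x builds can also load MNIST without the deprecated tutorials module. A minimal sketch, assuming a TensorFlow version that ships tf.keras; the flattening and pixel scaling are my own choices to match the placeholders above:

import numpy as np
from tensorflow.keras.datasets import mnist as keras_mnist

# raw uint8 arrays: images are (n, 28, 28), labels are (n,)
(X_train, y_train), (X_test, y_test) = keras_mnist.load_data()
# flatten to (n, 784) and scale pixels into [0, 1] to fit the X placeholder
X_train = X_train.reshape(-1, 28 * 28).astype(np.float32) / 255.0
X_test = X_test.reshape(-1, 28 * 28).astype(np.float32) / 255.0
y_train = y_train.astype(np.int64)   # match the int64 dtype of y
y_test = y_test.astype(np.int64)

With this route you batch the arrays yourself instead of calling mnist.train.next_batch.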

After training, accuracy on the test set lands around 97.97%.

Predicting My Own Handwritten Digits

To be continued...
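(As a note to future me, a minimal sketch of what the restore-and-predict step could look like, reusing the graph built above. my_digit is a hypothetical input: a 28x28 grayscale image already flattened to shape (1, 784) and scaled like the training pixels.)

import numpy as np

with tf.Session() as sess:
    saver.restore(sess, "./my_model_final.ckpt")    # reload the trained weights
    Z = logits.eval(feed_dict={X: my_digit})        # my_digit: hypothetical (1, 784) float32 array
    y_pred = np.argmax(Z, axis=1)                   # index of the largest logit
    print("Predicted digit:", y_pred[0])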
