Program 1


Task description: given x = 3.0, y = 100.0, and the formula x×W + b = y, find the optimal W and b.


TensorFlow implementation:

# -*- coding: utf-8 -*-
import tensorflow as tf

# Declare placeholders x and y
x = tf.placeholder("float", shape=[None, 1])
y = tf.placeholder("float", shape=[None, 1])

# Declare trainable variables
W = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))

# The model: x*W + b
result = tf.matmul(x, W) + b

# Loss function: sum of squared errors
lost = tf.reduce_sum(tf.pow((result - y), 2))

# Optimizer: plain gradient descent with learning rate 0.001
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(lost)

with tf.Session() as sess:
    # Initialize variables
    sess.run(tf.global_variables_initializer())
    # Fixed values for x and y
    x_s = [[3.0]]
    y_s = [[100.0]]

    step = 0
    while True:
        step += 1
        feed = {x: x_s, y: y_s}
        # Run one optimization step
        sess.run(train_step, feed_dict=feed)
        if step % 50 == 0:
            loss_val = sess.run(lost, feed_dict=feed)
            print('step: {0},  loss: {1}'.format(step, loss_val))
            if loss_val < 0.00001 or step > 3000:
                print('')
                print('final loss is: {}'.format(loss_val))
                print('final result of {0} = {1}'.format('x×W+b', 3.0 * sess.run(W) + sess.run(b)))
                print("W : %f" % sess.run(W))
                print("b : %f" % sess.run(b))
                break


Output:

step: 50,  loss: 1326.19543457
step: 100,  loss: 175.879058838
step: 150,  loss: 23.325012207
step: 200,  loss: 3.09336590767
step: 250,  loss: 0.410243988037
step: 300,  loss: 0.0544071868062
step: 350,  loss: 0.00721317622811
step: 400,  loss: 0.000956638017669
step: 450,  loss: 0.000126981700305
step: 500,  loss: 1.68478582054e-05
step: 550,  loss: 2.23610550165e-06

final loss is: 2.23610550165e-06
final result of x×W+b =  [[ 99.99850464]]
W : 29.999552
b : 9.999846


The task is trivial. With the learning rate set to 0.001, optimization finishes after 550 iterations; a larger learning rate such as 0.005 would speed up convergence. The solution found is W = 29.999552, b = 9.999846, so
x×W + b = 3.0×29.999552 + 9.999846 = 99.998502, approximately the target 100.0.
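
Incidentally, the single equation 3×W + b = 100 has infinitely many solutions. Starting from zero initialization, gradient descent only ever moves along the gradient direction (3, 1), so it converges to the minimum-norm solution W = 30, b = 10, which is exactly what the run above approaches. As an illustrative sketch of what GradientDescentOptimizer computes here (assuming the same squared-error loss and learning rate 0.001 as in the code above), the loop can be written in plain NumPy:

# Minimal NumPy re-implementation of program 1's training loop (illustrative sketch)
import numpy as np

x, y = 3.0, 100.0
W, b = 0.0, 0.0
lr = 0.001

for step in range(550):
    err = x * W + b - y        # prediction error
    W -= lr * 2.0 * err * x    # d(loss)/dW = 2*err*x
    b -= lr * 2.0 * err        # d(loss)/db = 2*err

print(W, b, (x * W + b - y) ** 2)  # approx. 30.0, 10.0, loss ~ 2e-6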



Program 2


Task description: x and y are 2×2 matrices, x = [[1.0, 3.0], [3.2, 4.]], y = [[6.0, 3.0], [5.2, 43.]]; with the formula x×W + b = y, find the optimal W and b.

# -*- coding: utf-8 -*-
import tensorflow as tf

# Declare placeholders x and y with shape [2, 2]
x = tf.placeholder("float", shape=[2, 2])
y = tf.placeholder("float", shape=[2, 2])

# Declare trainable variables
W = tf.Variable(tf.zeros([2, 2]))
b = tf.Variable(tf.zeros([1]))

# The model: x*W + b (b is broadcast to every element)
result = tf.matmul(x, W) + b

# Loss function: sum of squared errors
lost = tf.reduce_sum(tf.pow((y - result), 2))

# Optimizer: plain gradient descent with learning rate 0.001
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(lost)


with tf.Session() as sess:
    # Initialize variables
    sess.run(tf.global_variables_initializer())

    # Fixed values for x and y
    x_s = [[1.0, 3.0], [3.2, 4.]]
    y_s = [[6.0, 3.0], [5.2, 43.]]

    step = 0
    while True:
        step += 1
        feed = {x: x_s, y: y_s}

        # Run one optimization step
        sess.run(train_step, feed_dict=feed)

        if step % 500 == 0:
            loss_val = sess.run(lost, feed_dict=feed)
            print('step: {0},  loss: {1}'.format(step, loss_val))
            if loss_val < 0.00001 or step > 10000:
                print('')
                print('final loss is: {}'.format(loss_val))
                print("W : {}".format(sess.run(W)))
                print("b : {}".format(sess.run(b)))

                result1 = tf.matmul(x_s, W) + b
                print('final result is: {}'.format(sess.run(result1)))
                print('final error is: {}'.format(sess.run(result1) - y_s))

                break

Output:

step: 500,  loss: 59.3428421021
step: 1000,  loss: 8.97444725037
step: 1500,  loss: 1.40089821815
step: 2000,  loss: 0.22409722209
step: 2500,  loss: 0.036496296525
step: 3000,  loss: 0.00602086028084
step: 3500,  loss: 0.00100283313077
step: 4000,  loss: 0.000168772909092
step: 4500,  loss: 2.86664580926e-05
step: 5000,  loss: 4.90123693453e-06

final loss is: 4.90123693453e-06
W : [[ -2.12640238  20.26368904]
 [  3.87999701  -4.58247852]]
b : [-3.51479006]
final result is: [[  5.99879789   3.00146341]
 [  5.20070982  42.99909973]]
final error is: [[-0.00120211  0.00146341]
 [ 0.00070982 -0.00090027]]
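
This system is underdetermined too: x×W + b = y provides four equations for five unknowns (the four entries of W plus the scalar b, which is broadcast to every element), so gradient descent simply settles on one of the solutions. A quick NumPy check confirms that the learned parameters reproduce y (W and b copied from the output above):

# Verify the learned parameters against the target y (values from the output above)
import numpy as np

x_s = np.array([[1.0, 3.0], [3.2, 4.0]])
W = np.array([[-2.12640238, 20.26368904],
              [ 3.87999701, -4.58247852]])
b = np.array([-3.51479006])

print(x_s.dot(W) + b)  # close to [[6.0, 3.0], [5.2, 43.0]]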


Program 3 (with visualization)


Task description: X consists of 128 two-dimensional samples [x1, x2], and Y is a function of x1 and x2: Y = x1 + 10*x2. The model is Y = (X×w1 + b1)×w2 + b2 (implemented below as a two-layer network with ReLU activations); find the optimal w1, b1, w2, b2.

# -*- coding: utf-8 -*-
import tensorflow as tf
from numpy.random import RandomState

# Size of each training batch
batch_size = 8

# None in the shape means that dimension can vary
x = tf.placeholder(tf.float32, shape=(None, 2), name='x-input')
y_ = tf.placeholder(tf.float32, shape=(None, 1), name='y-input')

# Network parameters
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))
bias1 = tf.Variable(tf.random_normal([3], stddev=1, seed=1))
bias2 = tf.Variable(tf.random_normal([1], stddev=1, seed=1))

# Forward pass: 2 inputs -> 3 hidden units (ReLU) -> 1 output (ReLU)
a = tf.nn.relu(tf.matmul(x, w1) + bias1)
y = tf.nn.relu(tf.matmul(a, w2) + bias2)

# Loss function and optimizer
loss = tf.reduce_sum(tf.pow((y - y_), 2))
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)  # Adam optimizer

# Produce a synthetic dataset; the fixed seed makes every run generate the same data
rdm = RandomState(seed=1)
dataset_size = 128
X = rdm.rand(dataset_size, 2)
Y = [[x1 + 10 * x2] for (x1, x2) in X]

# Create a session to run the TensorFlow program
with tf.Session() as sess:

    # Name scopes for TensorBoard visualization
    with tf.name_scope("inputs"):
        tf.summary.histogram('X', X)

    with tf.name_scope("target"):
        tf.summary.histogram('Target', Y)

    with tf.name_scope("outputs"):
        tf.summary.histogram('Y', y)

    with tf.name_scope('loss'):
        tf.summary.histogram('Loss', loss)

    summary_op = tf.summary.merge_all()
    summary_writer = tf.summary.FileWriter('./log/', tf.get_default_graph())

    # Initialize variables
    sess.run(tf.global_variables_initializer())

    # Number of training steps
    STEPS = 10000
    for i in range(STEPS + 1):
        # Pick batch_size samples for this step
        start = (i * batch_size) % dataset_size
        end = min(start + batch_size, dataset_size)

        # Train on the selected samples and update the parameters
        sess.run(train_step, feed_dict={x: X[start: end], y_: Y[start: end]})
        if i % 500 == 0:
            # Periodically evaluate and print the loss on the full dataset
            total_loss, summary = sess.run([loss, summary_op], feed_dict={x: X, y_: Y})
            print("After %d training steps, loss on all data is %g" % (i, total_loss))
            # Reuse the writer created above rather than opening a new one each time
            summary_writer.add_summary(summary, i)

    # After training, print the learned parameters
    print(sess.run(w1))
    print(sess.run(w2))

Output:

After 0 training steps, loss on all data is 2599.94
After 500 training steps, loss on all data is 873.661
After 1000 training steps, loss on all data is 667.791
After 1500 training steps, loss on all data is 483.075
After 2000 training steps, loss on all data is 300.244
After 2500 training steps, loss on all data is 159.576
After 3000 training steps, loss on all data is 74.0152
After 3500 training steps, loss on all data is 30.0223
After 4000 training steps, loss on all data is 10.8486
After 4500 training steps, loss on all data is 3.86847
After 5000 training steps, loss on all data is 1.67753
After 5500 training steps, loss on all data is 0.870904
After 6000 training steps, loss on all data is 0.473931
After 6500 training steps, loss on all data is 0.262818
After 7000 training steps, loss on all data is 0.132299
After 7500 training steps, loss on all data is 0.0585541
After 8000 training steps, loss on all data is 0.022748
After 8500 training steps, loss on all data is 0.00789603
After 9000 training steps, loss on all data is 0.00259982
After 9500 training steps, loss on all data is 0.000722203
After 10000 training steps, loss on all data is 0.000218332
[[-0.81131822  0.74178803 -0.06654923]
 [-2.4427042   1.72580242  3.50584793]]
[[-0.81131822]
 [ 1.53606057]
 [ 2.09628034]]

TensorBoard visualization


After program 3 runs, a log folder is created in the program's directory; it stores the values recorded during training.
From the directory that contains the log folder, run the tensorboard command:

tensorboard --logdir=log


Enter the address TensorBoard reports into a browser, here "http://dcrmg:6006" (on a local machine this is typically http://localhost:6006), to see the visualization of the values recorded by the program:


X is a 2-D array, and Target consists of 128 floats spread roughly between 0 and 11 (since Y = x1 + 10*x2 with x1 and x2 uniform in [0, 1)). After about 4000 iterations the predicted Y gets closer and closer to the true Target.
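
To double-check the Target range, one can regenerate the dataset with the same seed as program 3 and print its extremes (a quick standalone snippet):

# Regenerate program 3's dataset and inspect the range of Target
import numpy as np
from numpy.random import RandomState

rdm = RandomState(seed=1)
X = rdm.rand(128, 2)
Y = X[:, 0] + 10 * X[:, 1]

print(Y.min(), Y.max())  # both fall inside [0, 11); exact values depend on the draw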


Histogram distributions:


