TensorFlow Notes

Notes from learning TensorFlow.

TensorFlow Environment

Install

conda-tensorflow

On Ubuntu, add Anaconda to the PATH in ~/.bashrc, then manage environments with conda:

export PATH=~/anaconda2/bin:$PATH   # in ~/.bashrc: put Anaconda's bin directory first
conda create -n (env name)          # create a new conda environment
source activate (env name)          # activate the environment
conda env list                      # list all environments
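
With the environment active, TensorFlow itself can be installed from PyPI; a minimal sketch (the CPU-only package is assumed; the GPU build additionally needs CUDA/cuDNN):

pip install tensorflow         # CPU-only build
# pip install tensorflow-gpu  # optional GPU build (requires CUDA/cuDNN)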

Test

import tensorflow as tf
a = tf.constant([1.0, 2.0], name = "a")
b = tf.constant([2.0, 3.0], name = "b")
result = a + b
sess = tf.Session()
# 2019-02-18 21:09:38.038119: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
sess.run(result)
# result: array([3., 5.], dtype=float32)

Warning

If you have a GPU (or simply want to hide the CPU-instruction warning), it can be suppressed:

# Just disables the warning, doesn't enable AVX/FMA
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

Easy TensorFlow

In my case

cd Documents/learning_tensorflow

Graph

Computation graph

Every node in the graph represents a computation; the edges carry the tensors flowing between nodes.

print(result)
# Tensor("add:0", shape=(2,), dtype=float32)
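
A small sketch of inspecting which graph a tensor belongs to (it reuses the a tensor from the Test section; g1 and c are made-up names for illustration):

print(a.graph is tf.get_default_graph())  # True: a lives in the default graph

g1 = tf.Graph()                           # an explicitly created graph
with g1.as_default():
    c = tf.constant(1.0, name="c")        # c belongs to g1, not the default graph
print(c.graph is g1)                      # True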

Tensor

A tensor has three attributes: a name, a shape, and a type (dtype).

Supported types:

tf.float32 tf.float64

tf.int8 tf.int16 tf.int32 tf.int64 tf.uint8

tf.bool

tf.complex64 tf.complex128
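
A small sketch printing those attributes (reusing the result tensor from the Test section; the extra constant c is only for illustration):

print(result.name)    # add:0
print(result.shape)   # (2,)
print(result.dtype)   # <dtype: 'float32'>

# the dtype can be set explicitly; mixing float32 and float64 in one op raises an error
c = tf.constant([1, 2], name="c", dtype=tf.float32)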

Session

A session owns and manages the resources needed to run the computation graph.

sess = tf.Session()
sess.run(...)
# release the session's resources
sess.close()

Using a Python context manager:

with tf.Session() as sess:
    sess.run(...)
# no need to call close(); resources are released when the with-block exits

Neural Networks

TensorFlow Playground (https://playground.tensorflow.org), an interactive neural network demo

feature vector

learning rate

activation function

regularization

# matrix multiplication
a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

jupyter notebook (xxxx).ipynb
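
A runnable sketch of this two-layer forward propagation (the layer sizes, random seed, and sample input are assumptions for illustration):

import tensorflow as tf

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))  # weights of layer 1
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))  # weights of layer 2
x = tf.placeholder(tf.float32, shape=(None, 2), name="x-input")

a = tf.matmul(x, w1)   # hidden layer
y = tf.matmul(a, w2)   # output layer

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[0.7, 0.9]]}))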

Deep Neural Networks

Activation Function

De-linearization: activation functions make the model non-linear.

ReLU:

\(f(x)=\max(x, 0)\)

sigmoid:

\(f(x)= \frac{1}{1+e^{-x}}\)

tanh:

\(f(x) = \frac{1-e^{-2x}}{1+e^{-2x}}\)

tf.nn.relu tf.sigmoid tf.tanh

a = tf.nn.relu(tf.matmul(x, w1) + biases1)
b = tf.nn.relu(tf.matmul(a, w2) + biases2)

Stacking multiple layers of such non-linear transformations is what gives a deep network its expressive power.

XOR problem: a perceptron is a single-layer neural network and cannot represent XOR; a multi-layer network can.

Loss Function

cross entropy

\[H(p, q)= - \sum_{x} p(x)\log q(x) \]

\(\forall x,\ p(X=x) \in [0,1] \ \text{and} \ \sum_{x} p(X=x) = 1\)

\(softmax(y)_i = y_i' = \frac{e^{y_i}}{\sum_{j=1}^{n} e^{y_j}}\)

cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
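
Cross entropy can also be written out by hand; a sketch assuming y is the predicted probability distribution and y_ the true distribution, with clipping to keep log() away from zero:

# manual cross entropy: clip predictions so tf.log never sees 0
cross_entropy = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0)))

# softmax + cross entropy in one op, averaged over the batch
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))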

MSE: mean squared error

\[MSE(y,y') = \frac{\sum_{i=1}^{n}(y_i-y_i')^2}{n} \]

mse = tf.reduce_mean(tf.square(y_ - y))

Custom loss function

\[Loss(y,y')=\sum_{i=1}^{n} f(y_i, y_i'), \quad f(x,y)=\begin{cases} a(x-y) & x>y \\ b(y-x) & x \leq y \end{cases} \]

# tf.select was removed in TF 1.0; tf.where is its replacement
loss = tf.reduce_sum(tf.where(tf.greater(v1, v2), (v1 - v2) * a, (v2 - v1) * b))
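
A runnable sketch of this loss (the penalty coefficients a and b and the sample values are made up):

v1 = tf.placeholder(tf.float32, shape=(None,))
v2 = tf.placeholder(tf.float32, shape=(None,))
a, b = 10.0, 1.0   # assumed penalties for the two cases

loss = tf.reduce_sum(tf.where(tf.greater(v1, v2), (v1 - v2) * a, (v2 - v1) * b))

with tf.Session() as sess:
    # first element: v1 <= v2, cost b*(2-1)=1; second: v1 > v2, cost a*(2-1)=10
    print(sess.run(loss, feed_dict={v1: [1.0, 2.0], v2: [2.0, 1.0]}))  # 11.0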

Optimization

backpropagation

gradient descent

learning rate \(\eta\)

\[ \theta_{n+1}= \theta_n - \eta\frac{\partial}{\partial \theta_n} J(\theta_n) \]

stochastic gradient descent
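
A sketch of how the update rule is applied in practice (the learning rate value and the loss tensor are assumptions; running train_step on a minibatch performs one SGD step):

learning_rate = 0.001   # assumed value
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# inside the training loop, each call updates the weights once:
# sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})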

Regularization adds a penalty on the weights to the loss: \(J(\theta) \to J(\theta) + \lambda R(w)\)

L1 Regularization: \(R(w)=\|w\|_1=\sum_i |w_i|\)

L2 Regularization: \(R(w) = \|w\|_2 ^2 = \sum_i w_i ^2\)

Combining both: \(R(w) = \sum_i \left( \alpha|w_i| + (1 - \alpha) w_i ^2 \right)\)

# L1 regularization added to the MSE loss ("lambda" is a Python keyword, so use another name)
loss = tf.reduce_mean(tf.square(y_ - y)) + tf.contrib.layers.l1_regularizer(lambda_)(w)
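
When a network has many weight matrices, the penalty terms are usually gathered in a collection; a sketch under that assumption (get_weight, the shapes, and the rate 0.001 are made up for illustration):

lambda_ = 0.001   # regularization rate

def get_weight(shape):
    w = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    # store this layer's L2 penalty in a collection named 'losses'
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(lambda_)(w))
    return w

# ... build the network with get_weight(...) ...
mse_loss = tf.reduce_mean(tf.square(y_ - y))
tf.add_to_collection('losses', mse_loss)
loss = tf.add_n(tf.get_collection('losses'))   # total loss = MSE + all penalties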

ExponentialMovingAverage

The decay rate actually used is \(\min \left\{ \text{decay}, \frac{1+\text{num\_updates}}{10+\text{num\_updates}} \right\}\)

tf.train.ExponentialMovingAverage(0.99, step)
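
A minimal usage sketch (the variable v, the step variable, and the assigned value 5 are illustrative assumptions):

v = tf.Variable(0, dtype=tf.float32)       # the variable to be averaged
step = tf.Variable(0, trainable=False)     # plays the role of num_updates

ema = tf.train.ExponentialMovingAverage(0.99, step)
maintain_averages_op = ema.apply([v])      # creates a shadow variable for v

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.assign(v, 5))
    sess.run(maintain_averages_op)         # shadow moves toward v by the decay rule
    print(sess.run([v, ema.average(v)]))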

CNN

A typical CNN is built from the following layers (a minimal sketch of a convolutional layer plus pooling follows the list):

Input layer

Convolutional layer: slides filters (kernels) over the input

Pooling layer

Fully connected layer

Softmax layer
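
A minimal sketch of a convolutional layer followed by max pooling (the input shape, filter size, depth, and strides are assumptions):

import tensorflow as tf

# input: a batch of 28x28 single-channel images (assumed shape)
x = tf.placeholder(tf.float32, [None, 28, 28, 1])

# 5x5 filters (kernels), 1 input channel, 32 output channels
filter_weights = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
biases = tf.Variable(tf.constant(0.1, shape=[32]))

# convolutional layer: stride 1, zero padding, then ReLU activation
conv = tf.nn.conv2d(x, filter_weights, strides=[1, 1, 1, 1], padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, biases))

# pooling layer: 2x2 max pooling with stride 2
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')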
