tensorflow2.0-1
1. Environment Setup
Three builds: CPU / GPU / TPU; Python versions: 3.5-3.7; package manager: Miniconda;
CPU: pip install tensorflow-cpu==2.3.0 -i https://pypi.douban.com/simple; ##Douban mirror
pip install tensorflow-cpu==2.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/; ##Tsinghua mirror
pip install tensorflow-cpu==2.3.0 -i https://pypi.mirrors.ustc.edu.cn/simple/; ##USTC mirror
pip install tensorflow-cpu==2.3.0 -i https://mirrors.aliyun.com/pypi/simple/; ##Aliyun mirror
GPU: requires an NVIDIA GPU with compute capability 3.5 or higher; see https://developer.nvidia.com/cuda-gpus for the list;
Driver version: 418.x or newer; check by running nvidia-smi on the command line;
TensorFlow 2.3.0 requires CUDA 10.1 or higher and cuDNN 7.6 or higher;
GPU: pip install tensorflow-gpu==2.3.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/; ##Tsinghua mirror
The other mirrors work the same way;
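After installing, a quick sanity check confirms the version and whether TensorFlow can see a GPU; a minimal sketch assuming TensorFlow 2.x is installed:

```python
import tensorflow as tf  # assumes TensorFlow 2.x is installed

# Print the installed version (should be 2.3.0 if the commands above were used)
print(tf.__version__)
# Empty list means no GPU is visible; a non-empty list confirms the GPU build works
print(tf.config.list_physical_devices("GPU"))
```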
2. Basic Syntax
1. Launching a graph
Note: the graph-and-session API below is TensorFlow 1.x style; with TensorFlow 2.x installed, import the compatibility module and disable eager execution first.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

m1 = tf.constant([[3, 3]])
m2 = tf.constant([[2], [3]])
product = tf.matmul(m1, m2)
print(product)  # prints the tensor, not its value

sess = tf.Session()
result = sess.run(product)
print(result)  # [[15]]
sess.close()

# Equivalent, with the session closed automatically
with tf.Session() as sess:
    result = sess.run(product)
    print(result)
Inside a function call, press Shift+Tab (in Jupyter) to see the function's detailed documentation;
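Since the install targets TensorFlow 2.x, the same matrix product can also be computed eagerly, without building a graph or session; a minimal TF 2.x sketch:

```python
import tensorflow as tf  # assumes TensorFlow 2.x with eager execution (the default)

m1 = tf.constant([[3, 3]])
m2 = tf.constant([[2], [3]])
product = tf.matmul(m1, m2)  # runs immediately, no Session needed
print(product.numpy())       # [[15]]
```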
2. Variables
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.Variable([1, 2])
b = tf.Variable([3, 3])
# Add a subtraction op
sub = tf.subtract(a, b)
# Add an addition op
ad = tf.add(a, b)
# Variable initialization op
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(sub))  # [-2 -1]
    print(sess.run(ad))   # [4 5]
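In TensorFlow 2.x, variables are created the same way but need no initializer op or session; a minimal eager sketch of the same subtraction and addition:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

a = tf.Variable([1, 2])
b = tf.Variable([3, 3])
sub = tf.subtract(a, b)  # evaluated immediately
ad = tf.add(a, b)
print(sub.numpy())  # [-2 -1]
print(ad.numpy())   # [4 5]
```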
3. Operations
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Create a variable initialized to 0
state = tf.Variable(0, name="counter")
new_value = tf.add(state, 1)
# Update state by assigning new_value to it
update = tf.assign(state, new_value)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(state))  # 0
    for _ in range(5):
        sess.run(update)
        print(sess.run(state))  # 1, 2, 3, 4, 5
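The TF 2.x equivalent of the counter uses the variable's own `assign_add` method instead of a separate assign op and session; a minimal eager sketch:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

state = tf.Variable(0, name="counter")
for _ in range(5):
    state.assign_add(1)  # in-place increment, runs immediately
print(state.numpy())     # 5
```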
4. Fetch and Feed
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Fetch: run multiple ops in a single session.run call
input1 = tf.constant(3.0)
input2 = tf.constant(2.0)
input3 = tf.constant(2.5)
# Add an addition op and a multiplication op
add = tf.add(input1, input2)
mul = tf.multiply(input1, add)
with tf.Session() as sess:
    result = sess.run([add, mul])  # fetch both results at once
    print(result)  # [5.0, 15.0]

# Feed: create placeholders and supply values at run time
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)
with tf.Session() as sess:
    # Values are passed in as a dictionary
    print(sess.run(output, feed_dict={input1: [8.], input2: [2.]}))  # [16.]
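In TF 2.x, placeholders and feed_dict are replaced by ordinary Python functions, optionally compiled into a graph with `tf.function`; a minimal sketch of the feed example:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

@tf.function  # traces the function into a graph; plain Python also works
def multiply(x, y):
    return tf.multiply(x, y)

# Arguments play the role of the old placeholders
result = multiply(tf.constant([8.0]), tf.constant([2.0]))
print(result.numpy())  # [16.]
```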
3. Datasets
1. MNIST dataset, official site: http://yann.lecun.com/exdb/mnist/
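Rather than downloading the files by hand, MNIST can be loaded directly through Keras (bundled with TensorFlow 2.x); a minimal sketch, assuming network access on first run since the data is downloaded and cached:

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# Downloads on first call, then loads from the local cache
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)
print(x_test.shape)   # (10000, 28, 28)
```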
4. Hands-On
1. Building a regression model
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
import numpy as np
import matplotlib.pyplot as plt

# Generate 200 noisy points along y = x^2
x_data = np.linspace(-0.5, 0.5, 200)[:, np.newaxis]
noise = np.random.normal(0, 0.02, x_data.shape)
y_data = np.square(x_data) + noise

# Define two placeholders
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])

# Define the hidden layer
weight_l1 = tf.Variable(tf.random.normal([1, 10]))
biase_l1 = tf.Variable(tf.zeros([1, 10]))
w_b_l1 = tf.matmul(x, weight_l1) + biase_l1
l1 = tf.nn.tanh(w_b_l1)  # hidden-layer output

# Define the output layer
weight_l2 = tf.Variable(tf.random.normal([10, 1]))
biase_l2 = tf.Variable(tf.zeros([1, 1]))
w_b_l2 = tf.matmul(l1, weight_l2) + biase_l2
prediction = tf.nn.tanh(w_b_l2)  # network output

# Quadratic (mean-squared-error) cost function
loss = tf.reduce_mean(tf.square(y - prediction))

# Optimize with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(2000):
        sess.run(train_step, feed_dict={x: x_data, y: y_data})
    # Get the predictions
    predict_value = sess.run(prediction, feed_dict={x: x_data})
    # Plot the data and the fitted curve
    plt.figure()
    plt.scatter(x_data, y_data)
    plt.scatter(x_data, predict_value)
    plt.show()
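The same two-layer tanh network can be written far more compactly in native TF 2.x with Keras; a minimal sketch of an equivalent model (layer sizes, SGD learning rate, and epoch count chosen to mirror the graph version above, not prescribed by it):

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

# Same synthetic data: 200 noisy points along y = x^2
x_data = np.linspace(-0.5, 0.5, 200)[:, np.newaxis].astype(np.float32)
noise = np.random.normal(0, 0.02, x_data.shape).astype(np.float32)
y_data = np.square(x_data) + noise

# Hidden layer of 10 tanh units, tanh output unit
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(1, activation="tanh"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(0.1), loss="mse")
model.fit(x_data, y_data, epochs=200, verbose=0)

pred = model.predict(x_data)  # one prediction per input point
```
Keras handles variable creation, initialization, and the training loop, so there is no explicit session or feed_dict.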