
Tensorflow2 (Preliminary Course) --- 5.1. Handwritten Digit Recognition: Layer Approach

I. Summary

One-sentence summary:

1. Remember to normalize: train_x = train_x/255
2. With one-hot encoded labels, the loss function is categorical_crossentropy
3. Remember to flatten the input data: model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
4. The output-layer activation is softmax: model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Build the container
model = tf.keras.Sequential()
# Input layer
# Flattens the multi-dimensional data (60000, 28, 28) into one dimension,
# i.e. turns each image into a flat vector
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
# Hidden layers
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Model structure
model.summary()


# Configure the optimizer and loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))


1. Why does the following error occur: input shape to have value 784 but received input with shape [32, 28, 28]?

ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 784 but received input with shape [32, 28, 28]

1) The input layer expects 784 features, but the data fed in has shape [32, 28, 28] (the default batch size is 32)
2) The incorrect input layer: model.add(tf.keras.Input(shape=(784,)))
3) The fix is simply to flatten: model.add(tf.keras.layers.Flatten(input_shape=(28,28)))


II. Handwritten Digit Recognition: Layer Approach

Video location in the corresponding course:

Steps

1. Load the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Task

Handwritten digit recognition

In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Load the dataset

Load the dataset directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
print(train_x.shape, train_y.shape)
(60000, 28, 28) (60000,)
In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y)
[7 2 1 ... 4 5 6]
In [6]:
# Maximum pixel value (grayscale, 0-255)
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into training and test sets)

The previous step already did the work of splitting the dataset; here we just preprocess it.

In [7]:
# How to normalize image data:
# simply divide by 255
train_x = train_x/255
test_x = test_x/255
In [8]:
# Maximum pixel value (now normalized)
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
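A side note, not what this notebook does: if you skip the one-hot step and keep the integer labels returned by load_data(), Keras can compute the same loss directly with the sparse variant when compiling the model built below. A minimal sketch:

# Sketch: integer labels + sparse loss, instead of one-hot labels + categorical_crossentropy
# (assumes train_y/test_y were left as the raw integer labels)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])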

3. Build the model

What kind of model should we build?

The input is 28*28-dimensional and the output is a label, so this is a 10-class classification problem.

Do we need one-hot encoding? Since the labels are one-hot encoded, the output is 10-dimensional.

That is, 784->n->10; let's try 784->256->128->10.

In [10]:
# Build the container
model = tf.keras.Sequential()
# Input layer
# Flattens the multi-dimensional data (60000, 28, 28) into one dimension,
# i.e. turns each image into a flat vector
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
# Hidden layers
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 256)               200960    
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 235,146
Trainable params: 235,146
Non-trainable params: 0
_________________________________________________________________
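As a sanity check on the Param # column (a quick recomputation, not part of the original notebook): each Dense layer has inputs*units weights plus units biases.

# Recompute the Param # column by hand: weights + biases per Dense layer
print(784 * 256 + 256)        # dense   -> 200960
print(256 * 128 + 128)        # dense_1 -> 32896
print(128 * 10 + 10)          # dense_2 -> 1290
print(200960 + 32896 + 1290)  # total   -> 235146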

Why the following error occurs:

ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 784 but received input with shape [32, 28, 28]

The input layer expects 784, but the data fed in has shape [32, 28, 28], because the input layer was declared as:

model.add(tf.keras.Input(shape=(784,)))

Here you need Flatten to flatten the input:

model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
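Either approach resolves the mismatch: keep the (28,28) input and let a Flatten layer do the work (as this notebook does), or flatten the data itself up front and keep the 784-dim input layer. A minimal sketch of the second option, with illustrative names (model2, train_x_flat) that are not from the original notebook:

# Alternative fix (sketch): reshape the data instead of adding a Flatten layer
# (assumes train_x/test_x are the normalized (N, 28, 28) arrays from above)
train_x_flat = train_x.reshape(-1, 784)
test_x_flat = test_x.reshape(-1, 784)

model2 = tf.keras.Sequential()
model2.add(tf.keras.Input(shape=(784,)))  # 784-dim input now matches the flattened data
model2.add(tf.keras.layers.Dense(256, activation='relu'))
model2.add(tf.keras.layers.Dense(128, activation='relu'))
model2.add(tf.keras.layers.Dense(10, activation='softmax'))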

4. Train the model

In [11]:
# Configure the optimizer and loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))
Epoch 1/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2060 - acc: 0.9374 - val_loss: 0.1164 - val_acc: 0.9647
Epoch 2/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0869 - acc: 0.9731 - val_loss: 0.0878 - val_acc: 0.9720
Epoch 3/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0583 - acc: 0.9817 - val_loss: 0.0863 - val_acc: 0.9725
Epoch 4/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0434 - acc: 0.9860 - val_loss: 0.0819 - val_acc: 0.9759
Epoch 5/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0347 - acc: 0.9883 - val_loss: 0.0802 - val_acc: 0.9779
Epoch 6/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0278 - acc: 0.9910 - val_loss: 0.0794 - val_acc: 0.9773
Epoch 7/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0225 - acc: 0.9924 - val_loss: 0.0852 - val_acc: 0.9788
Epoch 8/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0201 - acc: 0.9935 - val_loss: 0.0893 - val_acc: 0.9800
Epoch 9/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0190 - acc: 0.9934 - val_loss: 0.0857 - val_acc: 0.9798
Epoch 10/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0157 - acc: 0.9945 - val_loss: 0.1004 - val_acc: 0.9807
Epoch 11/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0138 - acc: 0.9954 - val_loss: 0.1017 - val_acc: 0.9795
Epoch 12/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0147 - acc: 0.9953 - val_loss: 0.0969 - val_acc: 0.9802
Epoch 13/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0117 - acc: 0.9962 - val_loss: 0.1213 - val_acc: 0.9777
Epoch 14/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0128 - acc: 0.9958 - val_loss: 0.1070 - val_acc: 0.9814
Epoch 15/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0140 - acc: 0.9955 - val_loss: 0.0986 - val_acc: 0.9821
Epoch 16/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0095 - acc: 0.9969 - val_loss: 0.1198 - val_acc: 0.9776
Epoch 17/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0090 - acc: 0.9969 - val_loss: 0.1189 - val_acc: 0.9800
Epoch 18/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0105 - acc: 0.9964 - val_loss: 0.1233 - val_acc: 0.9805
Epoch 19/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0098 - acc: 0.9971 - val_loss: 0.1299 - val_acc: 0.9800
Epoch 20/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0109 - acc: 0.9964 - val_loss: 0.1207 - val_acc: 0.9814
Epoch 21/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0080 - acc: 0.9976 - val_loss: 0.1387 - val_acc: 0.9811
Epoch 22/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0093 - acc: 0.9973 - val_loss: 0.1303 - val_acc: 0.9805
Epoch 23/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0091 - acc: 0.9975 - val_loss: 0.1712 - val_acc: 0.9780
Epoch 24/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0083 - acc: 0.9977 - val_loss: 0.1386 - val_acc: 0.9798
Epoch 25/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0076 - acc: 0.9977 - val_loss: 0.1414 - val_acc: 0.9795
Epoch 26/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0083 - acc: 0.9978 - val_loss: 0.1428 - val_acc: 0.9802
Epoch 27/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0073 - acc: 0.9981 - val_loss: 0.1520 - val_acc: 0.9818
Epoch 28/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0098 - acc: 0.9975 - val_loss: 0.1469 - val_acc: 0.9784
Epoch 29/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0073 - acc: 0.9979 - val_loss: 0.1378 - val_acc: 0.9824
Epoch 30/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0066 - acc: 0.9983 - val_loss: 0.1421 - val_acc: 0.9825
Epoch 31/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0078 - acc: 0.9979 - val_loss: 0.1892 - val_acc: 0.9784
Epoch 32/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0077 - acc: 0.9978 - val_loss: 0.2032 - val_acc: 0.9784
Epoch 33/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0095 - acc: 0.9974 - val_loss: 0.1809 - val_acc: 0.9794
Epoch 34/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0055 - acc: 0.9984 - val_loss: 0.1615 - val_acc: 0.9799
Epoch 35/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0052 - acc: 0.9987 - val_loss: 0.1829 - val_acc: 0.9774
Epoch 36/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0127 - acc: 0.9973 - val_loss: 0.1849 - val_acc: 0.9783
Epoch 37/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0065 - acc: 0.9985 - val_loss: 0.1662 - val_acc: 0.9818
Epoch 38/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0075 - acc: 0.9980 - val_loss: 0.1702 - val_acc: 0.9817
Epoch 39/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0066 - acc: 0.9982 - val_loss: 0.1720 - val_acc: 0.9793
Epoch 40/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0051 - acc: 0.9985 - val_loss: 0.1934 - val_acc: 0.9805
Epoch 41/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0069 - acc: 0.9984 - val_loss: 0.1886 - val_acc: 0.9802
Epoch 42/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0088 - acc: 0.9979 - val_loss: 0.1895 - val_acc: 0.9828
Epoch 43/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0050 - acc: 0.9987 - val_loss: 0.1910 - val_acc: 0.9819
Epoch 44/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0070 - acc: 0.9982 - val_loss: 0.1919 - val_acc: 0.9792
Epoch 45/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0058 - acc: 0.9985 - val_loss: 0.1940 - val_acc: 0.9813
Epoch 46/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0081 - acc: 0.9980 - val_loss: 0.1878 - val_acc: 0.9800
Epoch 47/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0070 - acc: 0.9984 - val_loss: 0.2207 - val_acc: 0.9799
Epoch 48/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0045 - acc: 0.9989 - val_loss: 0.1928 - val_acc: 0.9817
Epoch 49/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0082 - acc: 0.9984 - val_loss: 0.2355 - val_acc: 0.9791
Epoch 50/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0058 - acc: 0.9987 - val_loss: 0.1938 - val_acc: 0.9819
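Notice that while training accuracy keeps climbing, val_loss bottoms out around epoch 6 (~0.079) and drifts up to ~0.19 by epoch 50: the model is overfitting. One common remedy is early stopping; a sketch (not part of the original notebook; early_stop is an illustrative name):

# Sketch: stop training once val_loss stops improving, and keep the best weights
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',        # watch validation loss
    patience=5,                # tolerate 5 epochs without improvement
    restore_best_weights=True  # roll back to the best epoch's weights
)
history = model.fit(train_x, train_y, epochs=50,
                    validation_data=(test_x, test_y),
                    callbacks=[early_stop])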
In [12]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [13]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()

5. Evaluate the model

In [16]:
# Check the model's predictions on the test set
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[1.76423459e-18 1.08762158e-20 2.06955323e-23 ... 1.00000000e+00
  2.93631932e-21 2.53346210e-18]
 [0.00000000e+00 2.13415075e-36 1.00000000e+00 ... 0.00000000e+00
  0.00000000e+00 0.00000000e+00]
 [1.30913644e-29 1.00000000e+00 8.99171200e-17 ... 5.45806985e-20
  3.10162455e-18 3.20532428e-24]
 ...
 [0.00000000e+00 2.26332978e-38 6.84578581e-38 ... 4.47269063e-29
  1.40015925e-30 3.70714689e-34]
 [0.00000000e+00 0.00000000e+00 0.00000000e+00 ... 0.00000000e+00
  1.09249136e-34 0.00000000e+00]
 [0.00000000e+00 0.00000000e+00 3.67533234e-37 ... 0.00000000e+00
  1.03161533e-34 0.00000000e+00]]
tf.Tensor(
[[0. 0. 0. ... 1. 0. 0.]
 [0. 0. 1. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(10000, 10), dtype=float32)
In [17]:
# Take the index of the max value in predict_y along each row (axis=1)
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# Do the same for the one-hot test labels
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
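To turn this spot check into a single number, a small addition (not in the original notebook) comparing the two tensors:

# Fraction of test samples where the predicted class matches the label
accuracy = np.mean(predict_y.numpy() == test_y.numpy())
print(accuracy)  # should be close to the final val_acc (~0.98)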
In [18]:
plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
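To pair the four images above with their labels, a small addition (not in the original notebook):

# Predicted vs. true labels for the four test images shown above
# (per the argmax output earlier, the first three are 7, 2, 1)
print("predicted:", predict_y[:4].numpy())
print("true:     ", test_y[:4].numpy())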