
Tensorflow2 (preparatory course) --- 5.3. Handwritten digit recognition - layer API - convolutional neural network - LeNet

I. Summary

One-sentence summary:

LeNet was originally designed for handwriting recognition, so it is a natural fit for handwritten digit recognition; the test-set accuracy here reaches about 99.25%.
# When using a convolutional network, the channel dimension of the training and test inputs must be made explicit
train_x = tf.reshape(train_x,[-1,28,28,1])
test_x = tf.reshape(test_x,[-1,28,28,1])

# Build the Sequential container
model = tf.keras.Sequential()

# LeNet
model.add(tf.keras.layers.Conv2D(32,(5,5),strides=(1,1),input_shape=(28,28,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Conv2D(64,(5,5),strides=(1,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Model structure
model.summary()


II. Handwritten digit recognition - layer API - convolutional neural network - LeNet

Video location in the course corresponding to this blog:

Steps

1. Read the dataset
2. Split the dataset (into a training set and a test set)
3. Build the model
4. Train the model
5. Evaluate the model

Task

Handwritten digit recognition

In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Read the dataset

The dataset can be loaded directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
print(train_x.shape, train_y.shape)
(60000, 28, 28) (60000,)
In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y)
[7 2 1 ... 4 5 6]
In [6]:
# Maximum pixel value (MNIST images are grayscale, values 0-255)
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into a training set and a test set)

The previous step already performed the train/test split when the data was loaded.

In [7]:
# How to normalize image data:
# simply divide by 255 to map pixel values into [0, 1]
train_x = train_x/255.0
test_x = test_x/255.0
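Side note (not in the original post): NumPy promotes a uint8 array to float64 when it is divided by a Python float, so the arrays above end up as float64. A minimal variant that keeps everything in float32 instead:

# Hypothetical alternative to the cell above (do not run both, or the data
# would be divided by 255 twice):
# train_x = train_x.astype('float32') / 255.0
# test_x = test_x.astype('float32') / 255.0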
In [8]:
# Maximum pixel value after normalization
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
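Aside: the one-hot step can be skipped entirely if the loss is switched to the sparse variant, which consumes integer labels directly. A hedged sketch (not what this notebook does):

# Hypothetical alternative: keep train_y/test_y as integers and compile with
# the sparse loss instead of one-hot encoding the labels.
# model.compile(optimizer='adam',
#               loss='sparse_categorical_crossentropy',
#               metrics=['acc'])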

3. Build the model

In [10]:
# When using a convolutional network, the channel dimension of the training and test inputs must be made explicit
train_x = tf.reshape(train_x,[-1,28,28,1])
test_x = tf.reshape(test_x,[-1,28,28,1])

# Build the Sequential container
model = tf.keras.Sequential()

# LeNet
model.add(tf.keras.layers.Conv2D(32,(5,5),strides=(1,1),input_shape=(28,28,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Conv2D(64,(5,5),strides=(1,1),padding='valid',activation='relu',kernel_initializer='uniform'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))

# Fully connected layers
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 24, 24, 32)        832       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 8, 8, 64)          51264     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 256)               262400    
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 348,682
Trainable params: 348,682
Non-trainable params: 0
_________________________________________________________________
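These counts can be checked by hand. With padding='valid', each 5x5 convolution shrinks the feature map by 4 and each 2x2 max pool halves it (28 -> 24 -> 12 -> 8 -> 4), so Flatten yields 4*4*64 = 1024 features. A Conv2D layer has (kh*kw*in_channels + 1)*filters parameters and a Dense layer has (in_features + 1)*units. A quick verification added for this write-up:

# Parameter count per layer: kernel weights plus one bias per filter/unit.
conv1 = (5*5*1 + 1) * 32      # 832
conv2 = (5*5*32 + 1) * 64     # 51264
dense1 = (4*4*64 + 1) * 256   # 262400
dense2 = (256 + 1) * 128      # 32896
dense3 = (128 + 1) * 10       # 1290
print(conv1 + conv2 + dense1 + dense2 + dense3)  # 348682, matching the summary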
In [11]:
print(train_x.shape)
print(test_x.shape)
(60000, 28, 28, 1)
(10000, 28, 28, 1)

4. Train the model

In [12]:
# Configure the optimizer and loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))
Epoch 1/50
   1/1875 [..............................] - ETA: 1s - loss: 2.3043 - acc: 0.0625WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0020s vs `on_train_batch_end` time: 0.0030s). Check your callbacks.
1875/1875 [==============================] - 8s 4ms/step - loss: 0.1231 - acc: 0.9622 - val_loss: 0.0454 - val_acc: 0.9853
Epoch 2/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0420 - acc: 0.9870 - val_loss: 0.0344 - val_acc: 0.9891
Epoch 3/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0303 - acc: 0.9903 - val_loss: 0.0359 - val_acc: 0.9885
Epoch 4/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0222 - acc: 0.9934 - val_loss: 0.0242 - val_acc: 0.9917
Epoch 5/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0180 - acc: 0.9944 - val_loss: 0.0298 - val_acc: 0.9907
Epoch 6/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0162 - acc: 0.9950 - val_loss: 0.0252 - val_acc: 0.9928
Epoch 7/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0115 - acc: 0.9963 - val_loss: 0.0334 - val_acc: 0.9914
Epoch 8/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0113 - acc: 0.9969 - val_loss: 0.0369 - val_acc: 0.9911
Epoch 9/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0088 - acc: 0.9971 - val_loss: 0.0434 - val_acc: 0.9912
Epoch 10/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0098 - acc: 0.9969 - val_loss: 0.0417 - val_acc: 0.9908
Epoch 11/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0084 - acc: 0.9973 - val_loss: 0.0375 - val_acc: 0.9916
Epoch 12/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0079 - acc: 0.9977 - val_loss: 0.0382 - val_acc: 0.9900
Epoch 13/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0077 - acc: 0.9977 - val_loss: 0.0496 - val_acc: 0.9893
Epoch 14/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0063 - acc: 0.9983 - val_loss: 0.0418 - val_acc: 0.9925
Epoch 15/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0068 - acc: 0.9982 - val_loss: 0.0382 - val_acc: 0.9925
Epoch 16/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0070 - acc: 0.9981 - val_loss: 0.0466 - val_acc: 0.9911
Epoch 17/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0082 - acc: 0.9979 - val_loss: 0.0370 - val_acc: 0.9922
Epoch 18/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0051 - acc: 0.9986 - val_loss: 0.0434 - val_acc: 0.9919
Epoch 19/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0059 - acc: 0.9985 - val_loss: 0.0336 - val_acc: 0.9935
Epoch 20/50
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0055 - acc: 0.9984 - val_loss: 0.0411 - val_acc: 0.9926
Epoch 21/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0058 - acc: 0.9987 - val_loss: 0.0447 - val_acc: 0.9917
Epoch 22/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0054 - acc: 0.9987 - val_loss: 0.0517 - val_acc: 0.9923
Epoch 23/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0040 - acc: 0.9991 - val_loss: 0.0557 - val_acc: 0.9915
Epoch 24/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0058 - acc: 0.9986 - val_loss: 0.0535 - val_acc: 0.9931
Epoch 25/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0032 - acc: 0.9991 - val_loss: 0.0495 - val_acc: 0.9923
Epoch 26/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0053 - acc: 0.9988 - val_loss: 0.0478 - val_acc: 0.9925
Epoch 27/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0044 - acc: 0.9989 - val_loss: 0.0860 - val_acc: 0.9898
Epoch 28/50
1875/1875 [==============================] - 8s 5ms/step - loss: 0.0051 - acc: 0.9987 - val_loss: 0.0793 - val_acc: 0.9909
Epoch 29/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0045 - acc: 0.9989 - val_loss: 0.0672 - val_acc: 0.9922
Epoch 30/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0057 - acc: 0.9986 - val_loss: 0.0617 - val_acc: 0.9930
Epoch 31/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0034 - acc: 0.9995 - val_loss: 0.0563 - val_acc: 0.9919
Epoch 32/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0040 - acc: 0.9991 - val_loss: 0.0866 - val_acc: 0.9917
Epoch 33/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0046 - acc: 0.9990 - val_loss: 0.0692 - val_acc: 0.9917
Epoch 34/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0031 - acc: 0.9992 - val_loss: 0.0722 - val_acc: 0.9914
Epoch 35/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0046 - acc: 0.9989 - val_loss: 0.1011 - val_acc: 0.9912
Epoch 36/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0052 - acc: 0.9989 - val_loss: 0.0941 - val_acc: 0.9919
Epoch 37/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0049 - acc: 0.9990 - val_loss: 0.0770 - val_acc: 0.9915
Epoch 38/50
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0039 - acc: 0.9991 - val_loss: 0.0787 - val_acc: 0.9915
Epoch 39/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0039 - acc: 0.9992 - val_loss: 0.0950 - val_acc: 0.9892
Epoch 40/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0053 - acc: 0.9989 - val_loss: 0.0822 - val_acc: 0.9915
Epoch 41/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0022 - acc: 0.9995 - val_loss: 0.0778 - val_acc: 0.9927
Epoch 42/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0042 - acc: 0.9993 - val_loss: 0.1056 - val_acc: 0.9892
Epoch 43/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0058 - acc: 0.9990 - val_loss: 0.1018 - val_acc: 0.9891
Epoch 44/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0049 - acc: 0.9991 - val_loss: 0.0664 - val_acc: 0.9926
Epoch 45/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0036 - acc: 0.9993 - val_loss: 0.0704 - val_acc: 0.9921
Epoch 46/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0088 - acc: 0.9987 - val_loss: 0.0789 - val_acc: 0.9917
Epoch 47/50
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0044 - acc: 0.9992 - val_loss: 0.0964 - val_acc: 0.9914
Epoch 48/50
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0027 - acc: 0.9995 - val_loss: 0.0798 - val_acc: 0.9931
Epoch 49/50
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0045 - acc: 0.9992 - val_loss: 0.0932 - val_acc: 0.9923
Epoch 50/50
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0020 - acc: 0.9998 - val_loss: 0.1052 - val_acc: 0.9918
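Reading the log: val_loss bottoms out around epoch 4-6 and then climbs steadily while val_acc plateaus near 0.99, a classic sign of overfitting; 50 epochs is far more than this model needs. A hedged sketch of how early stopping could end the run automatically (not used in the original training):

# Hypothetical variant: stop once val_loss has not improved for 3 epochs
# and roll back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                              restore_best_weights=True)
history = model.fit(train_x, train_y, epochs=50,
                    validation_data=(test_x, test_y),
                    callbacks=[early_stop])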
In [13]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [16]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()
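Before inspecting individual predictions, the overall test metrics can be read off in one call with model.evaluate (a quick check, assuming the one-hot test_y from above):

# Evaluate loss and accuracy on the full test set in a single call.
test_loss, test_acc = model.evaluate(test_x, test_y)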

5. Evaluate the model

In [17]:
# Take a look at the model's predictions
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[0.0000000e+00 1.5833359e-37 4.6970574e-37 ... 1.0000000e+00
  0.0000000e+00 8.7477184e-37]
 [0.0000000e+00 0.0000000e+00 1.0000000e+00 ... 0.0000000e+00
  0.0000000e+00 0.0000000e+00]
 [0.0000000e+00 1.0000000e+00 0.0000000e+00 ... 0.0000000e+00
  0.0000000e+00 0.0000000e+00]
 ...
 [0.0000000e+00 0.0000000e+00 0.0000000e+00 ... 0.0000000e+00
  0.0000000e+00 0.0000000e+00]
 [3.6655706e-31 1.1256056e-21 1.7955943e-35 ... 6.1914164e-34
  1.3132753e-18 3.7644802e-25]
 [6.2666515e-32 0.0000000e+00 0.0000000e+00 ... 0.0000000e+00
  0.0000000e+00 0.0000000e+00]]
tf.Tensor(
[[0. 0. 0. ... 1. 0. 0.]
 [0. 0. 1. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(10000, 10), dtype=float32)
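Each row of predict_y is a softmax distribution over the 10 digit classes, so it should sum to (approximately) 1. A quick sanity check (a sketch):

# The predicted probabilities in each row should sum to ~1.
print(np.sum(predict_y[0]))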
In [18]:
# Find the index of the maximum value in each row of predict_y (axis=1)
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# Do the same for the one-hot test labels
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
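With both tensors reduced to class indices, the test accuracy is simply the fraction of matching entries (a short sketch added here):

# Fraction of predicted labels that match the true labels.
acc = tf.reduce_mean(tf.cast(tf.equal(predict_y, test_y), tf.float32))
print(acc.numpy())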
In [19]:
# Reshape the test images back to (N, 28, 28) so they can be plotted
test_x = tf.reshape(test_x,[-1,28,28])

plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
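To make this spot check more informative, the predicted and true labels can be shown in each figure title, e.g. with a small variant of the plotting code above (a sketch):

# Show the first four test images with predicted vs. true labels as titles.
for i in range(4):
    plt.figure()
    plt.imshow(test_x[i])
    plt.title("pred: {}  true: {}".format(predict_y[i].numpy(), test_y[i].numpy()))
plt.show()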