TensorFlow 2 (Preparatory Course) --- 6.1 Fashion MNIST (Clothing Classification) Recognition - Layer Approach

I. Summary

One-sentence summary:

The Fashion MNIST dataset is very similar to the MNIST dataset, so the same model used for MNIST can be reused directly, and it trains reasonably well.


II. Fashion MNIST (Clothing Classification) Recognition - Layer Approach

Video location for the corresponding course:


Steps

1. Load the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Task

Fashion MNIST (clothing classification)

Label  Description
0      T-shirt/top
1      Trouser
2      Pullover
3      Dress
4      Coat
5      Sandal
6      Shirt
7      Sneaker
8      Bag
9      Ankle boot
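For readability when checking predictions later, these label names can be kept in a plain Python list (a small convenience helper that is not in the original notebook; the names follow the table above):

# Hypothetical helper: class names indexed by label, matching the table above
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']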
In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Load the dataset

The dataset can be loaded directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.fashion_mnist.load_data()
print(train_x.shape, train_y.shape)
(60000, 28, 28) (60000,)
In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y)
[9 2 1 ... 8 1 5]
In [6]:
# Maximum pixel value (grayscale images, range 0-255)
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into training and test sets)

The previous step already split the data: load_data returns separate training and test sets. This step preprocesses them.

In [7]:
# How to normalize image data:
# simply dividing by 255 maps the pixel values into [0, 1]
train_x = train_x/255
test_x = test_x/255
In [8]:
# Maximum pixel value after normalization
np.max(train_x[0])
Out[8]:
1.0
In [9]:
train_y = tf.one_hot(train_y, depth=10)
test_y = tf.one_hot(test_y, depth=10)
print(test_y.shape)
(10000, 10)
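As an aside (a sketch, not part of the original notebook): the one-hot step can be skipped entirely by keeping the integer labels and compiling with sparse_categorical_crossentropy, which computes the same loss on integer targets:

# Alternative sketch, assuming train_y/test_y are left as integer labels 0-9
# model.compile(optimizer='adam',
#               loss='sparse_categorical_crossentropy',
#               metrics=['acc'])
# history = model.fit(train_x, train_y, epochs=50,
#                     validation_data=(test_x, test_y))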

3. Build the model

What kind of model should we build?

The input is 28*28 pixels and the output is a label, so this is a 10-class classification problem.

Do we need one-hot encoding? With one-hot encoding, the output is 10-dimensional.

That gives 784->n->10; let's try 784->256->128->10.

In [10]:
# Build the Sequential container
model = tf.keras.Sequential()
# Input layer:
# flattens the multi-dimensional data (60000, 28, 28) into one dimension,
# i.e. turns each image into a 784-element vector
model.add(tf.keras.layers.Flatten(input_shape=(28,28))) 
# Hidden layers
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(10,activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 256)               200960    
_________________________________________________________________
dense_1 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 235,146
Trainable params: 235,146
Non-trainable params: 0
_________________________________________________________________
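The Param # column above can be verified by hand: a Dense layer has inputs * units weights plus units biases.

# Verifying the parameter counts in the summary above
print(784*256 + 256)          # 200960  (flatten -> dense)
print(256*128 + 128)          # 32896   (dense -> dense_1)
print(128*10 + 10)            # 1290    (dense_1 -> dense_2)
print(200960 + 32896 + 1290)  # 235146 total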

Cause of the following error:

ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 784 but received input with shape [32, 28, 28]

The input layer was declared as 784 features, but the data fed in has shape [32, 28, 28]:
model.add(tf.keras.Input(shape=(784,)))

The fix is to use Flatten to flatten the input instead:

model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
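Equivalently (a sketch, assuming you prefer reshaping the data over adding a layer), the arrays themselves could be flattened to match a 784-dimensional Input layer:

# Alternative: reshape the data instead of using a Flatten layer
# train_x = train_x.reshape(-1, 784)
# test_x = test_x.reshape(-1, 784)
# model.add(tf.keras.Input(shape=(784,)))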

4. Train the model

In [11]:
# Configure the optimizer and loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))
Epoch 1/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.4801 - acc: 0.8259 - val_loss: 0.4149 - val_acc: 0.8515
Epoch 2/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.3594 - acc: 0.8680 - val_loss: 0.3878 - val_acc: 0.8592
Epoch 3/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.3251 - acc: 0.8788 - val_loss: 0.3730 - val_acc: 0.8622
Epoch 4/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.3025 - acc: 0.8885 - val_loss: 0.3543 - val_acc: 0.8733
Epoch 5/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2844 - acc: 0.8935 - val_loss: 0.3641 - val_acc: 0.8651
Epoch 6/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2687 - acc: 0.8989 - val_loss: 0.3392 - val_acc: 0.8756
Epoch 7/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2542 - acc: 0.9038 - val_loss: 0.3276 - val_acc: 0.8848
Epoch 8/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2458 - acc: 0.9071 - val_loss: 0.3456 - val_acc: 0.8779
Epoch 9/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2340 - acc: 0.9117 - val_loss: 0.3411 - val_acc: 0.8800
Epoch 10/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2237 - acc: 0.9159 - val_loss: 0.3364 - val_acc: 0.8874
Epoch 11/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2161 - acc: 0.9185 - val_loss: 0.3397 - val_acc: 0.8845
Epoch 12/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2074 - acc: 0.9208 - val_loss: 0.3273 - val_acc: 0.8883
Epoch 13/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2004 - acc: 0.9234 - val_loss: 0.3458 - val_acc: 0.8884
Epoch 14/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1944 - acc: 0.9255 - val_loss: 0.3715 - val_acc: 0.8823
Epoch 15/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1883 - acc: 0.9280 - val_loss: 0.3680 - val_acc: 0.8912
Epoch 16/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1811 - acc: 0.9305 - val_loss: 0.3801 - val_acc: 0.8839
Epoch 17/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1776 - acc: 0.9317 - val_loss: 0.3462 - val_acc: 0.8894
Epoch 18/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1711 - acc: 0.9354 - val_loss: 0.3760 - val_acc: 0.8907
Epoch 19/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1657 - acc: 0.9364 - val_loss: 0.3665 - val_acc: 0.8854
Epoch 20/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1626 - acc: 0.9369 - val_loss: 0.3749 - val_acc: 0.8903
Epoch 21/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1575 - acc: 0.9400 - val_loss: 0.3936 - val_acc: 0.8871
Epoch 22/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1508 - acc: 0.9414 - val_loss: 0.3974 - val_acc: 0.8934
Epoch 23/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1467 - acc: 0.9437 - val_loss: 0.4095 - val_acc: 0.8912
Epoch 24/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1471 - acc: 0.9434 - val_loss: 0.4419 - val_acc: 0.8839
Epoch 25/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1393 - acc: 0.9462 - val_loss: 0.4150 - val_acc: 0.8892
Epoch 26/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1382 - acc: 0.9482 - val_loss: 0.4443 - val_acc: 0.8868
Epoch 27/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1345 - acc: 0.9480 - val_loss: 0.4671 - val_acc: 0.8879
Epoch 28/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1311 - acc: 0.9494 - val_loss: 0.4456 - val_acc: 0.8912
Epoch 29/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1239 - acc: 0.9513 - val_loss: 0.4425 - val_acc: 0.8892
Epoch 30/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1269 - acc: 0.9516 - val_loss: 0.4403 - val_acc: 0.8961
Epoch 31/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1208 - acc: 0.9534 - val_loss: 0.4692 - val_acc: 0.8951
Epoch 32/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1200 - acc: 0.9538 - val_loss: 0.5089 - val_acc: 0.8899
Epoch 33/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1174 - acc: 0.9546 - val_loss: 0.5055 - val_acc: 0.8899
Epoch 34/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1111 - acc: 0.9574 - val_loss: 0.4719 - val_acc: 0.8938
Epoch 35/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.1105 - acc: 0.9578 - val_loss: 0.4711 - val_acc: 0.8903
Epoch 36/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1133 - acc: 0.9570 - val_loss: 0.5429 - val_acc: 0.8895
Epoch 37/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1074 - acc: 0.9583 - val_loss: 0.5263 - val_acc: 0.8911
Epoch 38/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1049 - acc: 0.9593 - val_loss: 0.5070 - val_acc: 0.8912
Epoch 39/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1026 - acc: 0.9608 - val_loss: 0.5373 - val_acc: 0.8923
Epoch 40/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1002 - acc: 0.9623 - val_loss: 0.5446 - val_acc: 0.8948
Epoch 41/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.1011 - acc: 0.9614 - val_loss: 0.5109 - val_acc: 0.8930
Epoch 42/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0939 - acc: 0.9635 - val_loss: 0.5364 - val_acc: 0.8952
Epoch 43/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0959 - acc: 0.9626 - val_loss: 0.5665 - val_acc: 0.8896
Epoch 44/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0945 - acc: 0.9634 - val_loss: 0.5632 - val_acc: 0.8940
Epoch 45/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0936 - acc: 0.9640 - val_loss: 0.6115 - val_acc: 0.8866
Epoch 46/50
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0900 - acc: 0.9650 - val_loss: 0.5918 - val_acc: 0.8921
Epoch 47/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0870 - acc: 0.9669 - val_loss: 0.6283 - val_acc: 0.8902
Epoch 48/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0885 - acc: 0.9674 - val_loss: 0.6726 - val_acc: 0.8918
Epoch 49/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0903 - acc: 0.9664 - val_loss: 0.6904 - val_acc: 0.8927
Epoch 50/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0853 - acc: 0.9675 - val_loss: 0.6193 - val_acc: 0.8872
In [12]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [13]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()
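The curves above show the training loss falling steadily (0.48 down to about 0.085) while the validation loss bottoms out around epoch 7 (about 0.33) and then climbs past 0.6: the model is overfitting. A minimal sketch of one common remedy, early stopping, assuming the same model and data as above (this callback was not used in the original run):

# Stop training when val_loss stops improving and keep the best weights
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                              restore_best_weights=True)
history = model.fit(train_x, train_y, epochs=50,
                    validation_data=(test_x, test_y),
                    callbacks=[early_stop])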

5. Evaluate the model

In [16]:
# Check the model's predictions
predict_y = model.predict(test_x)
print(predict_y)
print(test_y)
[[3.6218536e-14 2.4841785e-16 1.2933575e-20 ... 8.3737692e-09
  4.8093832e-21 1.0000000e+00]
 [1.6062540e-08 7.8736037e-23 9.9997473e-01 ... 1.9261415e-38
  1.2243606e-19 0.0000000e+00]
 [0.0000000e+00 1.0000000e+00 0.0000000e+00 ... 0.0000000e+00
  0.0000000e+00 0.0000000e+00]
 ...
 [8.2803665e-31 0.0000000e+00 2.8568190e-31 ... 0.0000000e+00
  1.0000000e+00 0.0000000e+00]
 [0.0000000e+00 1.0000000e+00 0.0000000e+00 ... 0.0000000e+00
  5.9711827e-38 0.0000000e+00]
 [8.2075208e-19 4.9470513e-25 5.6982163e-17 ... 5.6584887e-10
  1.3431151e-13 2.8223316e-15]]
tf.Tensor(
[[0. 0. 0. ... 0. 0. 1.]
 [0. 0. 1. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 1. 0.]
 [0. 1. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(10000, 10), dtype=float32)
In [17]:
# Find the index of the max value in predict_y along each row (axis=1)
predict_y = tf.argmax(predict_y, axis=1)
print(predict_y)
# Do the same for the one-hot test labels
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([9 2 1 ... 8 1 5], shape=(10000,), dtype=int64)
tf.Tensor([9 2 1 ... 8 1 5], shape=(10000,), dtype=int64)
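To quantify the agreement between the two tensors (a small addition, not in the original notebook), the overall test accuracy can be computed directly; it should be close to the final val_acc above (about 0.887):

# Fraction of test samples where the predicted label equals the true label
acc = tf.reduce_mean(tf.cast(tf.equal(predict_y, test_y), tf.float32))
print(acc.numpy())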
In [18]:
plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
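Using the class_names list sketched earlier, the predictions for these four images can be printed alongside the true labels (int() converts the scalar tensors for list indexing):

# Compare predicted and actual class names for the first four test images
for i in range(4):
    print('predicted:', class_names[int(predict_y[i])],
          '| actual:', class_names[int(test_y[i])])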