
TensorFlow2_200729 Series --- 22. CIFAR-10 Classification in Practice

1. Summary

One-sentence summary:

The CIFAR-10 and CIFAR-100 datasets are labeled subsets of the 80 million tiny images dataset. URL: http://www.cs.toronto.edu/~kriz/cifar.html

1. Where does the CIFAR-10 data go after downloading?

Loading goes through Keras's datasets utility, so the downloaded files end up in the Keras cache: the C:\Users\xxx\.keras\datasets directory.
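A quick way to confirm the cache location on any platform (a minimal sketch; by default Keras resolves ~ to the current user's home directory):

import os

# Keras caches downloaded datasets under ~/.keras/datasets by default
print(os.path.expanduser(os.path.join('~', '.keras', 'datasets')))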

2. Common ways to improve the network (in this example)?

1. Normalize the data to [-1, 1]: x = 2 * tf.cast(x, dtype=tf.float32) / 255. - 1. (see the sanity check below)
2. Widen each layer: from 32*32*3->256->128->64->32->10 to 32*32*3->256->256->256->256->10
3. Add more layers
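A quick sanity check of the normalization formula in item 1 (a minimal sketch):

import tensorflow as tf

# the formula maps pixel value 0 to -1.0, 128 to roughly 0.0, and 255 to +1.0
x = tf.constant([0, 128, 255], dtype=tf.uint8)
x = 2 * tf.cast(x, dtype=tf.float32) / 255. - 1.
print(x.numpy())  # approximately [-1.  0.004  1.]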

2. CIFAR-10 Classification in Practice

Video location for the corresponding course:


import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics
from tensorflow import keras



# normalization
def preprocess(x, y):
    # common optimization 1: scale [0, 255] => [-1, 1]
    # zero-centered inputs in [-1, 1] generally train better
    x = 2 * tf.cast(x, dtype=tf.float32) / 255. - 1.
    y = tf.cast(y, dtype=tf.int32)
    return x, y


batchsz = 128
# [50k, 32, 32, 3], [10k, 1]
(x, y), (x_val, y_val) = datasets.cifar10.load_data()
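# labels arrive with shape (N, 1); squeeze them to (N,) before one-hot encoding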
y = tf.squeeze(y)
y_val = tf.squeeze(y_val)
y = tf.one_hot(y, depth=10) # [50k, 10]
y_val = tf.one_hot(y_val, depth=10) # [10k, 10]
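# one-hot float targets pair with CategoricalCrossentropy(from_logits=True) below;
# integer labels would instead use SparseCategoricalCrossentropy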
print('datasets:', x.shape, y.shape, x_val.shape, y_val.shape, x.min(), x.max())


train_db = tf.data.Dataset.from_tensor_slices((x,y))
train_db = train_db.map(preprocess).shuffle(10000).batch(batchsz)
test_db = tf.data.Dataset.from_tensor_slices((x_val, y_val))
test_db = test_db.map(preprocess).batch(batchsz)


sample = next(iter(train_db))
print('batch:', sample[0].shape, sample[1].shape)


class MyDense(layers.Layer):
    # to replace standard layers.Dense()
    def __init__(self, inp_dim, outp_dim):
        super(MyDense, self).__init__()

        # add_weight replaces the deprecated Layer.add_variable call
        self.kernel = self.add_weight('w', [inp_dim, outp_dim])
        # self.bias = self.add_weight('b', [outp_dim])

    def call(self, inputs, training=None):
        # plain linear transform; the bias term is intentionally omitted
        x = inputs @ self.kernel
        return x

class MyNetwork(keras.Model):

    def __init__(self):
        super(MyNetwork, self).__init__()

        self.fc1 = MyDense(32*32*3, 256)
        self.fc2 = MyDense(256, 128)
        self.fc3 = MyDense(128, 64)
        self.fc4 = MyDense(64, 32)
        self.fc5 = MyDense(32, 10)



    def call(self, inputs, training=None):
        """

        :param inputs: [b, 32, 32, 3]
        :param training:
        :return:
        """
        x = tf.reshape(inputs, [-1, 32*32*3])
        # [b, 32*32*3] => [b, 256]
        x = self.fc1(x)
        x = tf.nn.relu(x)
        # [b, 256] => [b, 128]
        x = self.fc2(x)
        x = tf.nn.relu(x)
        # [b, 128] => [b, 64]
        x = self.fc3(x)
        x = tf.nn.relu(x)
        # [b, 64] => [b, 32]
        x = self.fc4(x)
        x = tf.nn.relu(x)
        # [b, 32] => [b, 10]
        x = self.fc5(x)

        return x


network = MyNetwork()
network.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.fit(train_db, epochs=15, validation_data=test_db, validation_freq=1)

network.evaluate(test_db)
network.save_weights('ckpt/weights.ckpt')
del network
print('saved to ckpt/weights.ckpt')


network = MyNetwork()
network.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                loss=tf.losses.CategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
network.load_weights('ckpt/weights.ckpt')
print('loaded weights from file.')
network.evaluate(test_db)
datasets: (50000, 32, 32, 3) (50000, 10) (10000, 32, 32, 3) (10000, 10) 0 255
batch: (128, 32, 32, 3) (128, 10)
Epoch 1/15
391/391 [==============================] - 7s 19ms/step - loss: 1.7321 - accuracy: 0.3855 - val_loss: 1.5678 - val_accuracy: 0.4462
Epoch 2/15
391/391 [==============================] - 7s 18ms/step - loss: 1.4979 - accuracy: 0.4751 - val_loss: 1.4681 - val_accuracy: 0.4847
Epoch 3/15
391/391 [==============================] - 7s 18ms/step - loss: 1.3924 - accuracy: 0.5109 - val_loss: 1.4424 - val_accuracy: 0.4871
Epoch 4/15
391/391 [==============================] - 7s 18ms/step - loss: 1.3018 - accuracy: 0.5441 - val_loss: 1.4181 - val_accuracy: 0.5042
Epoch 5/15
391/391 [==============================] - 7s 19ms/step - loss: 1.2384 - accuracy: 0.5631 - val_loss: 1.4032 - val_accuracy: 0.5112
Epoch 6/15
391/391 [==============================] - 7s 18ms/step - loss: 1.1752 - accuracy: 0.5861 - val_loss: 1.3995 - val_accuracy: 0.5159
Epoch 7/15
391/391 [==============================] - 7s 19ms/step - loss: 1.1150 - accuracy: 0.6078 - val_loss: 1.3939 - val_accuracy: 0.5165
Epoch 8/15
391/391 [==============================] - 7s 19ms/step - loss: 1.0576 - accuracy: 0.6304 - val_loss: 1.3930 - val_accuracy: 0.5282
Epoch 9/15
391/391 [==============================] - 7s 18ms/step - loss: 1.0057 - accuracy: 0.6437 - val_loss: 1.4442 - val_accuracy: 0.5223
Epoch 10/15
391/391 [==============================] - 7s 19ms/step - loss: 0.9560 - accuracy: 0.6630 - val_loss: 1.4735 - val_accuracy: 0.5197
Epoch 11/15
391/391 [==============================] - 7s 19ms/step - loss: 0.9000 - accuracy: 0.6825 - val_loss: 1.5465 - val_accuracy: 0.5133
Epoch 12/15
391/391 [==============================] - 7s 19ms/step - loss: 0.8528 - accuracy: 0.6985 - val_loss: 1.5347 - val_accuracy: 0.5237
Epoch 13/15
391/391 [==============================] - 7s 19ms/step - loss: 0.8055 - accuracy: 0.7160 - val_loss: 1.5859 - val_accuracy: 0.5199
Epoch 14/15
391/391 [==============================] - 7s 19ms/step - loss: 0.7713 - accuracy: 0.7276 - val_loss: 1.6326 - val_accuracy: 0.5176
Epoch 15/15
391/391 [==============================] - 7s 19ms/step - loss: 0.7253 - accuracy: 0.7434 - val_loss: 1.6540 - val_accuracy: 0.5130
79/79 [==============================] - 1s 13ms/step - loss: 1.6540 - accuracy: 0.5130
saved to ckpt/weights.ckpt
loaded weights from file.
79/79 [==============================] - 1s 13ms/step - loss: 1.6540 - accuracy: 0.5130
Out[1]:
[1.654010534286499, 0.5130000114440918]
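Note that evaluate returns [loss, accuracy]. The logs also show clear overfitting: training accuracy climbs to 0.74 while validation accuracy plateaus around 0.51 and validation loss rises after epoch 8. One standard remedy, not used in the code above, is dropout between the fully connected layers; a minimal sketch (the 0.5 rate is an assumption to be tuned):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# hypothetical variant of the network above with dropout to curb overfitting
model = keras.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),  # assumed rate; tune against validation accuracy
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10),     # raw logits, matching from_logits=True above
])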

Detailed walkthrough

In [1]:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras import datasets, layers, optimizers, Sequential, metrics
from tensorflow import keras


# normalization
def preprocess(x, y):
    # common optimization 1: scale [0, 255] => [-1, 1]
    # zero-centered inputs in [-1, 1] generally train better
    x = 2 * tf.cast(x, dtype=tf.float32) / 255. - 1.
    y = tf.cast(y, dtype=tf.int32)
    return x, y


batchsz = 128
# [50k, 32, 32, 3], [10k, 1]
(x, y), (x_val, y_val) = datasets.cifar10.load_data()
In [2]:
print(x.shape)
print(y.shape)
print(x_val.shape)
print(y_val.shape)
(50000, 32, 32, 3)
(50000, 1)
(10000, 32, 32, 3)
(10000, 1)
In [3]:
# turn the (50000, 1) labels into shape (50000,)
y = tf.squeeze(y)
print(y.shape)
y_val = tf.squeeze(y_val)
y = tf.one_hot(y, depth=10) # [50k, 10]
print(y)
y_val = tf.one_hot(y_val, depth=10) # [10k, 10]
print('datasets:', x.shape, y.shape, x_val.shape, y_val.shape, x.min(), x.max())
(50000,)
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 1.]
 [0. 0. 0. ... 0. 0. 1.]
 ...
 [0. 0. 0. ... 0. 0. 1.]
 [0. 1. 0. ... 0. 0. 0.]
 [0. 1. 0. ... 0. 0. 0.]], shape=(50000, 10), dtype=float32)
datasets: (50000, 32, 32, 3) (50000, 10) (10000, 32, 32, 3) (10000, 10) 0 255
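If the integer class ids are needed again later (e.g. for a confusion matrix), tf.argmax inverts the one-hot encoding; a minimal sketch:

# recover integer labels from the one-hot matrix
labels = tf.argmax(y, axis=1)
print(labels.shape)  # (50000,)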
posted @ 2020-08-06 11:41 范仁义