Keras (Part 2): The Five Pretrained Models in Applications, and a Walkthrough of VGG16

Original article: http://www.one2know.cn/keras3/

The Five Pretrained Models in Applications + A Brief Look at h5py

  • Keras's applications module (Application) provides Keras models with pretrained weights; these models can be used for prediction, feature extraction and fine-tuning.
    The parameters of the following models are covered later in this post:
    Xception
    VGG16
    VGG19
    ResNet50
    InceptionV3

    All of these models (except Xception) are compatible with both Theano and TensorFlow, and are configured automatically from the image data format set in ~/.keras/keras.json. For example, if "image_data_format": "channels_last" is set, loaded models are built with TensorFlow's dimension ordering, i.e. (height, width, channels).
    Official download location for the models:
    https://github.com/fchollet/deep-learning-models/releases
  • Differences between th and tf
    Keras offers two backends, Theano and TensorFlow.
    Most th and tf functionality is wrapped uniformly by the backend module, but there are still notable conflicts between the two, so sometimes you need to pay attention to which backend Keras is running on. The main conflicts are:
    dim_ordering, i.e. the dimension ordering. For a 224x224 colour image, Theano's ordering is (3, 224, 224), channel axis first, whereas tf's ordering is (224, 224, 3), channel axis last.
    Data format: 'channels_last' corresponds to the original 'tf' and 'channels_first' to the original 'th'. For a 128x128 RGB image, 'channels_first' organises the data as (3, 128, 128), while 'channels_last' organises it as (128, 128, 3); a quick check is sketched right below.
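
A minimal sketch of that check (the 128x128 image here is just a dummy array used for illustration):

from keras import backend as K
import numpy as np

print(K.image_data_format())            # 'channels_last' (tf) or 'channels_first' (th)

img = np.zeros((128, 128, 3))           # one RGB image laid out as channels_last
if K.image_data_format() == 'channels_first':
    img = np.transpose(img, (2, 0, 1))  # rearrange to (3, 128, 128)
print(img.shape)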
  • notop models
    Versions published without the final 3 fully connected layers ("no top"); this variant is open-sourced specifically for fine-tuning.
  • A brief look at h5py
    Keras's pretrained models are stored in HDF5 format (read with the h5py library), with the .h5 suffix.
    An h5py.File object behaves like a Python dictionary, so we can inspect all of its keys.
    Input:
    import h5py
    file = h5py.File('.../notop.h5', 'r')
    View the number of layers (stored as a file attribute in older Keras weight files) and the keys:
    print(file.attrs['nb_layers'])
    print(list(file.keys()))
    To see which layers the file contains:
for name in file:
    print(name)
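
To go one level deeper and list the actual weight arrays stored under each layer, here is a minimal sketch using h5py's visititems (the group and dataset names depend on the particular weight file):

import h5py

file = h5py.File('.../notop.h5', 'r')

def show(name, obj):
    # print every dataset in the file together with its shape
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape)

file.visititems(show)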
  • Official example: ImageNet classification with the ResNet50 network
    Identify the species of the elephant in a photo (elephant.jpg):
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input,decode_predictions
import numpy as np

# load the weights from a local copy of the file
# (the standard example passes weights='imagenet' to download them automatically)
model = ResNet50(weights=r'..\Model\resnet50_weights_tf_dim_ordering_tf_kernels.h5')

img_path = 'elephant.jpg'
img = image.load_img(img_path,target_size=(224,224))
# the loaded model expects inputs of shape (224, 224, 3)
x = image.img_to_array(img)
x = np.expand_dims(x,axis=0)
x = preprocess_input(x)

preds = model.predict(x)
print('Predicted:',decode_predictions(preds,top=3)[0])

Output:

Predicted: [('n02504458', 'African_elephant', 0.603124), ('n02504013', 'Indian_elephant', 0.334439), ('n01871265', 'tusker', 0.062180385)]
  • The five models (a short feature-extraction sketch follows this list)
    1. Xception: only usable with the TensorFlow backend; currently this model only supports the channels_last dimension ordering (height, width, channels).
    Default input image size: 299x299
    keras.applications.xception.Xception(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
    2. VGG16: usable with both the Theano and TensorFlow backends, and accepts both channels_first and channels_last input orderings.
    Default input image size: 224x224
    keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
    3. VGG19
    Usable with both the Theano and TensorFlow backends, and accepts both channels_first and channels_last input orderings.
    Default input image size: 224x224
    keras.applications.vgg19.VGG19(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
    4. ResNet50
    Usable with both the Theano and TensorFlow backends, and accepts both channels_first and channels_last input orderings.
    Default input image size: 224x224
    keras.applications.resnet50.ResNet50(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
    5. InceptionV3
    Usable with both the Theano and TensorFlow backends, and accepts both channels_first and channels_last input orderings.
    Default input image size: 299x299
    keras.applications.inception_v3.InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
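
For example, a notop model with global average pooling can be used directly as a feature extractor. A minimal sketch (it reuses the elephant.jpg image from the example above):

from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image
import numpy as np

model = VGG16(include_top=False, weights='imagenet', pooling='avg')

img = image.load_img('elephant.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = model.predict(x)
print(features.shape)   # (1, 512): one 512-dimensional feature vector per image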

keras-applications VGG16 walkthrough: the functional API

  • VGG16's default input data format is channels_last
from __future__ import print_function

import numpy as np
import warnings

from keras.models import Model
from keras.layers import Flatten,Dense,Input,Conv2D
from keras.layers import MaxPooling2D,GlobalMaxPooling2D,GlobalAveragePooling2D
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras import backend as K
from keras.applications.imagenet_utils import decode_predictions
# decode_predictions outputs the 5 highest-probability predictions as (class id, class description, probability) tuples: decode_predictions(y_pred)
from keras.applications.imagenet_utils import preprocess_input
# preprocessing: makes the image encoding follow the model's convention (e.g. RGB vs. BGR channel order): preprocess_input(x)
from keras_applications.imagenet_utils import _obtain_input_shape
# determines a proper input shape for the model from the user's input_shape and the data format
from keras.engine.topology import get_source_inputs

WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5'
WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'

def VGG16(include_top=True, weights='imagenet',
          input_tensor=None, input_shape=None,
          pooling=None,
          classes=1000):
    # check that the weights and classes arguments are consistent
    if weights not in {'imagenet', None}:
        raise ValueError('The `weights` argument should be either '
                         '`None` (random initialization) or `imagenet` '
                         '(pre-training on ImageNet).')

    if weights == 'imagenet' and include_top and classes != 1000:
        raise ValueError('If using `weights` as imagenet with `include_top`'
                         ' as true, `classes` should be 1000')

    # determine the image dimensions, similar to the transform step in caffe
    # Determine proper input shape
    input_shape = _obtain_input_shape(input_shape,
                                      default_size=224,
                                      min_size=48,
                                      # the minimum width/height the model accepts
                                      data_format=K.image_data_format(),
                                      # the image data format in use
                                      require_flatten=include_top)
                                      # whether a Flatten layer connects the features to the classifier

    # set up the input tensor
    if input_tensor is None:
        img_input = Input(shape=input_shape)
        # Input here creates a tensor in Keras's own format
    else:
        if not K.is_keras_tensor(input_tensor):
            img_input = Input(tensor=input_tensor, shape=input_shape)
        else:
            img_input = input_tensor
        # if a tensor is supplied, two steps are needed:
        # first check whether it is a Keras tensor with is_keras_tensor,
        # then (later) recover its source inputs with get_source_inputs(input_tensor)

    # define the network architecture (the counterpart of a caffe prototxt)
    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)

    if include_top:
        # Classification block
        x = Flatten(name='flatten')(x)
        x = Dense(4096, activation='relu', name='fc1')(x)
        x = Dense(4096, activation='relu', name='fc2')(x)
        x = Dense(classes, activation='softmax', name='predictions')(x)
    else:
        if pooling == 'avg':
            x = GlobalAveragePooling2D()(x)
        elif pooling == 'max':
            x = GlobalMaxPooling2D()(x)

    # tie the model to its true inputs
    # Ensure that the model takes into account
    # any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = get_source_inputs(input_tensor)
        # get_source_inputs returns the list of input tensors needed for the computation
        # if a tensor was supplied, two steps are needed:
        # first check whether it is a Keras tensor with is_keras_tensor,
        # then recover its source inputs with get_source_inputs(input_tensor)
    else:
        inputs = img_input

    # Create model.
    model = Model(inputs, x, name='vgg16')

    # load weights
    if weights == 'imagenet':
        if include_top:
            weights_path = get_file('vgg16_weights_tf_dim_ordering_tf_kernels.h5',
                                    WEIGHTS_PATH,
                                    cache_subdir='models')
        else:
            weights_path = get_file('vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5',
                                    WEIGHTS_PATH_NO_TOP,
                                    cache_subdir='models')
        model.load_weights(weights_path)

        if K.backend() == 'theano':
            layer_utils.convert_all_kernels_in_model(model)

        if K.image_data_format() == 'channels_first':
            if include_top:
                maxpool = model.get_layer(name='block5_pool')
                shape = maxpool.output_shape[1:]
                dense = model.get_layer(name='fc1')
                layer_utils.convert_dense_weights_data_format(dense, shape, 'channels_first')

            if K.backend() == 'tensorflow':
                warnings.warn('You are using the TensorFlow backend, yet you '
                              'are using the Theano '
                              'image data format convention '
                              '(`image_data_format="channels_first"`). '
                              'For best performance, set '
                              '`image_data_format="channels_last"` in '
                              'your Keras config '
                              'at ~/.keras/keras.json.')
    return model

if __name__ == '__main__':
    model = VGG16(include_top=True, weights='imagenet')

    img_path = 'elephant.jpg'
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    print('Input image shape:', x.shape)

    preds = model.predict(x)
    print('Predicted:', decode_predictions(preds))
    # decode_predictions outputs the top-5 (class id, class description, probability) tuples

Output:

Input image shape: (1, 224, 224, 3)
Predicted: [[('n02504458', 'African_elephant', 0.62728244), ('n02504013', 'Indian_elephant', 0.19092941), ('n01871265', 'tusker', 0.18166111), ('n02437312', 'Arabian_camel', 4.5080957e-05), ('n07802026', 'hay', 1.7709652e-05)]]
  • Download the model locally and modify the loading code
    Comment out the following two lines:
    WEIGHTS_PATH = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5'
    WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
    Modify the following two lines (a sketch of the result follows this list):
    weights_path = get_file('vgg16_weights_tf_dim_ordering_tf_kernels.h5',WEIGHTS_PATH,cache_subdir='models')
    weights_path = get_file('vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5',WEIGHTS_PATH_NO_TOP,cache_subdir='models')
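
A minimal sketch of the modified loading branch, assuming the two .h5 files were saved under a local Model directory (these paths are placeholders, not part of the original code):

# inside VGG16(), replace the get_file(...) download calls with local paths
if include_top:
    weights_path = r'..\Model\vgg16_weights_tf_dim_ordering_tf_kernels.h5'
else:
    weights_path = r'..\Model\vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
model.load_weights(weights_path)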
  • A few utilities worth noting (a standalone call to _obtain_input_shape is sketched after this list)
    from keras.applications.imagenet_utils import decode_predictions
    decode_predictions outputs the 5 highest-probability predictions as (class id, class description, probability) tuples: decode_predictions(y_pred)
    from keras.applications.imagenet_utils import preprocess_input
    Preprocessing: makes the image encoding follow the model's convention (e.g. RGB vs. BGR channel order): preprocess_input(x)
    from keras_applications.imagenet_utils import _obtain_input_shape
    Determines a proper input shape for the model.
    (1) decode_predictions is used on the final output and is quite handy: print('Predicted:', decode_predictions(preds));
    (2) preprocess_input changes the encoding: preprocess_input(x);
    (3) _obtain_input_shape
    plays a role similar to the transform step in caffe: at prediction time the input image has to be preprocessed to match this shape.
    input_shape = _obtain_input_shape(input_shape, default_size=224, min_size=48, data_format=K.image_data_format(), require_flatten=include_top)
    min_size=48: the minimum width/height the model accepts
    data_format=K.image_data_format(): the image data format in use
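
For instance, _obtain_input_shape can be called directly to see what shape it settles on. A small sketch (the printed shapes assume a channels_last configuration):

from keras import backend as K
from keras_applications.imagenet_utils import _obtain_input_shape

# with a classifier on top, the spatial size is fixed to the default
print(_obtain_input_shape(None, default_size=224, min_size=48,
                          data_format=K.image_data_format(),
                          require_flatten=True))    # (224, 224, 3) for channels_last
# without the top, the spatial size is left free
print(_obtain_input_shape(None, default_size=224, min_size=48,
                          data_format=K.image_data_format(),
                          require_flatten=False))   # (None, None, 3) for channels_last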
  • When include_top=True
    fc_model = VGG16(include_top=True)
    notop_model = VGG16(include_top=False)
    When fine-tuning with VGG16, notop_model is the model without the fully connected layers, on top of which you then add your own layers (a fine-tuning sketch follows this block).
    For the complete network, fc_model appends the following block to finish the architecture:
x = Flatten(name='flatten')(x)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(classes, activation='softmax', name='predictions')(x)

After the last pooling layer comes a Flatten layer to reshape the data, followed by two Dense layers and finally a Dense layer with a softmax.
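
A minimal fine-tuning sketch built on notop_model (the new layer sizes, the 10-class output and freezing the convolutional base are illustrative assumptions, not part of the original code):

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

notop_model = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in notop_model.layers:
    layer.trainable = False          # freeze the pretrained convolutional blocks

x = Flatten(name='flatten')(notop_model.output)   # new classifier head
x = Dense(256, activation='relu', name='fc1')(x)
x = Dense(10, activation='softmax', name='predictions')(x)

model = Model(notop_model.input, x, name='vgg16_finetune')
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# then train on your own data with model.fit(...)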

  • Converting weights between channels_last and channels_first formats
maxpool = model.get_layer(name='block5_pool')
# model.get_layer() fetches a layer object by name or index
shape = maxpool.output_shape[1:]
# the output shape of the block5_pool layer (without the batch axis)
dense = model.get_layer(name='fc1')
layer_utils.convert_dense_weights_data_format(dense, shape, 'channels_first')

convert_dense_weights_data_format: when porting a convnet's weights from one data format to the other, if the convnet contains a Flatten layer (applied to the last convolutional feature map) followed by a Dense layer, the weights of that Dense layer should be updated to reflect the new dimension ordering.

posted @ 2019-07-10 16:31 鹏懿如斯