Keras usage

About Keras "layers" (Layer)

All Keras layer objects have the following methods:

  • layer.get_weights(): returns the layer's weights as a list of numpy arrays

  • layer.set_weights(weights): loads weights into the layer from numpy arrays; the arrays must have the same shapes as those returned by layer.get_weights()

  • layer.get_config(): returns a dict holding the layer's configuration; a layer can also be reconstructed from this config, as in the sketch below:
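A minimal sketch of these three methods, assuming a standard Dense layer (the layer sizes here are illustrative):

from keras.layers import Dense

layer = Dense(32)
layer.build(input_shape=(None, 16))   # create the weights
weights = layer.get_weights()         # list of numpy arrays [kernel, bias]
config = layer.get_config()           # dict describing the layer

# Reconstruct an equivalent layer from its config, then copy the weights over
new_layer = Dense.from_config(config)
new_layer.build(input_shape=(None, 16))
new_layer.set_weights(weights)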

Input(shape=None, batch_shape=None, name=None, dtype=K.floatx(), sparse=False, tensor=None)

Input(): used to instantiate a Keras tensor

A Keras tensor is a tensor object from the underlying backend (Theano or TensorFlow), augmented with certain attributes that let us build a Keras model just by knowing the inputs and outputs of the model.

The added Keras attributes are: 1) _keras_shape: an integer shape tuple propagated via Keras-side shape inference; 2) _keras_history: the last layer applied to the tensor; the entire layer graph can be retrieved recursively from that layer.

# Arguments:

shape: a shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.

batch_shape: a shape tuple (integers), including the batch size. For instance, batch_shape=(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors.

name: an optional name string for the layer. It should be unique within a model (the same name cannot be reused twice). If not specified, it will be autogenerated.

dtype: the data type expected by the input

sparse: a boolean specifying whether the placeholder to be created is sparse

tensor: an optional existing tensor to wrap into the Input layer. If set, the layer will not create a placeholder tensor.

# Returns

A tensor

# Example

from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)
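For completeness, a hedged follow-up sketch (the optimizer, loss, and random data below are illustrative, not part of the original example):

import numpy as np

model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
data = np.random.random((10, 32))      # a batch of 10 32-dimensional vectors
labels = np.random.random((10, 16))
model.fit(data, labels, epochs=1, batch_size=10)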

 

Some commonly used building blocks in Keras:

import keras.layers as KL

2D convolution:

KL.Conv2D()
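Example (a hedged sketch; the filter count, kernel size, and layer name are illustrative):

x = KL.Conv2D(64, (3, 3), padding='same', name='conv1')(x)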

KL.Activation()

Definition : Activation(self, activation, **kwargs)

Example: x = KL.Activation('relu')(x)

KL.Add()

Example:

x = KL.Add()([shortcut, x])

Layer that adds a list of inputs.

It takes as input a list of tensors, all of the same shape, and returns a single tensor (also of the same shape).

Definition : KL.ZeroPadding2D(padding=(1, 1), data_format=None, **kwargs)
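Example (a hedged sketch; the padding width and the tensor name input_image are illustrative):

x = KL.ZeroPadding2D(padding=(3, 3))(input_image)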

class BatchNorm(KL.BatchNormalization):
    """Extends the Keras BatchNormalization class to allow a central place
    to make changes if needed.

    Batch normalization has a negative effect on training if batches are small
    so this layer is often frozen (via setting in Config class) and functions
    as linear layer.
    """
    def call(self, inputs, training=None):
        """
        Note about training values:
            None: Train BN layers. This is the normal mode
            False: Freeze BN layers. Good when batch size is small
            True: (don't use). Set layer in training mode even when making inferences
        """
        return super(self.__class__, self).call(inputs, training=training)
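A hedged usage sketch (the layer name bn_conv1 is illustrative; in code like the above, the training flag typically comes from a Config setting, as the docstring notes):

x = BatchNorm(name='bn_conv1')(x, training=False)   # freeze BN statistics when batches are small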

Notes on the code:

Regarding self.__class__ in super(self.__class__, self):

self, class, superclass

self: the caller of the current method (the instance)
class: the class object of the method's caller (i.e. self.__class__)
superclass: the parent class object of the method's caller
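A minimal sketch of what super(self.__class__, self) resolves to (the class names here are illustrative):

class Base(object):
    def call(self, x):
        return x + 1

class Child(Base):
    def call(self, x):
        # self.__class__ is Child, so this dispatches to the parent's call
        return super(self.__class__, self).call(x)

print(Child().call(1))   # prints 2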


 
