Caffe Layers

A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited receptive field,[1] which makes it perform particularly well on large-scale image processing.

The deep neural network (DNN) model is the basic deep learning architecture.

RNN is a collective term for two kinds of artificial neural networks: the recurrent neural network, which recurses over time, and the recursive neural network, which recurses over structure. In a recurrent neural network the connections between neurons form a directed graph along the temporal sequence, while a recursive neural network applies the same network structure recursively to build a more complex deep network. "RNN" usually refers to the recurrent neural network. A plain recurrent network suffers from weights that explode or vanish exponentially as the recursion deepens (the vanishing gradient problem), so it has difficulty capturing long-range temporal dependencies; combining it with LSTM units solves this problem well.

# Each layer's bottom blob is the top blob produced by an earlier layer
name: "LeNet"
# Data layer (TRAIN phase)
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
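    # 0.00390625 = 1/256, scales raw pixel values into [0, 1)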
    scale: 0.00390625
  }
  data_param {
    source: "mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
# Data layer (TEST phase)
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
# Convolution layer
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
# Pooling layer
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
# Convolution layer
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
# Pooling layer
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
# Fully connected (inner product) layer
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
# ReLU layer
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
# Fully connected (inner product) layer
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
# Accuracy layer (TEST phase only)
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
# Loss layer
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
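
To train this network, Caffe uses a separate solver prototxt that points at the net definition above and sets the optimization hyperparameters. Below is a minimal sketch, assuming the net is saved as lenet_train_test.prototxt; the hyperparameter values mirror the standard Caffe MNIST example and are only illustrative.

# lenet_solver.prototxt (sketch)
net: "lenet_train_test.prototxt"
test_iter: 100        # 100 test batches x batch_size 100 = the full 10,000-image test set
test_interval: 500    # run a test pass every 500 training iterations
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
lr_policy: "inv"      # lr = base_lr * (1 + gamma * iter) ^ (-power)
gamma: 0.0001
power: 0.75
display: 100
max_iter: 10000
snapshot: 5000
snapshot_prefix: "lenet"
solver_mode: GPU      # or CPU

Training is then launched with the command-line tool, e.g. caffe train --solver=lenet_solver.prototxt.
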
Data Layers
  • Image Data - read raw images.
  • Database - read data from LEVELDB or LMDB.
  • HDF5 Input - read HDF5 data, allows data of arbitrary dimensions (see the sketch after this list).
  • HDF5 Output - write data as HDF5.
  • Input - typically used for networks that are being deployed.
  • Window Data - read window data file.
  • Memory Data - read data directly from memory.
  • Dummy Data - for static data and debugging.
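
As a sketch, an HDF5 input layer might be declared as below; it reads a text file that lists the HDF5 files to load. The file name train_h5_list.txt and the blob names are assumptions for illustration.

# Hypothetical HDF5 data layer
layer {
  name: "hdf5_train"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "train_h5_list.txt"
    batch_size: 64
  }
}
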
Vision Layers
  • Convolution Layer - convolves the input image with a set of learnable filters, each producing one feature map in the output image.
  • Pooling Layer - max, average, or stochastic pooling.
  • Spatial Pyramid Pooling (SPP)
  • Crop - perform cropping transformation.
  • Deconvolution Layer - transposed convolution (see the sketch after this list).
  • Im2Col - relic helper layer that is not used much anymore.
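
A deconvolution layer reuses convolution_param; a learned stride-2 upsampling might look like the sketch below (layer and blob names, filter count, and kernel settings are illustrative).

# Hypothetical transposed-convolution (upsampling) layer
layer {
  name: "deconv1"
  type: "Deconvolution"
  bottom: "pool2"
  top: "deconv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 4
    stride: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
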
Recurrent Layers
  • Recurrent
  • RNN
  • Long Short-Term Memory (LSTM) - see the sketch after this list.
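
An LSTM layer is declared with recurrent_param, as sketched below; it takes an input sequence blob plus a sequence-continuation indicator blob. The blob names and num_output are assumptions for illustration.

# Hypothetical LSTM layer
layer {
  name: "lstm1"
  type: "LSTM"
  bottom: "data"    # input sequence, shaped T x N x ...
  bottom: "cont"    # 0 marks the start of a new sequence, 1 marks continuation
  top: "lstm1"
  recurrent_param {
    num_output: 256
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
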
Common Layers
  • Inner Product - fully connected layer.
  • Dropout - see the sketch after this list.
  • Embed - for learning embeddings of one-hot encoded vectors (takes indices as input).
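
Dropout, for instance, is usually applied in place after a fully connected layer; a sketch follows (the 0.5 ratio and blob names are illustrative).

# Hypothetical dropout layer applied in place to ip1
layer {
  name: "drop1"
  type: "Dropout"
  bottom: "ip1"
  top: "ip1"
  dropout_param {
    dropout_ratio: 0.5
  }
}
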
Normalization Layers
  • Local Response Normalization (LRN) - performs a kind of “lateral inhibition” by normalizing over local input regions.
  • Mean Variance Normalization (MVN) - performs contrast normalization / instance normalization.
  • Batch Normalization - performs normalization over mini-batches (see the sketch after this list).
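
In Caffe, BatchNorm only normalizes; the learned scale and shift come from a following Scale layer, as sketched below (layer and blob names are illustrative).

# Hypothetical batch normalization followed by a learned scale/shift
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param {
    use_global_stats: false    # use mini-batch statistics during training
  }
}
layer {
  name: "scale1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param {
    bias_term: true    # learn both scale and shift
  }
}
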
Activation / Neuron Layers
  • ReLU / Rectified-Linear and Leaky-ReLU - ReLU and Leaky-ReLU rectification.
  • PReLU - parametric ReLU.
  • ELU - exponential linear rectification.
  • Sigmoid
  • TanH
  • Absolute Value
  • Power - f(x) = (shift + scale * x) ^ power (see the sketch after this list).
  • Exp - f(x) = base ^ (shift + scale * x).
  • Log - f(x) = log(x).
  • BNLL - f(x) = log(1 + exp(x)).
  • Threshold - performs step function at user defined threshold.
  • Bias - adds a bias to a blob that can either be learned or fixed.
  • Scale - scales a blob by an amount that can either be learned or fixed.
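
As a sketch of how a neuron layer's parameters map onto its formula, here is a Power layer computing f(x) = (shift + scale * x) ^ power; the layer name, bottom blob, and values are illustrative.

# Hypothetical power layer: squares its input
layer {
  name: "power1"
  type: "Power"
  bottom: "ip1"
  top: "power1"
  power_param {
    power: 2
    scale: 1
    shift: 0
  }
}
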
Utility Layers
  • Flatten
  • Reshape
  • Batch Reindex
  • Split
  • Concat
  • Slicing
  • Eltwise - element-wise operations such as product or sum between two blobs (see the sketch after this list).
  • Filter / Mask - mask or select output using last blob.
  • Parameter - enable parameters to be shared between layers.
  • Reduction - reduce input blob to scalar blob using operations such as sum or mean.
  • Silence - prevent top-level blobs from being printed during training.
  • ArgMax
  • Softmax
  • Python - allows custom Python layers.
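
For example, an element-wise sum of two blobs of identical shape might be declared as below; the blob names feat_a and feat_b are assumptions for illustration.

# Hypothetical element-wise sum of two equally shaped blobs
layer {
  name: "fuse"
  type: "Eltwise"
  bottom: "feat_a"
  bottom: "feat_b"
  top: "fuse"
  eltwise_param {
    operation: SUM    # PROD and MAX are also available
  }
}
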
Loss Layers
  • Multinomial Logistic Loss
  • Infogain Loss - a generalization of MultinomialLogisticLossLayer.
  • Softmax with Loss - computes the multinomial logistic loss of the softmax of its inputs. It’s conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient.
  • Sum-of-Squares / Euclidean - computes the sum of squares of differences of its two inputs, $\frac{1}{2N}\sum_{i=1}^{N}\lVert x^1_i - x^2_i \rVert_2^2$ (see the sketch after this list).
  • Hinge / Margin - The hinge loss layer computes a one-vs-all hinge (L1) or squared hinge loss (L2).
  • Sigmoid Cross-Entropy Loss - computes the cross-entropy (logistic) loss, often used for predicting targets interpreted as probabilities.
  • Accuracy / Top-k layer - scores the output as an accuracy with respect to target – it is not actually a loss and has no backward step.
  • Contrastive Loss
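
For example, the Euclidean loss from the list above takes a prediction blob and a target blob of the same shape (the blob names are illustrative).

# Hypothetical Euclidean (sum-of-squares) loss
layer {
  name: "l2_loss"
  type: "EuclideanLoss"
  bottom: "pred"
  bottom: "target"
  top: "l2_loss"
}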