3D Convolutions: Code Implementation

3D Convolutions: Understanding + Use Case

This article, based on a Kaggle image-CNN kernel, covers 3D convolutions and their implementation on the 3D MNIST dataset.

 

What is a convolution?

 

Mathematically, a convolution is an integral that expresses the amount of overlap of one function g as it is shifted over another function f.

 

Intuitively, a convolution acts like a blender that mixes one function with another, reducing the data space while preserving the information.

 

In the context of neural networks and deep learning:

 

Convolutions are filters (matrices/vectors) with trainable parameters that extract low-dimensional features from the input data.

 

They preserve the spatial or positional relationships between input data points.

 

Convolutional neural networks exploit spatially local correlation by enforcing a local connectivity pattern between neurons of adjacent layers.

 

Intuitively, a convolution is the step of applying a sliding window (a filter with trainable weights) over the input and producing a weighted sum (of the weights and the input) at each position. These weighted-sum outputs form the feature space that is fed to the next layer.
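This sliding weighted sum can be sketched in a few lines of numpy (the kernel values below are illustrative, not trained weights):

```python
import numpy as np

# A convolution as a sliding weighted sum: the kernel (the trainable
# weights in a CNN) slides over the input, producing one weighted sum
# per position.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([0.25, 0.5, 0.25])  # illustrative smoothing weights

# 'valid' mode keeps only positions where the kernel fully overlaps the
# input. np.convolve flips the kernel (true convolution); CNN libraries
# compute cross-correlation (no flip) -- with a symmetric kernel like
# this one the two coincide.
out = np.convolve(signal, kernel, mode='valid')
print(out)  # [2. 3. 4.]
```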

 

For example, in a face-recognition problem, the first few convolutional layers learn the presence of key points in the input image, the next convolutional layers learn edges and shapes, and the final convolutional layers learn faces. In this example, the input space is first reduced to a lower-dimensional space (representing information about points/pixels), then to a space containing edges/shapes, and finally to one that classifies faces in the image. Convolutions can be applied in N dimensions.

Types of convolutions:

 

Let's discuss the different types of convolutions.

 

1D Convolutions

 

The simplest convolution is the 1D convolution, typically used on sequence datasets (though it can serve other use cases as well). It extracts local 1D subsequences from the input sequence and identifies local patterns within the convolution window: a 1D convolution filter slides along the sequence to obtain new features. Another common use of 1D convolutions is in the NLP domain, where every sentence is represented as a sequence of words.
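As a sketch of what a Conv1D layer computes over a (timesteps, channels) sequence, assuming random stand-in weights for the trained filter bank:

```python
import numpy as np

# 1D convolution over a sequence with several filters, computed as CNN
# libraries do (cross-correlation, no kernel flip). Shapes follow the
# Keras convention (timesteps, channels); the weights are random
# stand-ins for trained parameters.
rng = np.random.default_rng(0)
seq = rng.standard_normal((10, 4))   # 10 timesteps, 4 input channels
w = rng.standard_normal((3, 4, 8))   # kernel size 3, 4 in, 8 filters

# Each window of 3 consecutive timesteps is reduced to one weighted sum
# per filter; 10 timesteps and kernel size 3 give 10 - 3 + 1 = 8 positions.
windows = np.lib.stride_tricks.sliding_window_view(seq, 3, axis=0)  # (8, 4, 3)
out = np.einsum('tcw,wcf->tf', windows, w)
print(out.shape)  # (8, 8): 8 positions x 8 filters
```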

2D Convolutions

On image datasets, 2D convolution filters are the mainstay of CNN architectures. The main idea of a 2D convolution is that the filter moves in two directions (x, y) to compute low-dimensional features from the image data. The output shape is also a 2D matrix.
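A minimal numpy sketch of this two-directional sliding, using a hand-made vertical-edge kernel as a stand-in for a trained filter:

```python
import numpy as np

# 2D convolution: the filter slides in both x and y over the image and
# produces a 2D feature map. This is the cross-correlation form that
# CNN libraries implement (no kernel flip).
def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # vertical-edge detector
fmap = conv2d(image, edge_kernel)
print(fmap.shape)  # (3, 3): the output is again a 2D matrix
```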

 

3D Convolutions

A 3D convolution applies a 3D filter to the dataset; the filter moves in three directions (x, y, z) to compute low-level feature representations. The output shape is a 3D volume, such as a cube or cuboid. 3D convolutions are helpful for object detection in videos, 3D medical images, and so on. They are not limited to 3D inputs; they can also be applied to 2D inputs such as images.

 

Later in this article, we implement a 3D CNN on the 3D MNIST dataset.

 

In addition, there are other types of convolutions:

Dilated (Atrous) Convolutions

Dilated convolutions define a spacing between the values in the kernel. Because of this spacing, the kernel's receptive field grows: for example, a 3x3 kernel with a dilation rate of 2 has the same field of view as a 5x5 kernel. The computational cost stays the same, but different features are generated.
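The "same field of view as a 5x5 kernel" claim follows from the formula effective size = d*(k-1)+1 for kernel size k and dilation rate d; a quick check:

```python
# Effective receptive field of a dilated kernel: a k x k kernel with
# dilation rate d covers the same span as an ordinary kernel of size
# d*(k-1)+1, at no extra parameter cost.
def effective_kernel_size(k, dilation):
    return dilation * (k - 1) + 1

print(effective_kernel_size(3, 1))  # 3: ordinary 3x3 kernel
print(effective_kernel_size(3, 2))  # 5: matches the 5x5 view in the text
```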

Now let's create a 3D convolutional neural network architecture on the 3D MNIST dataset. First, import the key libraries.

 

 

from keras.layers import Conv3D, MaxPool3D, Flatten, Dense
from keras.layers import Dropout, Input, BatchNormalization
from sklearn.metrics import confusion_matrix, accuracy_score
from plotly.offline import iplot, init_notebook_mode
from keras.losses import categorical_crossentropy
from keras.optimizers import Adadelta
import plotly.graph_objs as go
from matplotlib.pyplot import cm
from keras.models import Model
import numpy as np
import keras
import h5py

 

init_notebook_mode(connected=True)

%matplotlib inline

 

Using TensorFlow backend.

The 3D MNIST data is provided in .h5 format; load the full dataset into training and test sets.

with h5py.File('../input/full_dataset_vectors.h5', 'r') as dataset:
    x_train = dataset["X_train"][:]
    x_test = dataset["X_test"][:]
    y_train = dataset["y_train"][:]
    y_test = dataset["y_test"][:]

 

Inspect the dataset dimensions:

print("x_train shape: ", x_train.shape)
print("y_train shape: ", y_train.shape)
print("x_test shape:  ", x_test.shape)
print("y_test shape:  ", y_test.shape)

x_train shape:  (10000, 4096)

y_train shape:  (10000,)

x_test shape:   (2000, 4096)

y_test shape:   (2000,)

 

This dataset is flattened 1D data; the original x, y, z point coordinates are shared in a separate data file. A digit can be plotted in three-dimensional space and rotated to inspect it.
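One way to produce such a 3D plot from the flattened vectors (rather than the separate coordinate file) is to reshape a 4096-length sample into a 16x16x16 voxel grid and scatter the occupied voxels; a sketch with a synthetic stand-in sample (the plotly call is shown as a comment, since it only renders in a notebook):

```python
import numpy as np

# `sample` is a synthetic stand-in for one row of x_train (4096 values).
rng = np.random.default_rng(42)
sample = (rng.random(4096) > 0.9).astype(float)

# Reshape into a 16x16x16 voxel grid and collect the coordinates of the
# occupied voxels for a 3D scatter plot.
voxels = sample.reshape(16, 16, 16)
xs, ys, zs = np.nonzero(voxels > 0.5)

# With the plotly imports above, something like:
# trace = go.Scatter3d(x=xs, y=ys, z=zs, mode='markers')
# iplot([trace])
print(len(xs))  # one scatter point per occupied voxel
```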

 

 

Now, let's implement a 3D convolutional neural network on this dataset. To use 2D convolutions, we would first convert each image into a 3D shape: width, height, channels, where the channels are the red, green, and blue slices. In the same way, to use 3D convolutions we convert the input dataset into a 4D shape: length, width, height, channels (r/g/b).

## introduce the channel dimension in the input dataset
xtrain = np.ndarray((x_train.shape[0], 4096, 3))
xtest = np.ndarray((x_test.shape[0], 4096, 3))

## iterate over train and test, add the rgb dimension
def add_rgb_dimension(array):
    scaler_map = cm.ScalarMappable(cmap="Oranges")
    array = scaler_map.to_rgba(array)[:, : -1]
    return array

for i in range(x_train.shape[0]):
    xtrain[i] = add_rgb_dimension(x_train[i])
for i in range(x_test.shape[0]):
    xtest[i] = add_rgb_dimension(x_test[i])

## convert to 1 + 4D space (1st argument represents the number of rows in the dataset)
xtrain = xtrain.reshape(x_train.shape[0], 16, 16, 16, 3)
xtest = xtest.reshape(x_test.shape[0], 16, 16, 16, 3)

## convert the target variable into one-hot
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

y_train.shape

(10000, 10)

Let's create the model architecture. The architecture is described below:

 

Input and Output layers:

 

One input layer with dimensions 16, 16, 16, 3

One output layer with dimension 10

Convolutions:

 

Apply 4 convolutional layers with an increasing number of filters (8, 16, 32, 64) and a fixed kernel size of (3, 3, 3)

Apply 2 max-pooling layers, one after the 2nd convolutional layer and one after the 4th convolutional layer

MLP architecture:

 

Batch normalization on the convolution outputs

Two dense layers, each followed by dropout to avoid overfitting

## input layer

input_layer = Input((16, 16, 16, 3))

 

## convolutional layers

conv_layer1 = Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu')(input_layer)

conv_layer2 = Conv3D(filters=16, kernel_size=(3, 3, 3), activation='relu')(conv_layer1)

 

## add max pooling to obtain the most informative features

pooling_layer1 = MaxPool3D(pool_size=(2, 2, 2))(conv_layer2)

 

conv_layer3 = Conv3D(filters=32, kernel_size=(3, 3, 3), activation='relu')(pooling_layer1)

conv_layer4 = Conv3D(filters=64, kernel_size=(3, 3, 3), activation='relu')(conv_layer3)

pooling_layer2 = MaxPool3D(pool_size=(2, 2, 2))(conv_layer4)

 

## perform batch normalization on the convolution outputs before feeding it to MLP architecture

pooling_layer2 = BatchNormalization()(pooling_layer2)

flatten_layer = Flatten()(pooling_layer2)

 

## create an MLP architecture with dense layers : 2048 -> 512 -> 10

## add dropouts to avoid overfitting / perform regularization

dense_layer1 = Dense(units=2048, activation='relu')(flatten_layer)

dense_layer1 = Dropout(0.4)(dense_layer1)

dense_layer2 = Dense(units=512, activation='relu')(dense_layer1)

dense_layer2 = Dropout(0.4)(dense_layer2)

output_layer = Dense(units=10, activation='softmax')(dense_layer2)

 

## define the model with input layer and output layer

model = Model(inputs=input_layer, outputs=output_layer)
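As a sanity check on this architecture, the spatial size along each axis shrinks as n - k + 1 through each unpadded convolution and n // p through each pooling layer, so the flatten layer receives 1 x 1 x 1 x 64 = 64 units; a quick trace:

```python
# Output size of a 'valid' (unpadded) convolution is n - k + 1;
# max pooling with pool size p (stride p) gives n // p.
def conv_out(n, k):
    return n - k + 1

def pool_out(n, p):
    return n // p

# Trace one spatial axis of the 16x16x16 input through the model above:
n = 16
n = conv_out(n, 3)   # 14 after conv_layer1
n = conv_out(n, 3)   # 12 after conv_layer2
n = pool_out(n, 2)   # 6  after pooling_layer1
n = conv_out(n, 3)   # 4  after conv_layer3
n = conv_out(n, 3)   # 2  after conv_layer4
n = pool_out(n, 2)   # 1  after pooling_layer2

flat_units = n * n * n * 64  # 64 filters in the last conv layer
print(flat_units)  # 64
```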

Compile the model and start training.

 

model.compile(loss=categorical_crossentropy, optimizer=Adadelta(lr=0.1), metrics=['acc'])

model.fit(x=xtrain, y=y_train, batch_size=128, epochs=50, validation_split=0.2)

Train on 8000 samples, validate on 2000 samples

Epoch 1/50

8000/8000 [==============================] - 8s 1ms/step - loss: 2.1643 - acc: 0.2400 - val_loss: 4.1364 - val_acc: 0.1595

Epoch 2/50

8000/8000 [==============================] - 3s 389us/step - loss: 1.7002 - acc: 0.4255 - val_loss: 2.6611 - val_acc: 0.2830

Epoch 3/50

8000/8000 [==============================] - 3s 389us/step - loss: 1.3900 - acc: 0.5319 - val_loss: 1.7843 - val_acc: 0.4425

Epoch 4/50

8000/8000 [==============================] - 3s 390us/step - loss: 1.2224 - acc: 0.5872 - val_loss: 2.4387 - val_acc: 0.3545

Epoch 5/50

8000/8000 [==============================] - 3s 393us/step - loss: 1.1250 - acc: 0.6149 - val_loss: 1.6011 - val_acc: 0.4820

Epoch 6/50

8000/8000 [==============================] - 3s 386us/step - loss: 1.0584 - acc: 0.6379 - val_loss: 1.9631 - val_acc: 0.3940

Epoch 7/50

8000/8000 [==============================] - 3s 385us/step - loss: 1.0012 - acc: 0.6509 - val_loss: 2.7977 - val_acc: 0.3435

Epoch 8/50

8000/8000 [==============================] - 3s 385us/step - loss: 0.9556 - acc: 0.6706 - val_loss: 1.3028 - val_acc: 0.5515

Epoch 9/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.9101 - acc: 0.6893 - val_loss: 1.3699 - val_acc: 0.5525

Epoch 10/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.8759 - acc: 0.7000 - val_loss: 1.5005 - val_acc: 0.5080

Epoch 11/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.8387 - acc: 0.7126 - val_loss: 1.4767 - val_acc: 0.5215

Epoch 12/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.8098 - acc: 0.7246 - val_loss: 1.6518 - val_acc: 0.5250

Epoch 13/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.7806 - acc: 0.7324 - val_loss: 1.2170 - val_acc: 0.5900

Epoch 14/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.7584 - acc: 0.7442 - val_loss: 1.3042 - val_acc: 0.5840

Epoch 15/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.7239 - acc: 0.7542 - val_loss: 1.0767 - val_acc: 0.6480

Epoch 16/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.6997 - acc: 0.7602 - val_loss: 1.1681 - val_acc: 0.6200

Epoch 17/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.6756 - acc: 0.7702 - val_loss: 1.1535 - val_acc: 0.6295

Epoch 18/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.6450 - acc: 0.7759 - val_loss: 1.3781 - val_acc: 0.5975

Epoch 19/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.6229 - acc: 0.7927 - val_loss: 1.2891 - val_acc: 0.6145

Epoch 20/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.6027 - acc: 0.7996 - val_loss: 1.2839 - val_acc: 0.6060

Epoch 21/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.5727 - acc: 0.8088 - val_loss: 1.7544 - val_acc: 0.5350

Epoch 22/50

8000/8000 [==============================] - 3s 387us/step - loss: 0.5555 - acc: 0.8151 - val_loss: 1.3720 - val_acc: 0.5965

Epoch 23/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.5308 - acc: 0.8246 - val_loss: 1.2582 - val_acc: 0.6400

Epoch 24/50

8000/8000 [==============================] - 3s 394us/step - loss: 0.5077 - acc: 0.8286 - val_loss: 1.3886 - val_acc: 0.6085

Epoch 25/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.4869 - acc: 0.8400 - val_loss: 1.2946 - val_acc: 0.6315

Epoch 26/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.4634 - acc: 0.8512 - val_loss: 1.3686 - val_acc: 0.6220

Epoch 27/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.4487 - acc: 0.8529 - val_loss: 1.8458 - val_acc: 0.5635

Epoch 28/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.4297 - acc: 0.8616 - val_loss: 1.7958 - val_acc: 0.5485

Epoch 29/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.4067 - acc: 0.8669 - val_loss: 1.2551 - val_acc: 0.6475

Epoch 30/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.3832 - acc: 0.8762 - val_loss: 1.4216 - val_acc: 0.6190

Epoch 31/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.3730 - acc: 0.8790 - val_loss: 1.3635 - val_acc: 0.6335

Epoch 32/50

8000/8000 [==============================] - 3s 388us/step - loss: 0.3535 - acc: 0.8840 - val_loss: 1.6396 - val_acc: 0.6040

Epoch 33/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.3298 - acc: 0.8970 - val_loss: 1.5481 - val_acc: 0.6355

Epoch 34/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.3281 - acc: 0.8912 - val_loss: 1.7711 - val_acc: 0.5945

Epoch 35/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.3013 - acc: 0.9031 - val_loss: 1.7350 - val_acc: 0.5885

Epoch 36/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.2862 - acc: 0.9096 - val_loss: 2.2285 - val_acc: 0.5195

Epoch 37/50

8000/8000 [==============================] - 3s 392us/step - loss: 0.2735 - acc: 0.9150 - val_loss: 1.8348 - val_acc: 0.5965

Epoch 38/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.2565 - acc: 0.9201 - val_loss: 1.5115 - val_acc: 0.6410

Epoch 39/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.2498 - acc: 0.9205 - val_loss: 1.6900 - val_acc: 0.6300

Epoch 40/50

8000/8000 [==============================] - 3s 387us/step - loss: 0.2228 - acc: 0.9335 - val_loss: 1.6331 - val_acc: 0.6475

Epoch 41/50

8000/8000 [==============================] - 3s 387us/step - loss: 0.2137 - acc: 0.9320 - val_loss: 1.6562 - val_acc: 0.6305

Epoch 42/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.2053 - acc: 0.9399 - val_loss: 1.7376 - val_acc: 0.6190

Epoch 43/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1885 - acc: 0.9436 - val_loss: 1.8600 - val_acc: 0.6155

Epoch 44/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.1756 - acc: 0.9481 - val_loss: 1.9500 - val_acc: 0.6335

Epoch 45/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1688 - acc: 0.9496 - val_loss: 2.2368 - val_acc: 0.5805

Epoch 46/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.1582 - acc: 0.9540 - val_loss: 2.0403 - val_acc: 0.6175

Epoch 47/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1462 - acc: 0.9603 - val_loss: 1.8678 - val_acc: 0.6270

Epoch 48/50

8000/8000 [==============================] - 3s 390us/step - loss: 0.1376 - acc: 0.9624 - val_loss: 2.4479 - val_acc: 0.5640

Epoch 49/50

8000/8000 [==============================] - 3s 391us/step - loss: 0.1304 - acc: 0.9641 - val_loss: 2.5482 - val_acc: 0.5750

Epoch 50/50

8000/8000 [==============================] - 3s 389us/step - loss: 0.1260 - acc: 0.9634 - val_loss: 2.0320 - val_acc: 0.6220

<keras.callbacks.History at 0x7fd2bcb420b8>

During training, the validation accuracy fluctuates, indicating that the network can be improved further. Let's predict on the test set and measure the current model's accuracy.

pred = model.predict(xtest)

pred = np.argmax(pred, axis=1)

pred

array([7, 6, 1, ..., 3, 4, 4])
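With the accuracy_score imported at the top, the test-set accuracy can be measured by converting the one-hot y_test back to class indices; a self-contained sketch with small synthetic stand-ins for pred and y_test (the real notebook would pass the arrays from above):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# pred would come from np.argmax(model.predict(xtest), axis=1);
# y_test was one-hot encoded earlier, so convert it back with argmax.
# Synthetic stand-ins below illustrate the computation.
pred = np.array([7, 6, 1, 3, 4, 4])
y_test_onehot = np.eye(10)[[7, 6, 2, 3, 4, 9]]

y_true = np.argmax(y_test_onehot, axis=1)
acc = accuracy_score(y_true, pred)  # fraction of correct predictions
print(acc)  # 4 of 6 correct -> 0.666...
```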

The current model is not very accurate, but it can be improved further through architectural changes and hyperparameter tuning.

 

 

Reference:

https://www.kaggle.com/shivamb/3d-convolutions-understanding-use-case?scriptVersionId=9626233

posted @ 2021-07-14 06:14  吴建明wujianming