
TensorFlow 2 Knowledge Summary --- 8. The Functional API

1. Summary

One-sentence summary:

The TensorFlow 2 functional API makes it very easy to build complex neural networks, such as models with multiple inputs or multiple outputs:
input = keras.Input(shape=(28, 28))
x = keras.layers.Flatten()(input)
x = keras.layers.Dense(64, activation='relu')(x)
x = keras.layers.Dropout(0.5)(x)
x = keras.layers.Dense(32, activation='relu')(x)
output = keras.layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=input, outputs=output)
model.summary()

2. The Functional API

Video location of this post in the corresponding course:


The TensorFlow 2 functional API makes it very easy to build complex neural networks, such as models with multiple inputs or multiple outputs.

In [1]:
import tensorflow as tf
from tensorflow import keras 
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
In [2]:
# Load the training and test datasets
# Path: C:\Users\Fan Renyi\.keras\datasets\fashion-mnist
(train_image, train_label), (test_image, test_label) = tf.keras.datasets.fashion_mnist.load_data()
In [3]:
# Normalize the image data to the [0, 1] range
train_image = train_image / 255
test_image = test_image / 255
In [8]:
input = keras.Input(shape=(28, 28))
x = keras.layers.Flatten()(input)
x = keras.layers.Dense(64, activation='relu')(x)
x = keras.layers.Dropout(0.5)(x)
x = keras.layers.Dense(32, activation='relu')(x)
output = keras.layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs=input, outputs=output)
model.summary()
Model: "model_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 28, 28)]          0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 784)               0         
_________________________________________________________________
dense_6 (Dense)              (None, 64)                50240     
_________________________________________________________________
dropout_2 (Dropout)          (None, 64)                0         
_________________________________________________________________
dense_7 (Dense)              (None, 32)                2080      
_________________________________________________________________
dense_8 (Dense)              (None, 10)                330       
=================================================================
Total params: 52,650
Trainable params: 52,650
Non-trainable params: 0
_________________________________________________________________
<tensorflow.python.keras.engine.training.Model object at 0x0000028AC3AA6B08>
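In the functional API, each layer instance is a callable that takes a tensor and returns a tensor, and keras.Model ties the input tensor to the output tensor. For a purely linear stack like this one, the Sequential API is equivalent; a sketch for comparison (not part of the original notebook):

# Equivalent Sequential version of the same network, for comparison only;
# the parameter counts match the summary above (50240 + 2080 + 330 = 52,650)
seq_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
seq_model.summary()

The functional form pays off once the graph is no longer a straight line, as in the multi-input example later in this post.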
In [9]:
print(model)
<tensorflow.python.keras.engine.training.Model object at 0x0000028AC3A7E888>
In [10]:
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])

history = model.fit(train_image, train_label, epochs=30,
                    validation_data=(test_image, test_label))
Epoch 1/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.9550 - acc: 0.6269 - val_loss: 0.7303 - val_acc: 0.7226
Epoch 2/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8362 - acc: 0.6787 - val_loss: 0.7595 - val_acc: 0.7187
Epoch 3/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8253 - acc: 0.6852 - val_loss: 0.6135 - val_acc: 0.7543
Epoch 4/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8054 - acc: 0.6969 - val_loss: 0.6019 - val_acc: 0.7872
Epoch 5/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8001 - acc: 0.7033 - val_loss: 0.6251 - val_acc: 0.7614
Epoch 6/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7916 - acc: 0.7082 - val_loss: 0.6266 - val_acc: 0.7663
Epoch 7/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7919 - acc: 0.7090 - val_loss: 0.6139 - val_acc: 0.7707
Epoch 8/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7976 - acc: 0.7038 - val_loss: 0.6012 - val_acc: 0.7802
Epoch 9/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8245 - acc: 0.6961 - val_loss: 0.5929 - val_acc: 0.7637
Epoch 10/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8157 - acc: 0.6959 - val_loss: 0.6063 - val_acc: 0.7431
Epoch 11/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8054 - acc: 0.6997 - val_loss: 0.5605 - val_acc: 0.8064
Epoch 12/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8023 - acc: 0.7043 - val_loss: 0.6163 - val_acc: 0.7697
Epoch 13/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8068 - acc: 0.7002 - val_loss: 0.6083 - val_acc: 0.7480
Epoch 14/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8110 - acc: 0.6982 - val_loss: 0.6531 - val_acc: 0.7404
Epoch 15/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8039 - acc: 0.7028 - val_loss: 0.6149 - val_acc: 0.7690
Epoch 16/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8020 - acc: 0.7039 - val_loss: 0.6487 - val_acc: 0.7590
Epoch 17/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8044 - acc: 0.7038 - val_loss: 0.6163 - val_acc: 0.7653
Epoch 18/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7928 - acc: 0.7079 - val_loss: 0.6226 - val_acc: 0.7731
Epoch 19/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8018 - acc: 0.7047 - val_loss: 0.7393 - val_acc: 0.7364
Epoch 20/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7838 - acc: 0.7116 - val_loss: 0.5742 - val_acc: 0.8015
Epoch 21/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7877 - acc: 0.7105 - val_loss: 0.6383 - val_acc: 0.7684
Epoch 22/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7844 - acc: 0.7122 - val_loss: 0.6586 - val_acc: 0.7664
Epoch 23/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8024 - acc: 0.7048 - val_loss: 0.6223 - val_acc: 0.7424
Epoch 24/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7857 - acc: 0.7121 - val_loss: 0.6373 - val_acc: 0.7774
Epoch 25/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8007 - acc: 0.7031 - val_loss: 0.6405 - val_acc: 0.7751
Epoch 26/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7946 - acc: 0.7076 - val_loss: 0.6030 - val_acc: 0.7904
Epoch 27/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7868 - acc: 0.7135 - val_loss: 0.6948 - val_acc: 0.7554
Epoch 28/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7982 - acc: 0.7106 - val_loss: 0.6872 - val_acc: 0.7361
Epoch 29/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.7915 - acc: 0.7095 - val_loss: 0.6463 - val_acc: 0.7810
Epoch 30/30
1875/1875 [==============================] - 2s 1ms/step - loss: 0.8175 - acc: 0.6941 - val_loss: 0.7145 - val_acc: 0.7408
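Training accuracy plateaus around 0.70 here; the relatively large Adam learning rate (0.01) combined with Dropout(0.5) is a plausible cause. A hypothetical variant, not run in the original post: recompile with Keras's default Adam learning rate of 0.001 before fitting again.

# Untested variant: Adam's default learning rate usually trains
# this small network more stably than 0.01
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])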
In [12]:
plt.rcParams["font.sans-serif"] = ["SimHei"]  # font setup kept from the original post (needed only for Chinese labels)
plt.rcParams["font.family"] = "sans-serif"
plt.plot(history.epoch, history.history.get('loss'), "r-", linewidth=2, label="train: loss")
plt.plot(history.epoch, history.history.get('val_loss'), "g-", linewidth=2, label="test: val_loss")
plt.legend(loc="upper right")
Out[12]:
<matplotlib.legend.Legend at 0x28ac5bbd588>
In [13]:
plt.plot(history.epoch, history.history.get('acc'), "r-", linewidth=2, label="train: acc")
plt.plot(history.epoch, history.history.get('val_acc'), "g-", linewidth=2, label="test: val_acc")
plt.legend(loc="upper right")
Out[13]:
<matplotlib.legend.Legend at 0x28ac63e6748>
Multi-input test

In [14]:
# Two input layers
input1 = keras.Input(shape=(28, 28))
input2 = keras.Input(shape=(28, 28))
x1 = keras.layers.Flatten()(input1)
x2 = keras.layers.Flatten()(input2)
# Concatenate the two flattened inputs
x = keras.layers.concatenate([x1, x2])

x = keras.layers.Dense(64, activation='relu')(x)
output = keras.layers.Dense(1, activation='sigmoid')(x)

model = keras.Model(inputs=[input1, input2], outputs=output)
model.summary()
Model: "model_3"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_4 (InputLayer)            [(None, 28, 28)]     0                                            
__________________________________________________________________________________________________
input_5 (InputLayer)            [(None, 28, 28)]     0                                            
__________________________________________________________________________________________________
flatten_3 (Flatten)             (None, 784)          0           input_4[0][0]                    
__________________________________________________________________________________________________
flatten_4 (Flatten)             (None, 784)          0           input_5[0][0]                    
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 1568)         0           flatten_3[0][0]                  
                                                                 flatten_4[0][0]                  
__________________________________________________________________________________________________
dense_9 (Dense)                 (None, 64)           100416      concatenate[0][0]                
__________________________________________________________________________________________________
dense_10 (Dense)                (None, 1)            65          dense_9[0][0]                    
==================================================================================================
Total params: 100,481
Trainable params: 100,481
Non-trainable params: 0
__________________________________________________________________________________________________
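The post stops at model.summary(); a hypothetical usage sketch (not from the original notebook) shows how such a two-input model would be trained: the inputs are passed as a list in the same order as inputs=[input1, input2]. The paired images and the "same class?" binary label below are made up purely for illustration, reusing the Fashion-MNIST arrays loaded earlier.

# Hypothetical toy paired dataset built from the images loaded above
pair_a = train_image[:1000]             # first input: 1,000 images
pair_b = train_image[1000:2000]         # second input: 1,000 other images
# Binary label: 1.0 if the two images in a pair share a class, else 0.0
same_class = (train_label[:1000] == train_label[1000:2000]).astype('float32')

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['acc'])
model.fit([pair_a, pair_b], same_class, epochs=2)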