Notes on optimizing a CNN text-classification model. Key points: adding convolutional and fully-connected (FC) layers can improve accuracy; inserting batch normalization (BN) before the FC layers speeds up convergence and sometimes improves accuracy; add dropout after the FC layers; increasing the input dimension of conv_1d (the embedding output_dim) can also improve accuracy, but at 256 the model runs out of memory (OOM).
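All of the models below consume padded integer sequences of length max_len. For context, a minimal preprocessing sketch using TFLearn's own utilities (the docs/labels/trainX/trainY names are placeholders of mine, not from the original code):

from tflearn.data_utils import pad_sequences, to_categorical

# docs: lists of token ids, labels: 0/1 class ids (hypothetical names)
trainX = pad_sequences(docs, maxlen=max_len, value=0.)  # -> shape (n_samples, max_len)
trainY = to_categorical(labels, nb_classes=2)           # one-hot, matches the 2-way softmax head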

    import tflearn
    from tflearn.layers.conv import conv_1d, max_pool_1d
    from tflearn.layers.core import fully_connected, dropout
    from tflearn.layers.normalization import batch_normalization

    network = tflearn.input_data(shape=[None, max_len], name='input')
    network = tflearn.embedding(network, input_dim=volcab_size, output_dim=32)

    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = max_pool_1d(network, 2)
    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = max_pool_1d(network, 2)
    #network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    #network = max_pool_1d(network, 2)

    # BN before the FC layers: faster convergence, sometimes better accuracy
    network = batch_normalization(network)

    #network = fully_connected(network, 512, activation='relu')
    #network = dropout(network, 0.5)
    network = fully_connected(network, 64, activation='relu')
    network = dropout(network, 0.5)  # dropout after the FC layer

    network = fully_connected(network, 2, activation='softmax')

After a single training epoch, accuracy is a bit over 98.5%.
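The block above stops at the softmax layer and omits the training head. A minimal sketch of the usual TFLearn wiring (mirroring the regression call in the next example; trainX/trainY as in the preprocessing sketch above, and the hyperparameters are assumptions, not from the post):

from tflearn.layers.estimator import regression

network = regression(network, optimizer='adam', learning_rate=0.001,
                     loss='categorical_crossentropy', name='target')
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit(trainX, trainY, n_epoch=1, shuffle=True, show_metric=True)  # one epoch, as above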

If we use the following multi-branch network instead:

# Examples of 1-D CNN networks are relatively scarce; this follows
# https://github.com/tflearn/tflearn/blob/master/examples/nlp/cnn_sentence_classification.py
# Building the convolutional network
import tensorflow as tf
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_1d, global_max_pool
from tflearn.layers.merge_ops import merge
from tflearn.layers.estimator import regression

network = input_data(shape=[None, 100], name='input')
network = tflearn.embedding(network, input_dim=10000, output_dim=128)
# Three parallel branches with filter widths 3, 4 and 5 (Kim-style sentence CNN)
branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
network = merge([branch1, branch2, branch3], mode='concat', axis=1)
network = tf.expand_dims(network, 2)  # dummy axis so the 4-D global_max_pool applies
network = global_max_pool(network)
network = dropout(network, 0.5)
network = fully_connected(network, 2, activation='softmax')
network = regression(network, optimizer='adam', learning_rate=0.001,
                     loss='categorical_crossentropy', name='target')
# Training
model = tflearn.DNN(network, tensorboard_verbose=0)

Accuracy is just over 95%.
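The merge / expand_dims / global_max_pool sequence is the least obvious part of that network. Here is a toy NumPy illustration of what it computes (shapes only, not from the original post): the three branches emit different numbers of time steps, so they are concatenated along the time axis and max-pooled over it, leaving one 128-dimensional vector per sample.

import numpy as np

batch, dim = 2, 128
b1 = np.random.randn(batch, 98, dim)  # width-3 filters over 100 tokens -> 98 steps
b2 = np.random.randn(batch, 97, dim)  # width-4 filters -> 97 steps
b3 = np.random.randn(batch, 96, dim)  # width-5 filters -> 96 steps
merged = np.concatenate([b1, b2, b3], axis=1)  # (2, 291, 128), like merge(..., axis=1)
pooled = merged.max(axis=1)                    # (2, 128), like global_max_pool
print(merged.shape, pooled.shape)

The tf.expand_dims(network, 2) in the real network only adds a dummy spatial axis so that tflearn's 4-D global_max_pool can be applied; it does not change the computation sketched here.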

Using a VGG-like model, after https://github.com/AhmetHamzaEmra/tflearn/blob/master/examples/images/VGG19.py:

    # VGG pattern: stacked width-3 convs, doubling the filter count per block
    network = tflearn.input_data(shape=[None, max_len], name='input')
    network = tflearn.embedding(network, input_dim=volcab_size, output_dim=64)
    network = conv_1d(network, 64, 3, activation='relu')
    network = conv_1d(network, 64, 3, activation='relu')
    network = max_pool_1d(network, 2, strides=2)
    network = conv_1d(network, 128, 3, activation='relu')
    network = conv_1d(network, 128, 3, activation='relu')
    network = max_pool_1d(network, 2, strides=2)
    network = conv_1d(network, 256, 3, activation='relu')
    network = conv_1d(network, 256, 3, activation='relu')
    network = conv_1d(network, 256, 3, activation='relu')
    network = max_pool_1d(network, 2, strides=2)
    network = batch_normalization(network)  # BN before the FC layers, as above
    network = fully_connected(network, 512, activation='relu')
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')

Accuracy is a bit over 98.5%, marginally higher than the first model, but training takes far too long.

The other variants tried are, in essence, just more convolutional or FC layers, e.g.:

...
network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
network = max_pool_1d(network, 2)
network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
network = max_pool_1d(network, 2)
network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
network = max_pool_1d(network, 2)
...
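On the OOM noted in the summary: when the width is pushed to 256, the first thing I would try (an assumption of mine, not something tested in the post) is a smaller batch size. TFLearn's default of 64 is set in regression(), and halving it roughly halves the activation memory per step.

# Hypothetical OOM mitigation: same graph, smaller batches.
network = regression(network, optimizer='adam', learning_rate=0.001,
                     batch_size=32, loss='categorical_crossentropy', name='target')
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit(trainX, trainY, n_epoch=1, show_metric=True)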

 
