TensorFlow: some useful blog links and installing the GPU version of TensorFlow

padding: the difference between SAME and VALID  http://blog.csdn.net/mao_xiao_feng/article/details/53444333
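
A quick way to see the difference (a small sketch, with a made-up 5x5 input, a 3x3 kernel, and stride 2):

import tensorflow as tf

x = tf.ones([1, 5, 5, 1])        # [batch, height, width, channels]
w = tf.ones([3, 3, 1, 1])        # 3x3 kernel, 1 in-channel, 1 out-channel

same = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='SAME')    # pads the input: output is ceil(5/2) = 3x3
valid = tf.nn.conv2d(x, w, strides=[1, 2, 2, 1], padding='VALID')  # no padding: output is ceil((5-3+1)/2) = 2x2

print(same.get_shape())   # (1, 3, 3, 1)
print(valid.get_shape())  # (1, 2, 2, 1)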

Various algorithms implemented in TensorFlow: http://www.cnblogs.com/zhizhan/p/5971423.html

 

In a convolutional neural network, w*x produces a feature map, whereas the bias is a single scalar per feature map: each output channel has exactly one bias value, and that value is added (broadcast) to every element of the corresponding feature map.
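
A minimal sketch of the shapes involved (the sizes here are made up for illustration):

import tensorflow as tf

x = tf.ones([1, 28, 28, 3])                                       # [batch, height, width, in_channels]
w = tf.Variable(tf.truncated_normal([5, 5, 3, 16], stddev=0.1))   # 16 output feature maps
b = tf.Variable(tf.zeros([16]))                                   # exactly one bias per feature map

conv = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')   # [1, 28, 28, 16]
out = tf.nn.bias_add(conv, b)                                     # each bias is broadcast over its whole 28x28 map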

 

Some TensorFlow video tutorials and examples: https://morvanzhou.github.io/tutorials/machine-learning/tensorflow/

 

 

CUDA 7.5

Install the GPU version of TensorFlow: sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl

 

For GPU builds against CUDA 8.0, there are plenty of wheels and guides available online.
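
After installing, a quick way to confirm that TensorFlow actually sees the GPU is to run a small op with device placement logging turned on (a minimal check following the standard "Using GPUs" pattern; the matrices are arbitrary):

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name='b')
c = tf.matmul(a, b)

# log_device_placement prints which device (cpu or /gpu:0) each op is assigned to
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))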

 

# Option 1: let the optimizer build the update op directly
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
or
# Option 2: build the train op through slim
optimizer = tf.train.AdamOptimizer(self.lr)
self.train_op = slim.learning.create_train_op(self.loss, optimizer)
and finally run it in the session:
sess.run(model.train_op, feed_dict_t)
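
Putting the slim variant together end-to-end, a minimal training loop might look like the sketch below (the toy graph, loss, and feed values are assumptions for illustration, not from the original code):

import numpy as np
import tensorflow as tf
import tensorflow.contrib.slim as slim

# toy regression graph
x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
pred = slim.fully_connected(x, 1, activation_fn=None)
loss = tf.reduce_mean(tf.square(pred - y))

optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
train_op = slim.learning.create_train_op(loss, optimizer)  # running train_op also returns the loss

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        feed = {x: np.random.rand(32, 10), y: np.random.rand(32, 1)}
        loss_val = sess.run(train_op, feed_dict=feed)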

CNN
# final classification layer on top of the sentence vector
self.fc = slim.fully_connected(sen_vec, num_class, activation_fn=None,
                               weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                               weights_regularizer=slim.l2_regularizer(0.005),
                               scope='fc')

with tf.variable_scope('embedding'):
    self.w_embed = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0), name='w_embed')
    embed = tf.nn.embedding_lookup(self.w_embed, self.x1)  # [bs, 57, 128]
    self.inputs = tf.nn.dropout(embed, self.dp)            # the second argument of tf.nn.dropout is the keep probability
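
Note that sen_vec in the fully-connected layer above is not defined in these notes; in a typical text-CNN it would come from convolving over the embedded sequence and max-pooling over time, roughly like the sketch below (hypothetical filter sizes, reusing self.inputs and embedding_size from the snippet above):

# self.inputs: [bs, 57, 128]; add a channel dimension for conv2d -> [bs, 57, 128, 1]
inputs_4d = tf.expand_dims(self.inputs, -1)
filter_size, num_filters = 3, 100  # hypothetical hyper-parameters
w_conv = tf.Variable(tf.truncated_normal([filter_size, embedding_size, 1, num_filters], stddev=0.1))
b_conv = tf.Variable(tf.zeros([num_filters]))
conv = tf.nn.relu(tf.nn.bias_add(
    tf.nn.conv2d(inputs_4d, w_conv, strides=[1, 1, 1, 1], padding='VALID'), b_conv))  # [bs, 55, 1, 100]
pooled = tf.nn.max_pool(conv, ksize=[1, 57 - filter_size + 1, 1, 1],
                        strides=[1, 1, 1, 1], padding='VALID')                        # [bs, 1, 1, 100]
sen_vec = tf.reshape(pooled, [-1, num_filters])                                       # [bs, 100]
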
lstmcell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size, forget_bias=0.2, state_is_tuple=True)
lstmcell = tf.nn.rnn_cell.DropoutWrapper(lstmcell, output_keep_prob=self.drop_rate)  # output_keep_prob is the probability of keeping an output, not of dropping it
lstmcell = tf.nn.rnn_cell.MultiRNNCell([lstmcell] * num_layer, state_is_tuple=True)  # note: newer TF versions require a separate cell instance per layer instead of reusing one cell object
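
To actually unroll the stacked cell over the embedded inputs, it would typically be handed to tf.nn.dynamic_rnn, roughly as follows (a sketch reusing self.inputs from the embedding snippet above; the names are not from the original notes):

# self.inputs: [bs, 57, 128]
outputs, final_state = tf.nn.dynamic_rnn(lstmcell, self.inputs, dtype=tf.float32)
last_output = outputs[:, -1, :]  # [bs, hidden_size], the output at the last time step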
 
posted @ 2016-12-04 11:00  simple_wxl