[Tensorflow] Cookbook - Object Classification based on CIFAR-10

Convolutional Neural Networks (CNNs) are responsible for the major breakthroughs in image recognition made in the past few years. In this chapter we will cover:

  • Implementing a Simpler CNN
  • Implementing an Advanced CNN
  • Retraining Existing CNN models
  • Applying Stylenet/Neural-Style
  • Implementing DeepDream

 

The CIFAR-10 and CIFAR-100 are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

Click here: https://www.cs.toronto.edu/~kriz/cifar.html

Visual dictionary

On the dataset's visual dictionary page, clicking a region of the map shows the images belonging to that part of the dictionary.

The site presents a visualization of all the nouns in the English language arranged by semantic meaning.

Each of the tiles in the mosaic is an arithmetic average of images relating to one of 53,464 nouns.

The images for each word were obtained using Google's Image Search and other engines. A total of 7,527,697 images were used, each tile being the average of 140 images.
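For intuition, each tile is just a per-pixel arithmetic mean over its stack of images; below is a minimal NumPy sketch of that averaging (the array here is randomly generated purely for illustration):

import numpy as np

# Hypothetical stack of 140 RGB images for one noun, shape (140, 32, 32, 3)
images = np.random.randint(0, 256, size=(140, 32, 32, 3), dtype=np.uint8)

# The tile is the per-pixel arithmetic mean across the stack
tile = images.astype(np.float32).mean(axis=0).astype(np.uint8)
print(tile.shape)  # (32, 32, 3)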

That is roughly the idea.

 

The CIFAR-10 dataset

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class.

There are 50000 training images and 10000 test images.

Here are the 10 classes in the dataset (the dataset page also shows 10 random images from each):

airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck


The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks.
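Since the binary CIFAR-10 files store each label as a single byte from 0 to 9 in the order listed above, turning a predicted index into a readable class name only needs a small lookup list (a trivial sketch, not part of the recipe's script):

cifar10_classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                   'dog', 'frog', 'horse', 'ship', 'truck']
predicted_index = 3                      # e.g. the argmax of the logits for one image
print(cifar10_classes[predicted_index])  # 'cat'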

This recipe is an adapted version of the official TensorFlow CIFAR-10 tutorial, which is
linked in the See also section at the end of this chapter. We have condensed the
tutorial into one script and will go through it line by line, explaining all of the
necessary code. We also revert some constants and parameters to the values from the
originally cited paper, and we will point these out at the appropriate steps below.

 

Preface: I ran one round of training on my quad-core i7 laptop; the CPU was pegged at full load, and I was afraid the machine would die on me.

Starting Training
Generation 50: Loss = 1.99133
Generation 100: Loss = 1.82077
Generation 150: Loss = 2.08245
Generation 200: Loss = 1.60232
Generation 250: Loss = 1.73661
Generation 300: Loss = 1.71986
Generation 350: Loss = 1.45252
Generation 400: Loss = 1.51505
Generation 450: Loss = 1.50190
Generation 500: Loss = 1.43989
 --- Test Accuracy = 49.22%.
Generation 550: Loss = 1.34899
Generation 600: Loss = 1.28325
Generation 650: Loss = 1.45376
Generation 700: Loss = 1.22179
Generation 750: Loss = 1.40790
Generation 800: Loss = 1.23635
Generation 850: Loss = 1.36577
Generation 900: Loss = 1.29193
Generation 950: Loss = 1.16195
Generation 1000: Loss = 1.30807
 --- Test Accuracy = 53.91%.
Generation 1050: Loss = 1.53120
Generation 1100: Loss = 1.19605
Generation 1150: Loss = 1.07220
Generation 1200: Loss = 1.03782
Generation 1250: Loss = 1.22976
Generation 1300: Loss = 0.96371
Generation 1350: Loss = 1.06199
Generation 1400: Loss = 1.04158
Generation 1450: Loss = 1.09863
Generation 1500: Loss = 1.00462
 --- Test Accuracy = 61.72%.
Generation 1550: Loss = 0.93589
Generation 1600: Loss = 0.94716
Generation 1650: Loss = 0.97767
Generation 1700: Loss = 0.89214
Generation 1750: Loss = 0.93194
Generation 1800: Loss = 0.78864
Generation 1850: Loss = 0.79083
Generation 1900: Loss = 1.16496
Generation 1950: Loss = 0.95690
Generation 2000: Loss = 0.71276
 --- Test Accuracy = 64.84%.
Generation 2050: Loss = 0.90579
Generation 2100: Loss = 0.82735
Generation 2150: Loss = 0.89798
Generation 2200: Loss = 0.90343
Generation 2250: Loss = 0.83713
Generation 2300: Loss = 0.85635
Generation 2350: Loss = 0.83437
Generation 2400: Loss = 0.83430
Generation 2450: Loss = 0.87104
Generation 2500: Loss = 0.84299
 --- Test Accuracy = 74.22%.
Generation 2550: Loss = 0.79991
Generation 2600: Loss = 0.80746
Generation 2650: Loss = 0.82027
Generation 2700: Loss = 0.84372
Generation 2750: Loss = 0.81977
Generation 2800: Loss = 0.77057
Generation 2850: Loss = 0.75629
Generation 2900: Loss = 0.82681
Generation 2950: Loss = 0.88289
Generation 3000: Loss = 0.94536
 --- Test Accuracy = 71.09%.
Generation 3050: Loss = 0.76870
Generation 3100: Loss = 0.80715
Generation 3150: Loss = 0.80056
Generation 3200: Loss = 0.78387
Generation 3250: Loss = 0.59328
Generation 3300: Loss = 0.84897
Generation 3350: Loss = 0.67461
Generation 3400: Loss = 0.64628
Generation 3450: Loss = 0.64160
Generation 3500: Loss = 0.63691
 --- Test Accuracy = 70.31%.
Generation 3550: Loss = 0.63177
Generation 3600: Loss = 0.74349
Generation 3650: Loss = 0.64307
Generation 3700: Loss = 0.61021
Generation 3750: Loss = 0.64688
Generation 3800: Loss = 0.63159
Generation 3850: Loss = 0.78472
Generation 3900: Loss = 0.75076
Generation 3950: Loss = 0.53717
Generation 4000: Loss = 0.46514
 --- Test Accuracy = 65.62%.
Generation 4050: Loss = 0.68460
Generation 4100: Loss = 0.58425
Generation 4150: Loss = 0.47215
Generation 4200: Loss = 0.58976
Generation 4250: Loss = 0.64681
Generation 4300: Loss = 0.77239
Generation 4350: Loss = 0.58956
Generation 4400: Loss = 0.70569
Generation 4450: Loss = 0.66185
Generation 4500: Loss = 0.46662
 --- Test Accuracy = 76.56%.
Generation 4550: Loss = 0.49475
Generation 4600: Loss = 0.54739
Generation 4650: Loss = 0.52838
Generation 4700: Loss = 0.81228
Generation 4750: Loss = 0.49100
Generation 4800: Loss = 0.51341
Generation 4850: Loss = 0.47875
Generation 4900: Loss = 0.37848
Generation 4950: Loss = 0.52750
Generation 5000: Loss = 0.53570
 --- Test Accuracy = 63.28%.
Generation 5050: Loss = 0.63138
Generation 5100: Loss = 0.49153
Generation 5150: Loss = 0.54037
Generation 5200: Loss = 0.72630
Generation 5250: Loss = 0.44166
Generation 5300: Loss = 0.51812
Generation 5350: Loss = 0.51912
Generation 5400: Loss = 0.54622
Generation 5450: Loss = 0.41648
Generation 5500: Loss = 0.57976
 --- Test Accuracy = 71.88%.
Generation 5550: Loss = 0.55666
Generation 5600: Loss = 0.44564
Generation 5650: Loss = 0.46812
... (training not finished; I was worried the machine would crash, so I stopped here)
Partial log

 

1. Initialization.

# More Advanced CNN Model: CIFAR-10
#---------------------------------------
#
# In this example, we will download the CIFAR-10 images
# and build a CNN model with dropout and regularization
#
# CIFAR is composed of 50k train and 10k test
# images that are 32x32.

import os
import tarfile
import matplotlib.pyplot as plt
import tensorflow as tf
from six.moves import urllib
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Change Directory
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)

# Start a graph session
sess = tf.Session()

2. Set the model parameters.

# Set model parameters
batch_size   = 128
data_dir     = 'temp'
output_every = 50
generations  = 20000
eval_every   = 500
image_height = 32
image_width  = 32
crop_height  = 24
crop_width   = 24
num_channels = 3
num_targets  = 10
extract_folder = 'cifar-10-batches-bin'

# Exponential Learning Rate Decay Params
learning_rate    = 0.1
lr_decay         = 0.1
num_gens_to_wait = 250.

# Extract model parameters
image_vec_length = image_height * image_width * num_channels
record_length    = 1 + image_vec_length # ( + 1 for the 0-9 label)
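As a quick sanity check on these parameters: each binary record is 1 label byte plus 32 * 32 * 3 = 3072 pixel bytes, so record_length is 3073, and with staircase decay the learning rate drops by a factor of 10 every 250 generations. The short sketch below only illustrates the formula used by tf.train.exponential_decay with staircase=True:

# lr = learning_rate * lr_decay ** floor(step / num_gens_to_wait)
for step in [0, 250, 500, 1000]:
    lr = 0.1 * (0.1 ** (step // 250))
    print(step, lr)   # 0 -> 0.1, 250 -> 0.01, 500 -> 0.001, 1000 -> 1e-05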

3. Create folder then load data. 

# Create folder then load data
data_dir = 'temp'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
cifar10_url = 'http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz'


# Check if file exists, otherwise download it
data_file = os.path.join(data_dir, 'cifar-10-binary.tar.gz')
if os.path.isfile(data_file):
    pass
else:
    # Download file
    def progress(block_num, block_size, total_size):
        progress_info = [cifar10_url, float(block_num * block_size) / float(total_size) * 100.0]
        print('\r Downloading {} - {:.2f}%'.format(*progress_info), end="")
    filepath, _ = urllib.request.urlretrieve(cifar10_url, data_file, progress)
    # Extract file
    tarfile.open(filepath, 'r:gz').extractall(data_dir)

4. Get data then declare model.

# Get data
print('Getting/Transforming Data.')
# Initialize the data pipeline
images, targets = input_pipeline(batch_size, train_logical=True)  # --> see 4.1
# Get batch test images and targets from pipeline
test_images, test_targets = input_pipeline(batch_size, train_logical=False)

# Declare Model
print('Creating the CIFAR10 Model.')
with tf.variable_scope('model_definition') as scope:
    # Declare the training network model
    model_output = cifar_cnn_model(images, batch_size)
    # This is very important!!!  We must set the scope to REUSE the variables,
    #  otherwise, when we set the test network model, it will create new random
    #  variables.  Otherwise we get random evaluations on the test batches.
    scope.reuse_variables()
    test_output = cifar_cnn_model(test_images, batch_size)

4.1 Data loading pipeline - input_pipeline

# Create a CIFAR image pipeline from reader
def input_pipeline(batch_size, train_logical=True):
    if train_logical:
        files = [os.path.join(data_dir, extract_folder, 'data_batch_{}.bin'.format(i)) for i in range(1,6)]
    else:
        files = [os.path.join(data_dir, extract_folder, 'test_batch.bin')]
    filename_queue = tf.train.string_input_producer(files)
    image, label = read_cifar_files(filename_queue)  # --> see 4.2

    # min_after_dequeue defines how big a buffer we will randomly sample
    #   from -- bigger means better shuffling but slower start up and more
    #   memory used.
    # capacity must be larger than min_after_dequeue and the amount larger
    #   determines the maximum we will prefetch.  Recommendation:
    #   min_after_dequeue + (num_threads + a small safety margin) * batch_size
    min_after_dequeue = 5000
    capacity = min_after_dequeue + 3 * batch_size
    example_batch, label_batch = tf.train.shuffle_batch([image, label],
                                                        batch_size=batch_size,
                                                        capacity=capacity,
                                                        min_after_dequeue=min_after_dequeue)
    return(example_batch, label_batch)
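Because this pipeline is built from TF 1.x queue ops, no data flows until queue runners are started. A minimal usage sketch, assuming the definitions above and an existing session (purely illustrative, not part of the recipe's script):

example_batch, label_batch = input_pipeline(batch_size, train_logical=True)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
imgs, labs = sess.run([example_batch, label_batch])
print(imgs.shape, labs.shape)   # (128, 24, 24, 3) (128, 1)
coord.request_stop()
coord.join(threads)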

4.2 Image preprocessing - read_cifar_files

This function applies some preprocessing to the raw image samples.

# Define CIFAR reader
def read_cifar_files(filename_queue, distort_images = True):
    reader = tf.FixedLengthRecordReader(record_bytes=record_length)
    key, record_string = reader.read(filename_queue)
    record_bytes = tf.decode_raw(record_string, tf.uint8)
    image_label = tf.cast(tf.slice(record_bytes, [0], [1]), tf.int32)

    # Extract image
    image_extracted = tf.reshape(tf.slice(record_bytes, [1], [image_vec_length]),
                                 [num_channels, image_height, image_width])

    # Reshape image
    image_uint8image = tf.transpose(image_extracted, [1, 2, 0])
    reshaped_image = tf.cast(image_uint8image, tf.float32)
    # Randomly Crop image
    final_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, crop_width, crop_height)

    # Some preprocessing / distortion steps
    if distort_images:
        # Randomly flip the image horizontally, change the brightness and contrast
        final_image = tf.image.random_flip_left_right(final_image)
        final_image = tf.image.random_brightness(final_image, max_delta=63)
        final_image = tf.image.random_contrast(final_image, lower=0.2, upper=1.8)

    # Normalize whitening
    final_image = tf.image.per_image_standardization(final_image)
    return(final_image, image_label)
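For reference, tf.image.per_image_standardization rescales each image to zero mean and roughly unit variance, clamping the standard deviation so that constant images do not cause a division by zero. A rough NumPy equivalent (a sketch of the idea, not the exact TF kernel):

import numpy as np

def per_image_standardization_np(image):
    # image: float array of shape (height, width, channels)
    num_elements = image.size
    mean = image.mean()
    adjusted_stddev = max(image.std(), 1.0 / np.sqrt(num_elements))
    return (image - mean) / adjusted_stddev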

4.3 Define the model

Next, we can declare our model function.

The model we will use has 2 convolutional layers, followed by 3 fully connected layers.

To make variable declaration easier, we'll start by declaring two helper variable functions. (With the original 32x32 inputs, the feature maps are reduced to 8x8 after the two pooling layers; with the 24x24 crops used here they end up 6x6.)

  • The two convolutional layers will create 64 features each.
  • The 1st fully connected layer will connect the 2nd convolutional layer with 384 hidden nodes.
  • The 2nd fully connected operation will connect those 384 hidden nodes to 192 hidden nodes.   
  • The final hidden layer operation will then connect the 192 nodes to the 10 output classes we are trying to predict.

[input --> 1st conv + pooling --> 2nd conv + pooling --> fully connected (384) --> fully connected (192) --> fully connected (10 outputs)]
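With the 24x24 crops used in this script, each stride-2 max pool with SAME padding halves the spatial size, so the tensor entering the first fully connected layer has 6 * 6 * 64 = 2304 values per image. The snippet below just walks through that arithmetic:

import math

size = 24                       # cropped input height/width
for pool in ['pool_layer1', 'pool_layer2']:
    size = math.ceil(size / 2)  # stride-2 max pool with SAME padding
print(size * size * 64)         # 2304 inputs to the first fully connected layer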

# Define the model architecture, this will return logits from images
def cifar_cnn_model(input_images, batch_size, train_logical=True):
    def truncated_normal_var(name, shape, dtype):
        return(tf.get_variable(name=name, shape=shape, dtype=dtype, initializer=tf.truncated_normal_initializer(stddev=0.05)))
    def zero_var(name, shape, dtype):
        return(tf.get_variable(name=name, shape=shape, dtype=dtype, initializer=tf.constant_initializer(0.0)))

    # First Convolutional Layer
    with tf.variable_scope('conv1') as scope:
        # Conv_kernel is 5x5 for all 3 colors and we will create 64 features
        conv1_kernel = truncated_normal_var(name='conv_kernel1', shape=[5, 5, 3, 64], dtype=tf.float32)
        # We convolve across the image with a stride size of 1
        conv1 = tf.nn.conv2d(input_images, conv1_kernel, [1, 1, 1, 1], padding='SAME')
        # Initialize and add the bias term
        conv1_bias = zero_var(name='conv_bias1', shape=[64], dtype=tf.float32)
        conv1_add_bias = tf.nn.bias_add(conv1, conv1_bias)
        # ReLU element wise
        relu_conv1 = tf.nn.relu(conv1_add_bias)

    # Max Pooling
    pool1 = tf.nn.max_pool(relu_conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool_layer1')

    # Local Response Normalization (parameters from paper)
    # paper: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks
    norm1 = tf.nn.lrn(pool1, depth_radius=5, bias=2.0, alpha=1e-3, beta=0.75, name='norm1')

    # Second Convolutional Layer
    with tf.variable_scope('conv2') as scope:
        # Conv kernel is 5x5, across all prior 64 features and we create 64 more features
        conv2_kernel = truncated_normal_var(name='conv_kernel2', shape=[5, 5, 64, 64], dtype=tf.float32)
        # Convolve filter across prior output with stride size of 1
        conv2 = tf.nn.conv2d(norm1, conv2_kernel, [1, 1, 1, 1], padding='SAME')
        # Initialize and add the bias
        conv2_bias = zero_var(name='conv_bias2', shape=[64], dtype=tf.float32)
        conv2_add_bias = tf.nn.bias_add(conv2, conv2_bias)
        # ReLU element wise
        relu_conv2 = tf.nn.relu(conv2_add_bias)

    # Max Pooling
    pool2 = tf.nn.max_pool(relu_conv2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool_layer2')

    # Local Response Normalization (parameters from paper)
    norm2 = tf.nn.lrn(pool2, depth_radius=5, bias=2.0, alpha=1e-3, beta=0.75, name='norm2')

    # Reshape output into a single matrix for multiplication for the fully connected layers
    reshaped_output = tf.reshape(norm2, [batch_size, -1])
    reshaped_dim = reshaped_output.get_shape()[1].value

    # First Fully Connected Layer
    with tf.variable_scope('full1') as scope:
        # Fully connected layer will have 384 outputs.
        full_weight1 = truncated_normal_var(name='full_mult1', shape=[reshaped_dim, 384], dtype=tf.float32)
        full_bias1 = zero_var(name='full_bias1', shape=[384], dtype=tf.float32)
        full_layer1 = tf.nn.relu(tf.add(tf.matmul(reshaped_output, full_weight1), full_bias1))

    # Second Fully Connected Layer
    with tf.variable_scope('full2') as scope:
        # Second fully connected layer has 192 outputs.
        full_weight2 = truncated_normal_var(name='full_mult2', shape=[384, 192], dtype=tf.float32)
        full_bias2 = zero_var(name='full_bias2', shape=[192], dtype=tf.float32)
        full_layer2 = tf.nn.relu(tf.add(tf.matmul(full_layer1, full_weight2), full_bias2))

    # Final Fully Connected Layer -> 10 categories for output (num_targets)
    with tf.variable_scope('full3') as scope:
        # Final fully connected layer has 10 (num_targets) outputs.
        full_weight3 = truncated_normal_var(name='full_mult3', shape=[192, num_targets], dtype=tf.float32)
        full_bias3 = zero_var(name='full_bias3', shape=[num_targets], dtype=tf.float32)
        final_output = tf.add(tf.matmul(full_layer2, full_weight3), full_bias3)

    return(final_output)

5. Loss and accuracy.

# Loss function
def cifar_loss(logits, targets):
    # Get rid of extra dimensions and cast targets into integers
    targets = tf.squeeze(tf.cast(targets, tf.int32))
    # Calculate cross entropy from logits and targets
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets)
    # Take the average loss across batch size
    cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
    return(cross_entropy_mean)


# Accuracy function
def accuracy_of_batch(logits, targets):
    # Make sure targets are integers and drop extra dimensions
    targets = tf.squeeze(tf.cast(targets, tf.int32))
    # Get predicted values by finding which logit is the greatest
    batch_predictions = tf.cast(tf.argmax(logits, 1), tf.int32)
    # Check if they are equal across the batch
    predicted_correctly = tf.equal(batch_predictions, targets)
    # Average the 1's and 0's (True's and False's) across the batch size
    accuracy = tf.reduce_mean(tf.cast(predicted_correctly, tf.float32))
    return(accuracy)


# Train step
def train_step(loss_value, generation_num):
    # Our learning rate is an exponential decay after we wait a fair number of generations
    model_learning_rate = tf.train.exponential_decay(learning_rate, generation_num,
                                                     num_gens_to_wait, lr_decay, staircase=True)
    # Create optimizer
    my_optimizer = tf.train.GradientDescentOptimizer(model_learning_rate)
    # Initialize train step
    # Pass generation_num as global_step so the decayed learning rate actually advances
    train_step = my_optimizer.minimize(loss_value, global_step=generation_num)
    return(train_step)


# Declare loss function
print('Declare Loss Function.')
loss = cifar_loss(model_output, targets)

# Create accuracy function
accuracy = accuracy_of_batch(test_output, test_targets)

# Create training operations
print('Creating the Training Operation.')
generation_num = tf.Variable(0, trainable=False)
train_op = train_step(loss, generation_num)

# Initialize Variables
print('Initializing the Variables.')
init = tf.initialize_all_variables()
sess.run(init)

# Initialize queue (This queue will feed into the model, so no placeholders necessary)
tf.train.start_queue_runners(sess=sess)
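Before moving on to the training loop, a quick sanity check on what cifar_loss computes: sparse softmax cross-entropy for a single example is simply the negative log of the softmax probability assigned to the true class. A tiny NumPy sketch with made-up numbers:

import numpy as np

logits = np.array([2.0, 0.5, -1.0])   # hypothetical scores for 3 classes
target = 0                            # true class index
probs = np.exp(logits) / np.exp(logits).sum()
print(-np.log(probs[target]))         # ~0.24, the per-example value the TF op returns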

6. Training.

# Train CIFAR Model
print('Starting Training')
train_loss = []
test_accuracy = []
for i in range(generations):
    _, loss_value = sess.run([train_op, loss])
    
    if (i+1) % output_every == 0:
        train_loss.append(loss_value)
        output = 'Generation {}: Loss = {:.5f}'.format((i+1), loss_value)
        print(output)
    
    if (i+1) % eval_every == 0:
        [temp_accuracy] = sess.run([accuracy])
        test_accuracy.append(temp_accuracy)
        acc_output = ' --- Test Accuracy = {:.2f}%.'.format(100.*temp_accuracy)
        print(acc_output)

# Print loss and accuracy
# Matplotlib code to plot the loss and accuracies
eval_indices   = range(0, generations, eval_every)
output_indices = range(0, generations, output_every)

# Plot loss over time
plt.plot(output_indices, train_loss, 'k-')
plt.title('Softmax Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Softmax Loss')
plt.show()

# Plot accuracy over time
plt.plot(eval_indices, test_accuracy, 'k-')
plt.title('Test Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.show()

 

# More Advanced CNN Model: CIFAR-10
#---------------------------------------
#
# In this example, we will download the CIFAR-10 images
# and build a CNN model with dropout and regularization
#
# CIFAR is composed of 50k train and 10k test
# images that are 32x32.

import os
import tarfile
import matplotlib.pyplot as plt
import tensorflow as tf
from six.moves import urllib
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Change Directory
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)

# Start a graph session
sess = tf.Session()

# Set model parameters
batch_size   = 128
data_dir     = 'temp'
output_every = 50
generations  = 20000
eval_every   = 500
image_height = 32
image_width  = 32
crop_height  = 24
crop_width   = 24
num_channels = 3
num_targets  = 10
extract_folder = 'cifar-10-batches-bin'

# Exponential Learning Rate Decay Params
learning_rate    = 0.1
lr_decay         = 0.1
num_gens_to_wait = 250.

# Extract model parameters
image_vec_length = image_height * image_width * num_channels
record_length    = 1 + image_vec_length # ( + 1 for the 0-9 label)

# Create folder then load data
data_dir = 'temp'
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
cifar10_url = 'http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz'


# Check if file exists, otherwise download it
data_file = os.path.join(data_dir, 'cifar-10-binary.tar.gz')
if os.path.isfile(data_file):
    pass
else:
    # Download file
    def progress(block_num, block_size, total_size):
        progress_info = [cifar10_url, float(block_num * block_size) / float(total_size) * 100.0]
        print('\r Downloading {} - {:.2f}%'.format(*progress_info), end="")
    filepath, _ = urllib.request.urlretrieve(cifar10_url, data_file, progress)
    # Extract file
    tarfile.open(filepath, 'r:gz').extractall(data_dir)
    

# Define CIFAR reader
def read_cifar_files(filename_queue, distort_images = True):
    reader = tf.FixedLengthRecordReader(record_bytes=record_length)
    key, record_string = reader.read(filename_queue)
    record_bytes = tf.decode_raw(record_string, tf.uint8)
    image_label  = tf.cast(tf.slice(record_bytes, [0], [1]), tf.int32)
  
    # Extract image
    image_extracted = tf.reshape(tf.slice(record_bytes, [1], [image_vec_length]),
                                 [num_channels, image_height, image_width])
    
    # Reshape image
    image_uint8image = tf.transpose(image_extracted, [1, 2, 0])
    reshaped_image   = tf.cast(image_uint8image, tf.float32)
    # Randomly Crop image
    final_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, crop_width, crop_height)
    
    if distort_images:
        # Randomly flip the image horizontally, change the brightness and contrast
        final_image = tf.image.random_flip_left_right(final_image)
        final_image = tf.image.random_brightness(final_image,max_delta=63)
        final_image = tf.image.random_contrast(final_image,lower=0.2, upper=1.8)

    # Normalize whitening
    final_image = tf.image.per_image_standardization(final_image)
    return(final_image, image_label)


# Create a CIFAR image pipeline from reader
def input_pipeline(batch_size, train_logical=True):
    if train_logical:
        files = [os.path.join(data_dir, extract_folder, 'data_batch_{}.bin'.format(i)) for i in range(1,6)]
    else:
        files = [os.path.join(data_dir, extract_folder, 'test_batch.bin')]
                 
    filename_queue = tf.train.string_input_producer(files)
    image, label   = read_cifar_files(filename_queue)
    
    # min_after_dequeue defines how big a buffer we will randomly sample
    #   from -- bigger means better shuffling but slower start up and more
    #   memory used.
    # capacity must be larger than min_after_dequeue and the amount larger
    #   determines the maximum we will prefetch.  Recommendation:
    #   min_after_dequeue + (num_threads + a small safety margin) * batch_size
    min_after_dequeue = 5000
    capacity = min_after_dequeue + 3 * batch_size
    example_batch, label_batch = tf.train.shuffle_batch([image, label],
                                                        batch_size=batch_size,
                                                        capacity=capacity,
                                                        min_after_dequeue=min_after_dequeue)

    return(example_batch, label_batch)

    
# Define the model architecture, this will return logits from images
def cifar_cnn_model(input_images, batch_size, train_logical=True):
    def truncated_normal_var(name, shape, dtype):
        return(tf.get_variable(name=name, shape=shape, dtype=dtype, initializer=tf.truncated_normal_initializer(stddev=0.05)))
    def zero_var(name, shape, dtype):
        return(tf.get_variable(name=name, shape=shape, dtype=dtype, initializer=tf.constant_initializer(0.0)))
    
    # First Convolutional Layer
    with tf.variable_scope('conv1') as scope:
        # Conv_kernel is 5x5 for all 3 colors and we will create 64 features
        conv1_kernel = truncated_normal_var(name='conv_kernel1', shape=[5, 5, 3, 64], dtype=tf.float32)
        # We convolve across the image with a stride size of 1
        conv1 = tf.nn.conv2d(input_images, conv1_kernel, [1, 1, 1, 1], padding='SAME')
        # Initialize and add the bias term
        conv1_bias = zero_var(name='conv_bias1', shape=[64], dtype=tf.float32)
        conv1_add_bias = tf.nn.bias_add(conv1, conv1_bias)
        # ReLU element wise
        relu_conv1 = tf.nn.relu(conv1_add_bias)
    
    # Max Pooling
    pool1 = tf.nn.max_pool(relu_conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1],padding='SAME', name='pool_layer1')
    
    # Local Response Normalization (parameters from paper)
    # paper: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks
    norm1 = tf.nn.lrn(pool1, depth_radius=5, bias=2.0, alpha=1e-3, beta=0.75, name='norm1')

    # Second Convolutional Layer
    with tf.variable_scope('conv2') as scope:
        # Conv kernel is 5x5, across all prior 64 features and we create 64 more features
        conv2_kernel = truncated_normal_var(name='conv_kernel2', shape=[5, 5, 64, 64], dtype=tf.float32)
        # Convolve filter across prior output with stride size of 1
        conv2 = tf.nn.conv2d(norm1, conv2_kernel, [1, 1, 1, 1], padding='SAME')
        # Initialize and add the bias
        conv2_bias = zero_var(name='conv_bias2', shape=[64], dtype=tf.float32)
        conv2_add_bias = tf.nn.bias_add(conv2, conv2_bias)
        # ReLU element wise
        relu_conv2 = tf.nn.relu(conv2_add_bias)
    
    # Max Pooling
    pool2 = tf.nn.max_pool(relu_conv2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool_layer2')    
    
     # Local Response Normalization (parameters from paper)
    norm2 = tf.nn.lrn(pool2, depth_radius=5, bias=2.0, alpha=1e-3, beta=0.75, name='norm2')
    
    # Reshape output into a single matrix for multiplication for the fully connected layers
    reshaped_output = tf.reshape(norm2, [batch_size, -1])
    reshaped_dim = reshaped_output.get_shape()[1].value
    
    # First Fully Connected Layer
    with tf.variable_scope('full1') as scope:
        # Fully connected layer will have 384 outputs.
        full_weight1 = truncated_normal_var(name='full_mult1', shape=[reshaped_dim, 384], dtype=tf.float32)
        full_bias1 = zero_var(name='full_bias1', shape=[384], dtype=tf.float32)
        full_layer1 = tf.nn.relu(tf.add(tf.matmul(reshaped_output, full_weight1), full_bias1))

    # Second Fully Connected Layer
    with tf.variable_scope('full2') as scope:
        # Second fully connected layer has 192 outputs.
        full_weight2 = truncated_normal_var(name='full_mult2', shape=[384, 192], dtype=tf.float32)
        full_bias2 = zero_var(name='full_bias2', shape=[192], dtype=tf.float32)
        full_layer2 = tf.nn.relu(tf.add(tf.matmul(full_layer1, full_weight2), full_bias2))

    # Final Fully Connected Layer -> 10 categories for output (num_targets)
    with tf.variable_scope('full3') as scope:
        # Final fully connected layer has 10 (num_targets) outputs.
        full_weight3 = truncated_normal_var(name='full_mult3', shape=[192, num_targets], dtype=tf.float32)
        full_bias3 =  zero_var(name='full_bias3', shape=[num_targets], dtype=tf.float32)
        final_output = tf.add(tf.matmul(full_layer2, full_weight3), full_bias3)
        
    return(final_output)


# Loss function
def cifar_loss(logits, targets):
    # Get rid of extra dimensions and cast targets into integers
    targets = tf.squeeze(tf.cast(targets, tf.int32))
    # Calculate cross entropy from logits and targets
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=targets)
    # Take the average loss across batch size
    cross_entropy_mean = tf.reduce_mean(cross_entropy, name='cross_entropy')
    return(cross_entropy_mean)


# Train step
def train_step(loss_value, generation_num):
    # Our learning rate is an exponential decay after we wait a fair number of generations
    model_learning_rate = tf.train.exponential_decay(learning_rate, generation_num,
                                                     num_gens_to_wait, lr_decay, staircase=True)
    # Create optimizer
    my_optimizer = tf.train.GradientDescentOptimizer(model_learning_rate)
    # Initialize train step
    # Pass generation_num as global_step so the decayed learning rate actually advances
    train_step = my_optimizer.minimize(loss_value, global_step=generation_num)
    return(train_step)


# Accuracy function
def accuracy_of_batch(logits, targets):
    # Make sure targets are integers and drop extra dimensions
    targets = tf.squeeze(tf.cast(targets, tf.int32))
    # Get predicted values by finding which logit is the greatest
    batch_predictions = tf.cast(tf.argmax(logits, 1), tf.int32)
    # Check if they are equal across the batch
    predicted_correctly = tf.equal(batch_predictions, targets)
    # Average the 1's and 0's (True's and False's) across the batch size
    accuracy = tf.reduce_mean(tf.cast(predicted_correctly, tf.float32))
    return(accuracy)

# Get data
print('Getting/Transforming Data.')
# Initialize the data pipeline
images, targets = input_pipeline(batch_size, train_logical=True)
# Get batch test images and targets from pipeline
test_images, test_targets = input_pipeline(batch_size, train_logical=False)

# Declare Model
print('Creating the CIFAR10 Model.')
with tf.variable_scope('model_definition') as scope:
    # Declare the training network model
    model_output = cifar_cnn_model(images, batch_size)
    # This is very important!!!  We must set the scope to REUSE the variables,
    #  otherwise, when we set the test network model, it will create new random
    #  variables.  Otherwise we get random evaluations on the test batches.
    scope.reuse_variables()
    test_output = cifar_cnn_model(test_images, batch_size)

# Declare loss function
print('Declare Loss Function.')
loss = cifar_loss(model_output, targets)

# Create accuracy function
accuracy = accuracy_of_batch(test_output, test_targets)

# Create training operations
print('Creating the Training Operation.')
generation_num = tf.Variable(0, trainable=False)
train_op = train_step(loss, generation_num)

# Initialize Variables
print('Initializing the Variables.')
init = tf.initialize_all_variables()
sess.run(init)

# Initialize queue (This queue will feed into the model, so no placeholders necessary)
tf.train.start_queue_runners(sess=sess)

# Train CIFAR Model
print('Starting Training')
train_loss = []
test_accuracy = []
for i in range(generations):
    _, loss_value = sess.run([train_op, loss])
    
    if (i+1) % output_every == 0:
        train_loss.append(loss_value)
        output = 'Generation {}: Loss = {:.5f}'.format((i+1), loss_value)
        print(output)
    
    if (i+1) % eval_every == 0:
        [temp_accuracy] = sess.run([accuracy])
        test_accuracy.append(temp_accuracy)
        acc_output = ' --- Test Accuracy = {:.2f}%.'.format(100.*temp_accuracy)
        print(acc_output)

# Print loss and accuracy
# Matplotlib code to plot the loss and accuracies
eval_indices = range(0, generations, eval_every)
output_indices = range(0, generations, output_every)

# Plot loss over time
plt.plot(output_indices, train_loss, 'k-')
plt.title('Softmax Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Softmax Loss')
plt.show()

# Plot accuracy over time
plt.plot(eval_indices, test_accuracy, 'k-')
plt.title('Test Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.show()
Complete code
# student @ user-ubuntu in ~/work/jeff/cnn_cifar [3:43:05] 
$ python cnn_cifar10.py

2017-07-27 03:43:18.521404: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-27 03:43:18.521432: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-27 03:43:18.521439: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-07-27 03:43:18.521461: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-27 03:43:18.521482: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-07-27 03:43:18.971638: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.582
pciBusID 0000:82:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2017-07-27 03:43:19.404592: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x2b7d1e0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2017-07-27 03:43:19.406002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 1 with properties: 
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.582
pciBusID 0000:83:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2017-07-27 03:43:19.407105: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 1 
2017-07-27 03:43:19.407122: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y Y 
2017-07-27 03:43:19.407127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1:   Y Y 
2017-07-27 03:43:19.407143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0)
2017-07-27 03:43:19.407150: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:83:00.0)
Getting/Transforming Data.
Creating the CIFAR10 Model.
Declare Loss Function.
Creating the Training Operation.
Initializing the Variables.
WARNING:tensorflow:From /home/sean/virtualenv/tensorflow-py3.5/lib/python3.5/site-packages/tensorflow/python/util/tf_should_use.py:170: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
Starting Training

Generation 50 : Loss = 1.98874
Generation 100: Loss = 1.89331
Generation 150: Loss = 2.14434
Generation 200: Loss = 1.62269
Generation 250: Loss = 1.73568
Generation 300: Loss = 1.67207
Generation 350: Loss = 1.47158
Generation 400: Loss = 1.70603
Generation 450: Loss = 1.70473
Generation 500: Loss = 1.54680
 --- Test Accuracy = 50.00%.
Generation 550: Loss = 1.47264
Generation 600: Loss = 1.33211
Generation 650: Loss = 1.28465
Generation 700: Loss = 1.47795
Generation 750: Loss = 1.54336
Generation 800: Loss = 1.32847
Generation 850: Loss = 1.30343
Generation 900: Loss = 1.21732
Generation 950: Loss = 1.38395
Generation 1000: Loss = 1.18652
 --- Test Accuracy = 58.59%.
Generation 1050: Loss = 1.18199
Generation 1100: Loss = 1.26724
Generation 1150: Loss = 1.18451
Generation 1200: Loss = 1.38382
Generation 1250: Loss = 1.10484
Generation 1300: Loss = 1.22248
Generation 1350: Loss = 1.03363
Generation 1400: Loss = 1.19343
Generation 1450: Loss = 1.21833
Generation 1500: Loss = 1.28556
 --- Test Accuracy = 64.06%.
Generation 1550: Loss = 1.05083
Generation 1600: Loss = 1.12365
Generation 1650: Loss = 0.98373
Generation 1700: Loss = 0.98653
Generation 1750: Loss = 0.89979
Generation 1800: Loss = 1.15915
Generation 1850: Loss = 0.97605
Generation 1900: Loss = 0.98232
Generation 1950: Loss = 0.93118
Generation 2000: Loss = 0.84955
 --- Test Accuracy = 71.88%.
Generation 2050: Loss = 1.01471
Generation 2100: Loss = 1.03468
Generation 2150: Loss = 1.03076
Generation 2200: Loss = 1.12238
Generation 2250: Loss = 0.72818
Generation 2300: Loss = 0.92140
Generation 2350: Loss = 0.96073
Generation 2400: Loss = 0.90507
Generation 2450: Loss = 0.84059
Generation 2500: Loss = 1.00707
 --- Test Accuracy = 64.84%.
Generation 2550: Loss = 0.77818
Generation 2600: Loss = 0.91769
Generation 2650: Loss = 0.83464
Generation 2700: Loss = 0.82919
Generation 2750: Loss = 0.71690
Generation 2800: Loss = 0.69588
Generation 2850: Loss = 0.81282
Generation 2900: Loss = 0.86472
Generation 2950: Loss = 0.68893
Generation 3000: Loss = 0.68487
 --- Test Accuracy = 66.41%.
Generation 3050: Loss = 0.77830
Generation 3100: Loss = 0.98387
Generation 3150: Loss = 0.67766
Generation 3200: Loss = 0.83822
Generation 3250: Loss = 0.72022
Generation 3300: Loss = 0.61615
Generation 3350: Loss = 0.80152
Generation 3400: Loss = 0.61803
Generation 3450: Loss = 0.69104
Generation 3500: Loss = 0.86474
 --- Test Accuracy = 71.09%.
Generation 3550: Loss = 0.56136
Generation 3600: Loss = 0.83764
Generation 3650: Loss = 0.75091
Generation 3700: Loss = 0.57823
Generation 3750: Loss = 0.52850
Generation 3800: Loss = 0.53191
Generation 3850: Loss = 0.65577
Generation 3900: Loss = 0.70614
Generation 3950: Loss = 0.57539
Generation 4000: Loss = 0.61946
 --- Test Accuracy = 71.09%.
Generation 4050: Loss = 0.50912
Generation 4100: Loss = 0.59709
Generation 4150: Loss = 0.63275
Generation 4200: Loss = 0.73160
Generation 4250: Loss = 0.69023
Generation 4300: Loss = 0.68340
Generation 4350: Loss = 0.54651
Generation 4400: Loss = 0.66809
Generation 4450: Loss = 0.54778
Generation 4500: Loss = 0.49987
 --- Test Accuracy = 66.41%.
Generation 4550: Loss = 0.52780
Generation 4600: Loss = 0.47527
Generation 4650: Loss = 0.56457
Generation 4700: Loss = 0.49000
Generation 4750: Loss = 0.62392
Generation 4800: Loss = 0.53709
Generation 4850: Loss = 0.46020
Generation 4900: Loss = 0.58521
Generation 4950: Loss = 0.52085
Generation 5000: Loss = 0.56563
 --- Test Accuracy = 76.56%.
Generation 5050: Loss = 0.62101
Generation 5100: Loss = 0.68806
Generation 5150: Loss = 0.56646
Generation 5200: Loss = 0.55054
Generation 5250: Loss = 0.70789
Generation 5300: Loss = 0.48409
Generation 5350: Loss = 0.48703
Generation 5400: Loss = 0.53602
Generation 5450: Loss = 0.53750
Generation 5500: Loss = 0.44592
 --- Test Accuracy = 71.88%.
Generation 5550: Loss = 0.61261
Generation 5600: Loss = 0.57290
Generation 5650: Loss = 0.52776
Generation 5700: Loss = 0.49262
Generation 5750: Loss = 0.44058
Generation 5800: Loss = 0.62443
Generation 5850: Loss = 0.38249
Generation 5900: Loss = 0.39162
Generation 5950: Loss = 0.49900
Generation 6000: Loss = 0.60641
 --- Test Accuracy = 73.44%.
Generation 6050: Loss = 0.52156
Generation 6100: Loss = 0.50984
Generation 6150: Loss = 0.62414
Generation 6200: Loss = 0.56085
Generation 6250: Loss = 0.45930
Generation 6300: Loss = 0.41330
Generation 6350: Loss = 0.46615
Generation 6400: Loss = 0.48824
Generation 6450: Loss = 0.61569
Generation 6500: Loss = 0.54841
 --- Test Accuracy = 72.66%.
Generation 6550: Loss = 0.45108
Generation 6600: Loss = 0.36974
Generation 6650: Loss = 0.42269
Generation 6700: Loss = 0.31257
Generation 6750: Loss = 0.39991
Generation 6800: Loss = 0.34907
Generation 6850: Loss = 0.34459
Generation 6900: Loss = 0.39457
Generation 6950: Loss = 0.29138
Generation 7000: Loss = 0.40070
 --- Test Accuracy = 71.09%.
Generation 7050: Loss = 0.31275
Generation 7100: Loss = 0.37386
Generation 7150: Loss = 0.57231
Generation 7200: Loss = 0.33384
Generation 7250: Loss = 0.39317
Generation 7300: Loss = 0.27306
Generation 7350: Loss = 0.42451
Generation 7400: Loss = 0.44812
Generation 7450: Loss = 0.40212
Generation 7500: Loss = 0.33456
 --- Test Accuracy = 71.88%.
Generation 7550: Loss = 0.33886
Generation 7600: Loss = 0.35627
Generation 7650: Loss = 0.37291
Generation 7700: Loss = 0.36350
Generation 7750: Loss = 0.61906
Generation 7800: Loss = 0.44072
Generation 7850: Loss = 0.53827
Generation 7900: Loss = 0.40603
Generation 7950: Loss = 0.34712
Generation 8000: Loss = 0.37340
 --- Test Accuracy = 74.22%.
Generation 8050: Loss = 0.34767
Generation 8100: Loss = 0.27752
Generation 8150: Loss = 0.39700
Generation 8200: Loss = 0.32144
Generation 8250: Loss = 0.29297
Generation 8300: Loss = 0.25248
Generation 8350: Loss = 0.31232
Generation 8400: Loss = 0.37384
Generation 8450: Loss = 0.19988
Generation 8500: Loss = 0.35983
 --- Test Accuracy = 70.31%.
Generation 8550: Loss = 0.24563
Generation 8600: Loss = 0.26254
Generation 8650: Loss = 0.28202
Generation 8700: Loss = 0.38843
Generation 8750: Loss = 0.36233
Generation 8800: Loss = 0.18249
Generation 8850: Loss = 0.28049
Generation 8900: Loss = 0.21987
Generation 8950: Loss = 0.27884
Generation 9000: Loss = 0.26830
 --- Test Accuracy = 78.91%.
Generation 9050: Loss = 0.21410
Generation 9100: Loss = 0.18955
Generation 9150: Loss = 0.21886
Generation 9200: Loss = 0.35513
Generation 9250: Loss = 0.23994
Generation 9300: Loss = 0.30040
Generation 9350: Loss = 0.27230
Generation 9400: Loss = 0.24417
Generation 9450: Loss = 0.23737
Generation 9500: Loss = 0.24323
 --- Test Accuracy = 71.88%.
Generation 9550: Loss = 0.28200
Generation 9600: Loss = 0.37996
Generation 9650: Loss = 0.14036
Generation 9700: Loss = 0.25095
Generation 9750: Loss = 0.24847
Generation 9800: Loss = 0.31754
Generation 9850: Loss = 0.17151
Generation 9900: Loss = 0.19960
Generation 9950: Loss = 0.24201
Generation 10000: Loss = 0.25191
 --- Test Accuracy = 71.88%.
Generation 10050: Loss = 0.21244
Generation 10100: Loss = 0.31911
Generation 10150: Loss = 0.25067
Generation 10200: Loss = 0.17353
Generation 10250: Loss = 0.17035
Generation 10300: Loss = 0.20111
Generation 10350: Loss = 0.24000
Generation 10400: Loss = 0.28682
Generation 10450: Loss = 0.27803
Generation 10500: Loss = 0.22228
 --- Test Accuracy = 74.22%.
Generation 10550: Loss = 0.20168
Generation 10600: Loss = 0.18150
Generation 10650: Loss = 0.12649
Generation 10700: Loss = 0.21024
Generation 10750: Loss = 0.13210
Generation 10800: Loss = 0.21463
Generation 10850: Loss = 0.19228
Generation 10900: Loss = 0.20855
Generation 10950: Loss = 0.09159
Generation 11000: Loss = 0.19000
 --- Test Accuracy = 71.09%.
Generation 11050: Loss = 0.16792
Generation 11100: Loss = 0.18264
Generation 11150: Loss = 0.20756
Generation 11200: Loss = 0.23574
Generation 11250: Loss = 0.25095
Generation 11300: Loss = 0.19270
Generation 11350: Loss = 0.19303
Generation 11400: Loss = 0.16534
Generation 11450: Loss = 0.29888
Generation 11500: Loss = 0.17793
 --- Test Accuracy = 78.91%.
Generation 11550: Loss = 0.15598
Generation 11600: Loss = 0.12160
Generation 11650: Loss = 0.26322
Generation 11700: Loss = 0.10899
Generation 11750: Loss = 0.11561
Generation 11800: Loss = 0.16404
Generation 11850: Loss = 0.18666
Generation 11900: Loss = 0.15152
Generation 11950: Loss = 0.22033
Generation 12000: Loss = 0.17022
 --- Test Accuracy = 77.34%.
Generation 12050: Loss = 0.06982
Generation 12100: Loss = 0.11614
Generation 12150: Loss = 0.22383
Generation 12200: Loss = 0.14770
Generation 12250: Loss = 0.12691
Generation 12300: Loss = 0.13115
Generation 12350: Loss = 0.15366
Generation 12400: Loss = 0.10993
Generation 12450: Loss = 0.12453
Generation 12500: Loss = 0.11822
 --- Test Accuracy = 68.75%.
Generation 12550: Loss = 0.08440
Generation 12600: Loss = 0.10500
Generation 12650: Loss = 0.09079
Generation 12700: Loss = 0.17050
Generation 12750: Loss = 0.16910
Generation 12800: Loss = 0.16500
Generation 12850: Loss = 0.10901
Generation 12900: Loss = 0.06830
Generation 12950: Loss = 0.06736
Generation 13000: Loss = 0.15788
 --- Test Accuracy = 75.00%.
Generation 13050: Loss = 0.13596
Generation 13100: Loss = 0.11368
Generation 13150: Loss = 0.15130
Generation 13200: Loss = 0.16115
Generation 13250: Loss = 0.08005
Generation 13300: Loss = 0.37412
Generation 13350: Loss = 0.08087
Generation 13400: Loss = 0.05354
Generation 13450: Loss = 0.14977
Generation 13500: Loss = 0.06454
 --- Test Accuracy = 76.56%.
Generation 13550: Loss = 0.10611
Generation 13600: Loss = 0.14358
Generation 13650: Loss = 0.30438
Generation 13700: Loss = 0.12326
Generation 13750: Loss = 0.12546
Generation 13800: Loss = 0.05507
Generation 13850: Loss = 0.10522
Generation 13900: Loss = 0.14672
Generation 13950: Loss = 0.08316
Generation 14000: Loss = 0.04716
 --- Test Accuracy = 81.25%.
Generation 14050: Loss = 0.05569
Generation 14100: Loss = 0.07380
Generation 14150: Loss = 0.08503
Generation 14200: Loss = 0.09180
Generation 14250: Loss = 0.12643
Generation 14300: Loss = 0.28318
Generation 14350: Loss = 0.10349
Generation 14400: Loss = 0.08242
Generation 14450: Loss = 0.14991
Generation 14500: Loss = 0.09186
 --- Test Accuracy = 68.75%.
Generation 14550: Loss = 0.05001
Generation 14600: Loss = 0.09397
Generation 14650: Loss = 0.15350
Generation 14700: Loss = 0.05397
Generation 14750: Loss = 0.07973
Generation 14800: Loss = 0.04911
Generation 14850: Loss = 0.14983
Generation 14900: Loss = 0.04087
Generation 14950: Loss = 0.09777
Generation 15000: Loss = 0.06766
 --- Test Accuracy = 79.69%.
Generation 15050: Loss = 0.08833
Generation 15100: Loss = 0.05564
Generation 15150: Loss = 0.14910
Generation 15200: Loss = 0.09593
Generation 15250: Loss = 0.02290
Generation 15300: Loss = 0.09136
Generation 15350: Loss = 0.05094
Generation 15400: Loss = 0.06671
Generation 15450: Loss = 0.06932
Generation 15500: Loss = 0.05988
 --- Test Accuracy = 71.09%.
Generation 15550: Loss = 0.03620
Generation 15600: Loss = 0.11110
Generation 15650: Loss = 0.09207
Generation 15700: Loss = 0.12269
Generation 15750: Loss = 0.07687
Generation 15800: Loss = 0.13080
Generation 15850: Loss = 0.06997
Generation 15900: Loss = 0.08472
Generation 15950: Loss = 0.09175
Generation 16000: Loss = 0.10958
 --- Test Accuracy = 74.22%.
Generation 16050: Loss = 0.06961
Generation 16100: Loss = 0.04545
Generation 16150: Loss = 0.08067
Generation 16200: Loss = 0.09693
Generation 16250: Loss = 0.10157
Generation 16300: Loss = 0.12212
Generation 16350: Loss = 0.09758
Generation 16400: Loss = 0.11395
Generation 16450: Loss = 0.06691
Generation 16500: Loss = 0.07225
 --- Test Accuracy = 78.12%.
Generation 16550: Loss = 0.06004
Generation 16600: Loss = 0.06961
Generation 16650: Loss = 0.18485
Generation 16700: Loss = 0.08379
Generation 16750: Loss = 0.06583
Generation 16800: Loss = 0.07328
Generation 16850: Loss = 0.06474
Generation 16900: Loss = 0.09860
Generation 16950: Loss = 0.05844
Generation 17000: Loss = 0.05786
 --- Test Accuracy = 72.66%.
Generation 17050: Loss = 0.07321
Generation 17100: Loss = 0.12127
Generation 17150: Loss = 0.05446
Generation 17200: Loss = 0.03569
Generation 17250: Loss = 0.05022
Generation 17300: Loss = 0.11921
Generation 17350: Loss = 0.04017
Generation 17400: Loss = 0.04406
Generation 17450: Loss = 0.06000
Generation 17500: Loss = 0.11646
 --- Test Accuracy = 71.88%.
Generation 17550: Loss = 0.14290
Generation 17600: Loss = 0.04812
Generation 17650: Loss = 0.06327
Generation 17700: Loss = 0.13356
Generation 17750: Loss = 0.09151
Generation 17800: Loss = 0.13200
Generation 17850: Loss = 0.16656
Generation 17900: Loss = 0.05319
Generation 17950: Loss = 0.07239
Generation 18000: Loss = 0.07355
 --- Test Accuracy = 75.78%.
Generation 18050: Loss = 0.09703
Generation 18100: Loss = 0.02250
Generation 18150: Loss = 0.02004
Generation 18200: Loss = 0.12326
Generation 18250: Loss = 0.15379
Generation 18300: Loss = 0.03012
Generation 18350: Loss = 0.05909
Generation 18400: Loss = 0.05413
Generation 18450: Loss = 0.06055
Generation 18500: Loss = 0.04617
 --- Test Accuracy = 75.00%.
Generation 18550: Loss = 0.11713
Generation 18600: Loss = 0.10103
Generation 18650: Loss = 0.08530
Generation 18700: Loss = 0.07523
Generation 18750: Loss = 0.06605
Generation 18800: Loss = 0.04995
Generation 18850: Loss = 0.04141
Generation 18900: Loss = 0.04708
Generation 18950: Loss = 0.04045
Generation 19000: Loss = 0.03030
 --- Test Accuracy = 78.91%.
Generation 19050: Loss = 0.02257
Generation 19100: Loss = 0.01894
Generation 19150: Loss = 0.06192
Generation 19200: Loss = 0.15686
Generation 19250: Loss = 0.03990
Generation 19300: Loss = 0.07178
Generation 19350: Loss = 0.07857
Generation 19400: Loss = 0.06567
Generation 19450: Loss = 0.04735
Generation 19500: Loss = 0.12532
 --- Test Accuracy = 67.19%.
Generation 19550: Loss = 0.02739
Generation 19600: Loss = 0.04494
Generation 19650: Loss = 0.16667
Generation 19700: Loss = 0.08560
Generation 19750: Loss = 0.11396
Generation 19800: Loss = 0.08600
Generation 19850: Loss = 0.04694
Generation 19900: Loss = 0.06692
Generation 19950: Loss = 0.03973
Generation 20000: Loss = 0.01563
 --- Test Accuracy = 70.31%.
(tensorflow-py3.5) 
# student @ user-ubuntu in ~/work/jeff/cnn_cifar [4:24:35] 
Complete log

 

posted @ 2017-07-20 21:10  郝壹贰叁