[Repost] tf.train.slice_input_producer() and tf.train.batch()

Original article: https://www.jianshu.com/p/8ba9cfc738c2

------------------------------------------------------------------------------------------------

1. The tf.train.slice_input_producer() function: a queue-based way to feed model input data.

tf.train.slice_input_producer(
    tensor_list,
    num_epochs=None, 
    shuffle=True,
    seed=None,
    capacity=32,
    shared_name=None,
    name=None
)

Its arguments:

Args:
tensor_list: A list of Tensor objects. Every Tensor in tensor_list must have the same size in the first dimension.
num_epochs: An integer (optional). The number of passes the queue makes over the input. If specified, slice_input_producer produces each slice num_epochs times before generating an OutOfRange error. If not specified, slice_input_producer can cycle through the slices an unlimited number of times.
shuffle: Boolean. If true, the integers are randomly shuffled within each epoch.
seed: An integer (optional). Seed used if shuffle == True.
capacity: An integer. Sets the queue capacity.
shared_name: (Optional). If set, this queue will be shared under the given name across multiple sessions.
name: A name for the operations (optional).

Example code:

    # Build the lists of input and target image file paths
    input_files = [os.path.join(dirname, 'input', f) for f in flist]
    output_files = [os.path.join(dirname, 'output', f) for f in flist]

    # The lists are converted internally to constant string tensors whose
    # slices are placed in a queue
    input_queue, output_queue = tf.train.slice_input_producer(
        [input_files, output_files], shuffle=self.shuffle,
        seed=123, num_epochs=self.num_epochs)

    # tf.train.slice_input_producer() yields one (input, target) pair at a
    # time; each path tensor is handed to the ReadFile op
    input_file = tf.read_file(input_queue)
    output_file = tf.read_file(output_queue)

    # Decode into RGB image tensors
    im_input = tf.image.decode_jpeg(input_file, channels=3)
    im_output = tf.image.decode_jpeg(output_file, channels=3)
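
The snippet above only builds the graph; the file-name queue is actually filled by queue-runner threads at run time. The following is a self-contained sketch (not part of the original post, with made-up file lists) of how such a pipeline is typically driven in a TF 1.x session. Note that num_epochs is tracked in a local variable, so local variables have to be initialized as well.

import tensorflow as tf

# Hypothetical file lists standing in for input_files / output_files above.
input_files = ['input/a.jpg', 'input/b.jpg']
output_files = ['output/a.jpg', 'output/b.jpg']

# One (input path, output path) pair of scalar string tensors per dequeue.
input_path, output_path = tf.train.slice_input_producer(
    [input_files, output_files], shuffle=True, seed=123, num_epochs=2)

im_input = tf.image.decode_jpeg(tf.read_file(input_path), channels=3)
im_output = tf.image.decode_jpeg(tf.read_file(output_path), channels=3)

with tf.Session() as sess:
    # num_epochs is counted in a local variable, so initialize locals too.
    sess.run([tf.global_variables_initializer(),
              tf.local_variables_initializer()])
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            inp, out = sess.run([im_input, im_output])
    except tf.errors.OutOfRangeError:
        pass  # raised once num_epochs passes over the lists are exhausted
    finally:
        coord.request_stop()
        coord.join(threads)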

2. The tf.train.batch() function

tf.train.batch(
    tensors,
    batch_size,
    num_threads=1,
    capacity=32,
    enqueue_many=False,
    shapes=None,
    dynamic_pad=False,
    allow_smaller_final_batch=False,
    shared_name=None,
    name=None
)

Its arguments:

Args:
tensors: The list or dictionary of tensors to enqueue.
batch_size: The new batch size pulled from the queue.
num_threads: The number of threads enqueuing tensors. The batching will be nondeterministic if num_threads > 1.
capacity: An integer. The maximum number of elements in the queue.
enqueue_many: Whether each tensor in tensors is a single example (i.e. whether the tensors passed in already carry a batch dimension).
shapes: (Optional) The shapes for each example. Defaults to the inferred shapes for tensors.
dynamic_pad: Boolean. Allow variable dimensions in input shapes. The given dimensions are padded upon dequeue so that tensors within a batch have the same shapes (see the sketch below).
allow_smaller_final_batch: (Optional) Boolean. If True, allow the final batch to be smaller if there are insufficient items left in the queue.
shared_name: (Optional). If set, this queue will be shared under the given name across multiple sessions.
name: (Optional) A name for the operations.
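
As an aside on dynamic_pad (not part of the original post): the small sketch below, using artificial random-length sequences, shows how dynamic_pad=True lets tf.train.batch accept tensors whose shapes are not fully defined and zero-pads them within each batch.

import tensorflow as tf

# Each dequeued element is a 1-D int32 sequence of random length 1..5,
# so its static shape is [None]; plain tf.train.batch would reject it.
length = tf.random_uniform([], minval=1, maxval=6, dtype=tf.int32)
sequence = tf.range(length)

# dynamic_pad=True uses a padding queue: within every batch, shorter
# sequences are zero-padded to the length of the longest one.
seq_batch, len_batch = tf.train.batch(
    [sequence, length], batch_size=4, dynamic_pad=True)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    print(sess.run([seq_batch, len_batch]))  # seq_batch has shape (4, max_len)
    coord.request_stop()
    coord.join(threads)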

Example code:

# Group the per-example tensors produced upstream into mini-batches
samples = tf.train.batch(
        sample,
        batch_size=self.batch_size,
        num_threads=self.nthreads,
        capacity=self.capacity)
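
Here sample is the list of per-example tensors produced upstream (for instance the decoded images from slice_input_producer in section 1). Below is a rough end-to-end sketch of how the two functions fit together; it is not the original author's code, and the file lists and the fixed 256x256 resize are assumptions (tf.train.batch needs statically known shapes unless shapes= or dynamic_pad is given, and decode_jpeg alone leaves the height and width unknown).

import tensorflow as tf

# Hypothetical paths and sizes, for illustration only.
input_files = ['input/a.jpg', 'input/b.jpg', 'input/c.jpg']
output_files = ['output/a.jpg', 'output/b.jpg', 'output/c.jpg']

input_path, output_path = tf.train.slice_input_producer(
    [input_files, output_files], shuffle=True, num_epochs=10)

im_input = tf.image.decode_jpeg(tf.read_file(input_path), channels=3)
im_output = tf.image.decode_jpeg(tf.read_file(output_path), channels=3)

# Resize to a fixed size so that every example has a fully defined shape.
im_input = tf.image.resize_images(im_input, [256, 256])
im_output = tf.image.resize_images(im_output, [256, 256])

# A QueueRunner with 4 threads fills the batch queue in the background;
# each dequeue returns tensors of shape [batch_size, 256, 256, 3].
batch_input, batch_output = tf.train.batch(
    [im_input, im_output], batch_size=2, num_threads=4, capacity=32)

The resulting batch_input / batch_output tensors are then consumed inside a session with tf.train.start_queue_runners(), exactly as in the session sketch in section 1.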
