Python process pools 1 - An introduction to Pool

In some cases the work to be done can be broken up and distributed independently across multiple worker processes. For this simple situation, the Pool class can be used to manage a fixed number of workers. The return values of the jobs are collected and returned as a list. (The machine running the programs below has 2 CPUs; the functions used here are explained in "Python process pools 2 - Pool-related functions".)

import multiprocessing

def do_calculation(data):
    return data * 2

def start_process():
    # runs once in every worker process as it starts
    print 'Starting', multiprocessing.current_process().name

if __name__ == '__main__':
    inputs = list(range(10))
    print 'Inputs  :', inputs

    # serial version with the built-in map(), for comparison
    builtin_output = map(do_calculation, inputs)
    print 'Built-in :', builtin_output

    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(processes=pool_size,
                                initializer=start_process)

    pool_outputs = pool.map(do_calculation, inputs)
    pool.close()  # no more tasks will be submitted
    pool.join()   # wait for the worker processes to exit

    print 'Pool  :', pool_outputs

Output:

Inputs  : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Built-in : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
Starting PoolWorker-2
Starting PoolWorker-1
Starting PoolWorker-3
Starting PoolWorker-4
Pool  : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

 

By default, Pool creates a fixed number of worker processes and hands jobs to them until there are no more jobs left. The maxtasksperchild parameter is the maximum number of tasks each worker may execute; setting it tells the pool to restart a worker after it has completed that many tasks, which keeps long-lived workers from consuming more and more system resources.

maxtasksperchild is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool.

Worker processes within a Pool typically live for the complete duration of the Pool's work queue. A frequent pattern found in other systems (such as Apache, mod_wsgi, etc.) to free resources held by workers is to allow a worker within a pool to complete only a set amount of work before exiting, being cleaned up, and a new process being spawned to replace the old one. The maxtasksperchild argument to the Pool exposes this ability to the end user.

 

Notice

python 2.6.6

multiprocessing.Pool has no maxtasksperchild parameter: Pool(processes=None, initializer=None, initargs=())

 

python 2.7.3 

Pool(processes=None, initializer=None, initargs=(), maxtasksperchild=None)
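
If the same script has to work on both of these versions, one option is to pass maxtasksperchild only when it is available, for example by checking sys.version_info. A minimal sketch of that idea:

import sys
import multiprocessing

def do_calculation(data):
    return data * 2

if __name__ == '__main__':
    kwargs = {'processes': 4}
    if sys.version_info >= (2, 7):
        # maxtasksperchild was only added in Python 2.7
        kwargs['maxtasksperchild'] = 2
    pool = multiprocessing.Pool(**kwargs)
    print pool.map(do_calculation, range(10))
    pool.close()
    pool.join()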

 

import multiprocessing

def do_calculation(data):
    return data * 2

def start_process():
    print 'Starting', multiprocessing.current_process().name

if __name__ == '__main__':
    inputs = list(range(10))
    print 'Inputs  :', inputs

    builtin_output = map(do_calculation, inputs)
    print 'Built-in :', builtin_output

    pool_size = multiprocessing.cpu_count() * 2
    pool = multiprocessing.Pool(processes=pool_size,
                                initializer=start_process,
                                maxtasksperchild=2)  # replace each worker after 2 tasks

    pool_outputs = pool.map(do_calculation, inputs)
    pool.close()
    pool.join()

    print 'Pool  :', pool_outputs

Output:

Inputs  : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Built-in : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
Starting PoolWorker-1
Starting PoolWorker-2
Starting PoolWorker-3
Starting PoolWorker-4
Starting PoolWorker-5
Starting PoolWorker-6
Starting PoolWorker-7
Starting PoolWorker-8
Pool  : [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

When the pool finishes the tasks assigned to it, it restarts worker processes even if there is no more work to do. You can see from this output that although there are only 10 tasks, and each worker is allowed to complete two tasks before being replaced, 8 worker processes were created here: the original pool of four plus their replacements.
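
To watch this recycling happen, the worker function can report which process handled each item, for example by printing os.getpid(). A small sketch along those lines, using chunksize=1 so that every input is its own task:

import os
import multiprocessing

def do_calculation(data):
    # report which worker process handled this item
    print 'pid %d handled %d' % (os.getpid(), data)
    return data * 2

if __name__ == '__main__':
    # chunksize=1 makes each input a separate task, so a worker
    # exits and is replaced after every two items
    pool = multiprocessing.Pool(processes=2, maxtasksperchild=2)
    results = pool.map(do_calculation, range(10), chunksize=1)
    pool.close()
    pool.join()
    print results

In the output, a given pid shows up for at most two items before a new pid takes over.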

 

More often than not, we not only need the work to run in multiple processes, we also need to look at the result of each individual call.

import multiprocessing
import time

def func(msg):
    for i in xrange(3):
        print msg
        time.sleep(1)
    return "done " + msg

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=4)
    result = []
    for i in xrange(10):
        msg = "hello %d" % (i)
        # apply_async returns an AsyncResult immediately, without blocking
        result.append(pool.apply_async(func, (msg,)))
    pool.close()
    pool.join()
    for res in result:
        print res.get()  # blocks until this task's return value is ready
    print "Sub-process(es) done."
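
One detail worth keeping in mind with this pattern: res.get() re-raises any exception that was raised inside the worker, and it also accepts a timeout. A standalone sketch of a more defensive collection loop (may_fail here is just an illustrative stand-in for a real task):

import multiprocessing

def may_fail(n):
    if n == 3:
        raise ValueError('bad input: %d' % n)
    return n * n

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2)
    tasks = [pool.apply_async(may_fail, (n,)) for n in range(5)]
    pool.close()
    pool.join()
    for t in tasks:
        try:
            print t.get(timeout=10)          # the worker's exception is re-raised here
        except multiprocessing.TimeoutError:
            print 'a task timed out'
        except ValueError, e:
            print 'a task failed:', e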

 

References:

《Python 标准库》 (The Python Standard Library by Example), 10.4.17 Process Pools (p445)

http://www.coder4.com/archives/3352

 

Original post: http://www.cnblogs.com/congbo/archive/2012/08/23/2652433.html
