Task Decomposition: A Model for Parallel Programming
Any concurrent algorithm is going to turn out to be nothing more than a collection of concurrent tasks. These tasks may:
- be obvious independent function calls within the code;
- be loop iterations that can be executed in any order or simultaneously;
- be groups of sequential source lines that can be divided and grouped into independent computations.
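For example, a loop whose iterations touch disjoint data fits the second case. In the sketch below (the function and array names are hypothetical), no iteration depends on any other, so each iteration is a candidate task:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Each iteration reads a[i] and writes b[i] only, so there are no
// dependences between iterations: every iteration is a candidate task.
void transform(const std::vector<double>& a, std::vector<double>& b) {
    for (std::size_t i = 0; i < a.size(); ++i) {
        b[i] = std::sqrt(a[i]) * 2.0;  // independent of every other iteration
    }
}
```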
The common pattern: the main process or thread defines and prepares the tasks, hands them out to threads for execution, and waits for all the threads to finish.
The most basic framework for doing concurrent work is to have the main or the process thread define and prepare the tasks, launch the threads to execute their tasks, and then wait until all the spawned threads have completed.
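A minimal sketch of that framework, assuming C++ and std::thread, with a chunked array summation standing in for the task set (the data, chunking, and thread count are illustrative, not from the text):

```cpp
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t kNumThreads = 4;
    std::vector<double> data(1'000'000, 1.0);      // the work to be divided
    std::vector<double> partial(kNumThreads, 0.0); // one result slot per task
    std::vector<std::thread> workers;

    // 1. Define and prepare the tasks: each thread sums one contiguous chunk.
    const std::size_t chunk = data.size() / kNumThreads;
    for (std::size_t t = 0; t < kNumThreads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end =
            (t == kNumThreads - 1) ? data.size() : begin + chunk;
        // 2. Launch the threads to execute their tasks.
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }

    // 3. Wait until all the spawned threads have completed.
    for (auto& w : workers) w.join();

    std::cout << "sum = "
              << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
}
```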
There are many variations on this theme…
Questions to consider when decomposing tasks:
- Are threads created and terminated for each portion of parallel execution within the application?
- Could threads be put to “sleep” when the assigned tasks are finished and then “woken up” when new tasks are available?
- Rather than blocking after the concurrent computations have launched, why not have the main thread take part in executing the set of tasks? (This idea, combined with the sleep/wake idea above, is sketched after the list.)
- Implementing any of these is simply a matter of programming logic, but they still have the basic form of preparing tasks, getting threads to do tasks, and then making sure all tasks have been completed before going on to the next computation. Is there a case in which you don’t need to wait for the entire set of tasks to complete before going to the next phase of computation?
- You may also encounter situations in which new tasks will be generated dynamically as the computation proceeds.
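The sketch below combines two of the variations above, under the assumption of a shared task queue (the task bodies and counts are illustrative): worker threads sleep on a condition variable until new tasks arrive or a shutdown flag is set, and the main thread helps execute queued tasks instead of merely blocking.

```cpp
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Pop one task and run it outside the lock; the caller holds `lock`.
    auto run_one = [&](std::unique_lock<std::mutex>& lock) {
        auto task = std::move(tasks.front());
        tasks.pop();
        lock.unlock();
        task();
        lock.lock();
    };

    // Workers "sleep" on the condition variable and are "woken up"
    // when new tasks are available or when shutdown is signalled.
    auto worker = [&] {
        std::unique_lock<std::mutex> lock(m);
        for (;;) {
            cv.wait(lock, [&] { return done || !tasks.empty(); });
            if (tasks.empty()) return;  // woken only to shut down
            run_one(lock);
        }
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < 3; ++i) pool.emplace_back(worker);

    {   // The main thread defines and publishes the tasks.
        std::lock_guard<std::mutex> lock(m);
        for (int i = 0; i < 8; ++i)
            tasks.push([i] { std::cout << "task " << i << " done\n"; });
    }
    cv.notify_all();  // wake the sleeping workers

    {   // Instead of just waiting, the main thread also executes tasks.
        std::unique_lock<std::mutex> lock(m);
        while (!tasks.empty()) run_one(lock);
        done = true;  // no more tasks will be added; signal shutdown
    }
    cv.notify_all();  // release workers still blocked in wait()
    for (auto& t : pool) t.join();
}
```

Keeping the workers alive between batches avoids repeatedly creating and terminating threads; dynamically generated tasks would simply be pushed onto the same queue, followed by another notify.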