How To Achieve Speedups with Small Parallel Loop Bodies

The Parallel class represents a significant advancement in parallelizing managed loops.  For many common scenarios, it just works, resulting in terrific speedups.  However, while ideally Parallel.For could be all things to all people, such things rarely work out, and we’ve had to prioritize certain scenarios over others.

 

One area Parallel.For may fall a bit short is in attempts to use it with very small loop bodies, such as:

int[] array = new int[100000000];
Parallel.For(0, array.Length, i =>
{
    array[i] = i * i * i;
});


Such an operation should be readily parallelizable; after all, every iteration is completely independent of every other iteration, and we’re dealing with a big data parallel problem.  The devil is in the details, however.  To handle typical scenarios where the time it takes to complete each iteration ends up being non-uniform, Parallel.For’s implementation takes a lot of care to load balance across all threads participating in the loop, and that load balancing comes at a small performance cost.  This cost is typically trumped by the performance benefits that come from doing the load balancing; however, when the body of the loop is as tiny as it is in the example above, even small overheads add up.  Another overhead that also contributes is the delegate invocation required to invoke the loop body.  It can be easy to forget when looking at a Parallel.For call that Parallel.For is really just a method, accepting as a parameter a delegate to be invoked for every iteration.  That invocation isn’t free, and in the above case may even be more expensive than the body of the loop itself.

Fear not, however: there are still ways to achieve good speedups in such cases.  One approach is to create larger chunks of work for Parallel to operate on: as the chunk size increases, the overhead costs pale in comparison to the useful work, and speedups are realized. 

Consider a new ForRange method you could implement:

public static ParallelLoopResult ForRange(
    int fromInclusive, int toExclusive, Action<int, int> body);

Unlike For, which invokes the body once per iteration, ForRange invokes the body once per range, passing the start and end of that range.  Thus, given an initial sequential loop like the following:

for (int i = 0; i < N; i++)
{
    DoWork(i);
}

With For it would be parallelized by replacing the for loop with a Parallel.For:

Parallel.For(0, N, i =>
{
    DoWork(i);
});

And with ForRange, it would be parallelized by wrapping the for loop with a ForRange:

ForRange(0, N, (from, to) =>
{
    for (int i = from; i < to; i++)
    {
        DoWork(i);
    }
});

There are several ways we can now implement ForRange.  The first is simply by doing a little math.  We can calculate the boundaries of each range and use a Parallel.For to run the user-supplied body action for each range, e.g.

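Here's a minimal sketch of what that math-based implementation might look like.  The one-chunk-per-processor policy and the wrapper class name are illustrative assumptions, not the only reasonable choices:

using System;
using System.Threading.Tasks;

public static class ParallelRangeExtras
{
    // Sketch: split [fromInclusive, toExclusive) into one contiguous chunk per
    // processor and invoke the user-supplied body once per non-empty chunk.
    public static ParallelLoopResult ForRange(
        int fromInclusive, int toExclusive, Action<int, int> body)
    {
        int rangeSize = toExclusive - fromInclusive;
        int numChunks = Environment.ProcessorCount;
        int chunkSize = (rangeSize + numChunks - 1) / numChunks; // ceiling division

        return Parallel.For(0, numChunks, chunk =>
        {
            int start = fromInclusive + chunk * chunkSize;
            int end = Math.Min(start + chunkSize, toExclusive);
            if (start < end) body(start, end); // skip empty trailing chunks
        });
    }
}

With something like this in place, the cubing loop from the beginning of the post can be written as ForRange(0, array.Length, (from, to) => { for (int i = from; i < to; i++) array[i] = i * i * i; }), paying the delegate invocation cost once per chunk rather than once per element.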

Another way is to use Parallel.ForEach under the covers.  Rather than doing the math up front as above, we can write an iterator in C# that yields the ranges, and then use Parallel.ForEach over those ranges, e.g.

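Again, this is just a sketch; the chunk-size heuristic and the use of Tuple<int, int> to represent a range are illustrative choices:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ParallelRangeExtrasForEach
{
    // Lazily yields the [start, end) boundaries of each chunk.
    private static IEnumerable<Tuple<int, int>> CreateRanges(
        int fromInclusive, int toExclusive, int chunkSize)
    {
        for (int start = fromInclusive; start < toExclusive; start += chunkSize)
        {
            yield return Tuple.Create(start, Math.Min(start + chunkSize, toExclusive));
        }
    }

    // Sketch: Parallel.ForEach consumes the yielded ranges and hands each one
    // to the user-supplied body.
    public static ParallelLoopResult ForRange(
        int fromInclusive, int toExclusive, Action<int, int> body)
    {
        int chunkSize = Math.Max(1,
            (toExclusive - fromInclusive) / Environment.ProcessorCount);

        return Parallel.ForEach(
            CreateRanges(fromInclusive, toExclusive, chunkSize),
            range => body(range.Item1, range.Item2));
    }
}

One trade-off to keep in mind: enumerating the ranges through Parallel.ForEach involves synchronized access to the iterator, so for a small, fixed number of chunks the math-based version above tends to carry slightly less overhead.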

