The impact of laziness on parallelism and the limits of strictness analysis

The major question examined by this paper is whether sufficient fine-grain parallelism can be obtained from programs written in a lazy functional language. To answer this question, we have implemented a prototype compiler based on a novel approach to strictness analysis (called abstract demand propagation) and we have compared this implementation strategy (optimized lazy) with other implementations, viz., pure lazy and lenient.

Although the optimized lazy implementation significantly improves the efficiency of the resulting programs over the pure lazy approach, we found that little parallelism can effectively be identified. This remains true even when a new notion of laziness, speculative laziness, is introduced: a notion well suited to parallel machines, as it is based on a parallel notion of head-strictness instead of the traditional sequential one.
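To make the idea concrete, here is a small illustrative Haskell sketch (our own, not the paper's compiler output) of what a head-strict context licenses. The names `sumList` and `forceElems` are hypothetical: a consumer like `sumList` demands every element it reaches, so forcing elements eagerly while the list is built, as `forceElems` does with `seq`, is safe in that context, and a parallel machine could perform those forcings speculatively; under general lazy semantics the same transformation would be unsafe.

```haskell
-- Illustrative sketch only, not the paper's implementation.
-- A head-strict consumer: it demands every element it reaches.
sumList :: [Int] -> Int
sumList = go 0
  where
    go acc []       = acc
    go acc (x : xs) = let acc' = acc + x in acc' `seq` go acc' xs

-- Forcing elements eagerly while building the spine: safe for a
-- head-strict consumer, unsafe in general under lazy semantics
-- (an element might diverge yet never be demanded).
forceElems :: [Int] -> [Int]
forceElems []       = []
forceElems (x : xs) = x `seq` (x : forceElems xs)

main :: IO ()
main = print (sumList (forceElems (map (* 2) [1 .. 100])))
```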

Our experiments also showed that when a program's result is known to be finite, strictness analysis can generate almost as much parallelism as can be obtained from a lenient (i.e., non-strict but non-lazy) implementation. This means that strictness analysis per se is not sufficient and should be combined with some form of termination analysis, and that there is little hope of extracting much parallelism from programs that really require laziness.
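The kind of program that "really requires laziness" can be sketched as follows (our own illustration; the names `nats` and `firstSquares` are hypothetical). Because `nats` is infinite, no strictness analysis can justify evaluating it eagerly: only the finite prefix demanded by `take` may ever be computed, which leaves no elements available for safe parallel evaluation.

```haskell
-- Illustrative sketch: a program whose meaning depends on laziness.
-- The producer is an infinite list; eager evaluation would diverge.
nats :: [Integer]
nats = [0 ..]

-- Only the n elements demanded by `take` are ever forced.
firstSquares :: Int -> [Integer]
firstSquares n = take n (map (^ 2) nats)

main :: IO ()
main = print (firstSquares 4)  -- forces exactly four elements
```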

A PostScript version of the paper is available.