Parallel implementation of lazy functional programming languages using abstract demand propagation

This thesis investigates the implementation of lazy functional programming languages on parallel machines using a novel approach to strictness analysis, called abstract demand propagation, and examines the impact of laziness on parallelism.

Although some work on strictness analysis using demand propagation has been done before, the present work is original in that it gives a precise interpretation of the notions of demand and demand propagation using a non-standard denotational semantics. The intuition behind the notion of (abstract) demand propagation is that it can be seen as a form of (abstract) inverse computation.
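
As an illustration of the idea, rather than the thesis's actual formulation, the following Haskell sketch propagates a simple needed-or-not demand backwards through a tiny expression language; the type Expr, the function demandOn and the example program are invented for this sketch.

    import qualified Data.Set as Set

    data Expr
      = Var String
      | Lit Int
      | Add Expr Expr        -- strict in both operands
      | If  Expr Expr Expr   -- strict in the condition only
      deriving Show

    -- Variables that are certainly evaluated when the value of the
    -- expression is demanded: the demand on the result is propagated
    -- backwards (an "inverse" direction of computation) to the variables.
    demandOn :: Expr -> Set.Set String
    demandOn (Var x)    = Set.singleton x
    demandOn (Lit _)    = Set.empty
    demandOn (Add a b)  = demandOn a `Set.union` demandOn b
    demandOn (If c t e) =
      -- the condition is always needed; a variable in a branch is certainly
      -- needed only if it is needed in both branches
      demandOn c `Set.union` (demandOn t `Set.intersection` demandOn e)

    main :: IO ()
    main = do
      -- body of  f x y z = if x then y + z else z
      let body = If (Var "x") (Add (Var "y") (Var "z")) (Var "z")
      print (demandOn body)   -- fromList ["x","z"]: strict in x and z, not y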

This form of strictness analysis has been incorporated into a prototype compiler for a lazy variant of (a subset of) the Id functional programming language, targeting the TTDA dataflow machine; this makes it the first implementation of a lazy language on a dataflow architecture. The overall structure of the compiler is described, together with some of its novel features.
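
As a language-neutral illustration, assuming nothing about the compiler itself, the following Haskell sketch shows the usual way strictness information is exploited: an argument that the analysis shows is certainly demanded (here the accumulator of a loop) can be evaluated eagerly instead of being suspended as a thunk, and on a dataflow machine such eagerly evaluated arguments can be computed in parallel with the call. The function names are hypothetical.

    {-# LANGUAGE BangPatterns #-}

    -- Lazy version: the accumulator builds a chain of unevaluated thunks.
    sumToLazy :: Int -> Int -> Int
    sumToLazy acc n
      | n <= 0    = acc
      | otherwise = sumToLazy (acc + n) (n - 1)

    -- With strictness information: 'acc' is demanded whenever the result is,
    -- so it may safely be evaluated at each call instead of being suspended.
    sumToStrict :: Int -> Int -> Int
    sumToStrict !acc n
      | n <= 0    = acc
      | otherwise = sumToStrict (acc + n) (n - 1)

    main :: IO ()
    main = print (sumToStrict 0 1000000)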

The last part of the thesis examines whether sufficient parallelism can be obtained from programs written in a lazy functional language, by comparing various implementation strategies: purely lazy, optimized lazy, and lenient. Since little parallelism can effectively be obtained with the lazy strategies, the notion of bounded speculative laziness is introduced as a tentative way to improve the parallelism of the programs while preserving the key characteristic of laziness, i.e., the ability to manipulate potentially infinite data objects.
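
As a minimal sketch of the idea, assuming an illustrative helper named speculate rather than the thesis's actual mechanism, the following Haskell fragment forces only a bounded prefix of a lazy list ahead of its consumer: the forced prefix is the speculative work, which a parallel implementation could spark off, and the finite bound means a potentially infinite object is never demanded in full.

    -- 'nats' is an infinite, lazily produced list.
    nats :: [Integer]
    nats = [0 ..]

    -- Force at most 'bound' elements ahead of demand; the bound keeps
    -- the speculative work finite even on infinite data.
    speculate :: Int -> [a] -> [a]
    speculate bound xs = forcePrefix bound xs `seq` xs
      where
        forcePrefix 0 _      = ()
        forcePrefix _ []     = ()
        forcePrefix k (y:ys) = y `seq` forcePrefix (k - 1) ys

    main :: IO ()
    main = print (take 5 (speculate 100 nats))  -- prints [0,1,2,3,4]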
