Thanks for the replies, everyone. :) I had not heard of Rice's theorem, but it's not too surprising.

A couple thoughts I had:

1. In general, determining whether a function is strict is undecidable. But a conservative approximation can certainly be guaranteed to terminate (a toy sketch of such a check follows this list). And what is stopping us from having our conservative analysis continue at runtime, as terms are evaluated by the running program? I think I could formulate some notion of the running program incorporating all available information from already-evaluated terms into its analysis. This could be considered the best possible analysis that does not affect the termination properties of the program, since it changes neither the evaluation order nor which terms get evaluated. Of course, as Claus says, there are trade-offs here: a more accurate analysis might not be efficient enough, time-wise, to be useful. But as I said, we already have a "budget" for doing some analysis at runtime, since unnecessary allocation of thunks also incurs a runtime cost.

2. Having to rewrite higher-order functions so they can be inlined and specialized isn't really a general solution, IMO. It's fragile, can't be counted on to apply in every relevant case, and places an additional conceptual burden on the programmer, who has to unpack the idea explicitly and reason about it case by case. I'd rather write the most natural, idiomatic code and have the compiler/runtime be a bit smarter! Of course, if you are just looking to eke out performance from the tools you have, ad hoc solutions like this are fine. But I am asking whether we could do better!
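To make the "conservative approximation" in point 1 concrete, here is a toy sketch over a made-up expression language (the Expr type and the demands function are invented for illustration and have nothing to do with GHC's real demand analysis). The point is only that an analysis which answers "not strict" whenever it is unsure is still sound and always terminates, by structural recursion:

-- Toy expression language, purely for illustration.
data Expr
  = Var String          -- variable reference
  | Lit Int             -- integer literal
  | Add Expr Expr       -- addition forces both operands
  | If Expr Expr Expr   -- forces the condition, then one branch
  | Let String Expr Expr

-- 'demands e x' asks: does evaluating e to WHNF definitely force x?
-- Conservative: it may answer False for a variable that is in fact
-- always forced, but never the other way around.
demands :: Expr -> String -> Bool
demands (Var y)    x = x == y
demands (Lit _)    _ = False
demands (Add a b)  x = demands a x || demands b x
demands (If c t e) x = demands c x || (demands t x && demands e x)
demands (Let y rhs body) x
  | x == y    = demands body y && demands rhs x   -- outer x reachable only via rhs
  | otherwise = demands body x || (demands body y && demands rhs x)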

So right now it sounds like I should explore this problem a bit more and see what I find out, but tread carefully...

Best,
Paul

On Mon, Jun 15, 2009 at 1:39 PM, Claus Reinke <claus.reinke@talk21.com> wrote:
I was recently trying to figure out if there was a way, at runtime, to do
better strictness analysis for polymorphic HOFs, for which the strictness of
some arguments might depend on the strictness of the functions that are
passed as arguments [1]. As an example, consider foldl. The
'seed' parameter of foldl can be made strict as long as the binary function
used for the fold is strict in its arguments. Unfortunately, because
strictness analysis is done statically, Haskell can't assume anything about
the strictness of the binary function - assuming we only compile one
instance of foldl, it must be the most conservative version possible, and
that means making the seed parameter lazy. :-(
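To see the problem concretely: with a lazy seed (and no specialisation at the call site), foldl builds a chain of suspended applications, while a version that forces the seed at every step (foldl' from Data.List, which is what a strictness-aware foldl could do automatically whenever the binary function is known to be strict) runs in constant space. The million-element list is just an arbitrary illustration:

import Data.List (foldl')

-- Left lazy, the accumulator becomes a chain of (+) thunks about a
-- million deep before anything is actually added:
lazySum :: Integer
lazySum = foldl (+) 0 [1..1000000]

-- Forcing the accumulator at each step keeps it evaluated throughout:
strictSum :: Integer
strictSum = foldl' (+) 0 [1..1000000]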

As has been pointed out, strictness analysis is not decidable. That doesn't
mean it can't be approximated - running the code and seeing what it does gives
a naive semi-decision procedure. So strictness analysis can be made more
precise by using more and more runtime information; unfortunately it also
becomes less and less useful as a static analysis in the process (in practice,
not just termination is important, but also efficiency of the analyses, so an
analysis would tend to become unusable before it became possibly non-
terminating - there are trade-offs between precision and efficiency).
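A minimal sketch of that naive semi-decision procedure (checkStrict is a made-up name, not a library function): apply the function to bottom and run it. An exception means the argument was demanded, a normal result means it was not, and divergence means no answer ever comes back - which is exactly why this is only a semi-decision procedure:

import Control.Exception (SomeException, evaluate, try)

-- A function is strict in its argument iff applying it to bottom yields
-- bottom.  Evaluating f undefined either raises an exception (the
-- argument was demanded), returns a value (it was not), or loops forever
-- (no answer).
checkStrict :: (Int -> Int) -> IO Bool
checkStrict f = do
  r <- try (evaluate (f undefined)) :: IO (Either SomeException Int)
  return (either (const True) (const False) r)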

For typical higher-order functions, there's a simple workaround, already
employed in the libraries, namely to split the definition into a simple front
that may be inlined, and a recursive back where the parameter function is no longer a parameter. Then, after inlining the front at the call site, the
back can be compiled and analysed with knowledge of the parameter
function. See the comment above foldl:

http://www.haskell.org/ghc/docs/latest/html/libraries/base/src/GHC-List.html#foldl
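The shape of that definition is roughly the following (a paraphrase, not the exact library source): a small, non-recursive wrapper that GHC can inline at each call site, with the recursion in a local worker where the binary function is no longer a parameter of the loop, so the worker can then be compiled and analysed against the concrete function the caller supplies:

import Prelude hiding (foldl)

-- Non-recursive wrapper: small enough to inline at the call site.
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f z0 xs0 = lgo z0 xs0
  where
    -- Recursive worker: 'f' is fixed here, so once the wrapper has been
    -- inlined against a concrete, strict 'f', this loop can be
    -- strictness-analysed and the accumulator made strict.
    lgo z []     = z
    lgo z (x:xs) = lgo (f z x) xs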

Claus


