Function application versus function composition performance

Hi All,

My last question got me thinking (always dangerous): an expression that doubles a list and takes the 2nd element (index 1), I assume, goes through the following steps:

    double (double [1,2,3,4]) !! 1
    double (1*2 : double [2,3,4]) !! 1
    (1*2*2 : double (double [2,3,4])) !! 1
    (1*2*2 : double (2*2 : double [3,4])) !! 1
    (1*2*2 : 2*2*2 : double (double [3,4])) !! 1
    2*2*2
    8

Up until the element requested, all the preceding elements have to have their expressions formed as Haskell processes the calculation.

Now, I want to compare this to using function composition:

    (double . double) [1,2,3,4] !! 1

This is the bit I'm unsure of: what does the composition look like? It is easy to see that it could be simplified to something like:

    (double . double) (x:xs) = x*4 : (double . double) xs

This would mean the steps could be written:

    (double . double) [1,2,3,4] !! 1
    (1*4 : (double . double) [2,3,4]) !! 1
    (1*4 : 2*4 : (double . double) [3,4]) !! 1
    2*4
    8

Ignoring the start and end steps, that is half the number of steps compared to the application version. Significant, then, over long lists.

So is this true: are composite functions simplified in Haskell in general, so that they may have improved performance over function application?

I've seen posts on the internet that say it's a matter of style only:

http://stackoverflow.com/questions/3030675/haskell-function-composition-and-...

But my reasoning suggests it could be more than that. Perhaps function application is similarly optimised, maybe by replacing all function applications with composition and then simplifying? Or maybe the simplifying/optimisation step never happens? As you can see, I'm just guessing at things :-) But it's nice to wonder.

Many thanks for any pointers,

Matt.
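For concreteness, here is a minimal runnable sketch of the double that the traces above assume (the recursive equations are implied by the steps shown; the module wrapper and main are illustrative additions):

    module Main where

    -- The definition the traces assume: multiply every element by 2.
    double :: [Int] -> [Int]
    double []     = []
    double (x:xs) = x * 2 : double xs

    main :: IO ()
    main = do
      -- Nested application and composition compute the same result:
      print (double (double [1,2,3,4]) !! 1)    -- 8
      print ((double . double) [1,2,3,4] !! 1)  -- 8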

A good functional programming language has a good "code algebra": after parsing, algebraic transformations can be applied to the code for optimization, for example to reduce the need for generating intermediate data structures. See: fusion.
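As a concrete sketch of such a transformation: GHC lets programmers state algebraic equations as RULES pragmas, which the simplifier applies when optimisation is on (-O). The rule below, collapsing the two traversals from the original message into one, is illustrative; the equation and its name are assumptions by analogy, not something GHC derives automatically for user-defined functions:

    module Double where

    double :: [Int] -> [Int]
    double []     = []
    double (x:xs) = x * 2 : double xs
    {-# NOINLINE double #-}  -- keep calls visible so the rule can fire

    -- Illustrative rewrite rule: two passes become one.  It is the
    -- programmer's obligation that both sides really are equal.
    {-# RULES
    "double/double" forall xs. double (double xs) = map (* 4) xs
      #-}

GHC's base library uses the same mechanism; for example, its foldr/build fusion rules let map f (map g xs) collapse into a single traversal.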
On Tue, Jun 19, 2012 at 3:01 PM, Matt Ford wrote:

> So is this true, are composite functions simplified in Haskell in
> general so that they may have improved performance over function
> application?
--
Regards,
KC

Matt Ford wrote:

> So is this true, are composite functions simplified in Haskell in
> general so that they may have improved performance over function
> application?
You may be forgetting that Haskell is lazily evaluated. If you have seen the output of compilers for strict languages like C, you may find yourself somewhere between surprised and shocked by what a Haskell compiler produces. The result is not a set of procedures but a graph, and the fewer the nodes, the higher the performance.

To answer your question: a completely naive compiler will produce the most efficient code for function application. That's three nodes: one for the application, one for the function, one for the argument. In that sense function application is an "atom". In the case of applying two functions the result will be five nodes.

The function composition operator, on the other hand, is compiled to what is called a supercombinator. Still assuming the entirely naive compiler, the result will be a graph of seven nodes: first you apply the supercombinator, which is five nodes already, and the result of that is then applied to the argument, which makes seven. In your case a function is composed with itself, so in either case sharing applies, eliminating one node.

Of course this is not what actually happens when you use a compiler like GHC. As KC noted, the compiler is free to perform transformations that don't change the semantics of the program (one of the nice features of pure languages). The produced code is the same in either case. If you want to see what is actually produced, have a look at the Core output (see the ghc-core package).

Bottom line: it doesn't matter whether you use nested application or composition.

Greets,
Ertugrul

--
Not to be or to be and (not to be or to be and (not to be or to be and
(not to be or to be and ... that is the list monad.
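The Core output Ertugrul mentions is easy to inspect: -ddump-simpl dumps GHC's simplified Core and -dsuppress-all trims the noise. A sketch, assuming GHC; the module below and the expectation that both definitions come out the same are illustrative, and the exact output varies by GHC version:

    -- Compile with: ghc -O2 -ddump-simpl -dsuppress-all AppVsComp.hs
    module AppVsComp where

    double :: [Int] -> [Int]
    double = map (* 2)

    -- Nested application ...
    viaApplication :: [Int] -> Int
    viaApplication xs = double (double xs) !! 1

    -- ... versus composition: at -O2 these typically simplify to
    -- the same Core.
    viaComposition :: [Int] -> Int
    viaComposition xs = (double . double) xs !! 1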
participants (3):
- Ertugrul Söylemez
- KC
- Matt Ford