
Hello,

I was wondering how expensive appending something to a list really is. Say I write "longList ++ [5]". I'd say this stays unevaluated until I consume the whole list, and then the appending should go in O(1). Similarly when concatenating two lists.

Is that true, or am I missing something?

Adrian

Hi,

On Thursday, 29.05.2008, 19:04 +0200, Adrian Neumann wrote:
I was wondering how expensive appending something to a list really is. Say I write "longList ++ [5]". I'd say this stays unevaluated until I consume the whole list, and then the appending should go in O(1). Similarly when concatenating two lists.
Is that true, or am I missing something?
I'm no expert, but I'll give it a shot: The problem is that longList might be referenced somewhere else as well, so it has to be kept around, ending in [], not in [5]. But (longList ++ [5]) also might be referenced somewhere, so you also need to keep that list. Thus you have to copy the whole list structure for the appending (not the values, though). For comparison, with (5:longList) the new list can use the old list unmodified, so nothing has to be copied.

You can also observe this in the code for (++):

  (++) :: [a] -> [a] -> [a]
  (++) []     ys = ys
  (++) (x:xs) ys = x : xs ++ ys

where you can see that on the right-hand side a totally new list is constructed.

In some cases, e.g. when longList is only referenced there and nowhere else, one might hope that the compiler can optimize this problem away. There is some hope, as I see this in the code:

  {-# RULES
  "++" [~1] forall xs ys. xs ++ ys = augment (\c n -> foldr c n xs) ys
    #-}

Maybe some core-literate people can give more information on this?

Greetings,
Joachim

--
Joachim "nomeata" Breitner
mail: mail@joachim-breitner.de | ICQ# 74513189 | GPG-Key: 4743206C
JID: nomeata@joachim-breitner.de | http://www.joachim-breitner.de/
Debian Developer: nomeata@debian.org
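(An aside on that rule, hedged: if I read GHC.Base correctly, augment g xs is just g (:) xs, so the rule effectively re-expresses the append as a right fold over its left operand, which can then fuse with a good producer of xs. A small sketch of that reading — the function name here is made up:)

  -- Equivalent to (++) under the reading above: rebuild xs with (:),
  -- but end the spine with ys instead of [].
  appendViaFoldr :: [a] -> [a] -> [a]
  appendViaFoldr xs ys = foldr (:) ys xs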

Adrian Neumann wrote:
Hello,
I was wondering how expensive appending something to a list really is. Say I write "longList ++ [5]". I'd say this stays unevaluated until I consume the whole list, and then the appending should go in O(1). Similarly when concatenating two lists.
Is that true, or am I missing something?
I think that is true and you are missing something: you have to push the call to ++ through the whole longList while consuming it wholly, one element at a time! So when longList has n elements, you have (n+1) calls of ++, each returning after O(1) steps. The first n calls return a list with the ++ pushed down, and the last returns [5]. Summed together, that makes O(n) actual calls of ++ for the one written by the programmer.

Tillmann
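(To make "pushing the ++ through the list" concrete, here is a tiny reduction sketch, added purely for illustration; it follows the definition of (++) quoted earlier:)

  example :: [Int]
  example = [1, 2, 3] ++ [5]
  -- Reduction, step by step:
  --   [1,2,3] ++ [5]
  --   ~> 1 : ([2,3] ++ [5])
  --   ~> 1 : 2 : ([3] ++ [5])
  --   ~> 1 : 2 : 3 : ([] ++ [5])
  --   ~> 1 : 2 : 3 : [5]
  -- Four calls to (++) for a three-element left operand: n + 1 in general.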

On Thu, May 29, 2008 at 11:48 PM, Tillmann Rendel wrote:
Adrian Neumann wrote:
Hello,
I was wondering how expensive appending something to a list really is. Say I write "longList ++ [5]". I'd say this stays unevaluated until I consume the whole list, and then the appending should go in O(1). Similarly when concatenating two lists.
Is that true, or am I missing something?
I think that is true and you are missing something: you have to push the call to ++ through the whole longList while consuming it wholly, one element at a time! So when longList has n elements, you have (n+1) calls of ++, each returning after O(1) steps. The first n calls return a list with the ++ pushed down, and the last returns [5]. Summed together, that makes O(n) actual calls of ++ for the one written by the programmer.
Tillmann
In other words, if you look at the definition of ++ given in the Prelude, it generates a new list whose first (length longList) elements are the same as those of longList, followed by the second list. So when you are accessing elements of (longList ++ s), you are actually accessing the elements of this newly generated list, which are generated as and when you access them, so that by the time you reach the first element of s, you have generated (length longList) elements of the result of ++.

My $0.02 is to say

  -- O(1)
  longList ++ [5]

Yay. I've got a thunk. Oh wait, I need to access the '5'? No different than doing so for

  -- O(n)
  until ((==5) . head) [l,o,n,g,L,i,s,t,5]

It's not the (++) that's O(n). It's the list traversal.

I can further beat this pedantic point to death by pointing out there is no difference between

  longList ++ [5]

and

  longList ++ (repeat 5)

Getting to the first 5 is still O(n).

Cheers,
-ljr

Tillmann Rendel wrote:
Adrian Neumann wrote:
Hello,
I was wondering how expensive appending something to a list really is. Say I write "longList ++ [5]". I'd say this stays unevaluated until I consume the whole list, and then the appending should go in O(1). Similarly when concatenating two lists.
Is that true, or am I missing something?
I think that is true and you are missing something: you have to push the call to ++ through the whole longList while consuming it wholly, one element at a time! So when longList has n elements, you have (n+1) calls of ++, each returning after O(1) steps. The first n calls return a list with the ++ pushed down, and the last returns [5]. Summed together, that makes O(n) actual calls of ++ for the one written by the programmer.

Tillmann
--
Lanny Ripple

Lanny Ripple wrote:
My $0.02 is to say
  -- O(1)
  longList ++ [5]
Yay. I've got a thunk. Oh wait, I need to access the '5'? No different than doing so for
  -- O(n)
  until ((==5) . head) [l,o,n,g,L,i,s,t,5]
It's not the (++) that's O(n). It's the list traversal.
Let's look at the actual reductions going on. To make the example easier, I would like to use last instead of your complicated until. It shouldn't make a difference.

Let's look at the reduction of (last "longList") to whnf first:

  L) last "longList"
  L) ~> last "ongList"
  L) ~> last "ngList"
  L) ~> last "gList"
  L) ~> last "List"
  L) ~> last "ist"
  L) ~> last "st"
  L) ~> last "t"
     ~> 't'

The L prefix marks all lines which are reduced by calls to last. Clearly, we need n reduction steps here. Now, what about last ("longList" ++ "!")?

  A) last ("longList" ++ "!")
  L) ~> last ('l' : ("ongList" ++ "!"))
  A) ~> last ("ongList" ++ "!")
  L) ~> last ('o' : ("ngList" ++ "!"))
  A) ~> last ("ngList" ++ "!")
  L) ~> last ('n' : ("gList" ++ "!"))
  A) ~> last ("gList" ++ "!")
  L) ~> last ('g' : ("List" ++ "!"))
  A) ~> last ("List" ++ "!")
  L) ~> last ('L' : ("ist" ++ "!"))
  A) ~> last ("ist" ++ "!")
  L) ~> last ('i' : ("st" ++ "!"))
  A) ~> last ("st" ++ "!")
  L) ~> last ('s' : ("t" ++ "!"))
  A) ~> last ("t" ++ "!")
  L) ~> last ('t' : ("" ++ "!"))
  A) ~> last ("" ++ "!")
  L) ~> last "!"
     ~> '!'

Calls to ++ are marked with A (for append). Now we have to reduce a call to ++ every time before we can reduce a call to last, so we have

    n steps for calls of last as before
  + n steps for interleaved calls of ++
  + 1 step for the final call of ++
  + 1 step for the final call of last
  = 2n + 2 steps in total

The difference between (2n + 2) and (n) is (n + 2) and lies clearly in O(n). So, by the addition of ++ "!" to our program, we had to do O(n) reduction steps more. Since we had to do O(n) reduction steps anyway, this didn't show up in the overall complexity, but our program is only half as fast, instead of a constant amount slower, which seems to make a difference to me.

And other programs could suffer even more badly, since their complexity could go up, e.g., from O(n) to O(n^2). A simple example is this naive reverse function:

  reverse [] = []
  reverse (x:xs) = reverse xs ++ [x]

Let's see how (last (reverse "long")) is reduced to whnf. I will not even attempt "longList" ...

  R) last (reverse "long")
  R) ~> last (reverse "ong" ++ "l")
  R) ~> last ((reverse "ng" ++ "o") ++ "l")
  R) ~> last (((reverse "g" ++ "n") ++ "o") ++ "l")
  R) ~> last ((((reverse "" ++ "g") ++ "n") ++ "o") ++ "l")
  R) ~> last (((("" ++ "g") ++ "n") ++ "o") ++ "l")

At this point, we have reduced reverse in n steps to an expression containing n calls to ++. If ++ were O(1), we would need only O(n) additional steps to finish the job. But see what happens:

     last (((("" ++ "g") ++ "n") ++ "o") ++ "l")
  A) ~> last ((("g" ++ "n") ++ "o") ++ "l")

The first ++ was easy, only 1 reduction step.

     last ((("g" ++ "n") ++ "o") ++ "l")
  A) ~> last ((('g' : ("" ++ "n")) ++ "o") ++ "l")
  A) ~> last (('g' : (("" ++ "n") ++ "o")) ++ "l")
  A) ~> last ('g' : ((("" ++ "n") ++ "o") ++ "l"))
  L) ~> last ((("" ++ "n") ++ "o") ++ "l")
  A) ~> last (("n" ++ "o") ++ "l")

Oops, for the second ++, we needed n reduction steps to move the first char out of all these nested ++'s.

     last (("n" ++ "o") ++ "l")
  A) ~> last (('n' : ("" ++ "o")) ++ "l")
  A) ~> last ('n' : (("" ++ "o") ++ "l"))
  L) ~> last (("" ++ "o") ++ "l")
  A) ~> last ("o" ++ "l")

Another (n - 1) reduction steps for the second ++ to go away.

     last ("o" ++ "l")
  A) ~> last ('o' : ("" ++ "l"))
  L) ~> last ("" ++ "l")
  A) ~> last ("l")
  L) ~> 'l'

And the third and fourth ++ go away with (n - 2) and (n - 3) reduction steps. Counting together, we had to use

  n + (n - 1) + (n - 2) + ... = n!

reduction steps to get rid of the n calls to ++, which lies in O(n^2). That's what we expected of course, since we know that each of the ++ would need O(n) steps.
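(As an aside, added here purely for illustration and not part of the original mail: the usual fix for this naive reverse is to accumulate the result, which needs no (++) at all and runs in O(n).)

  -- A minimal sketch of reverse with an accumulator; no (++) involved.
  reverseAcc :: [a] -> [a]
  reverseAcc = go []
    where
      go acc []     = acc
      go acc (x:xs) = go (x:acc) xs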
I can further beat this pedantic point to death by pointing out there is no difference between
longList ++ [5]
and
longList ++ (repeat 5)
Getting to the first 5 is still O(n).
That's a different question. For the complexity of ++, the right-hand side operand is irrelevant. The n means the length of the left-hand side operand here.

Tillmann
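(A small illustration of that point, added for clarity and not from the original mail: since (++) never inspects its right operand, the right operand may even be infinite.)

  -- Evaluates to [1,2,5]; the cost of the (++) here is proportional to the
  -- length of [1,2] only. The infinite (repeat 5) is never copied or traversed
  -- by (++) itself.
  rhsDoesNotMatter :: [Int]
  rhsDoesNotMatter = take 3 ([1, 2] ++ repeat 5)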

Hi! (Cool, another guy from DAIMI on haskell-cafe!)

Tillmann Rendel writes:
Another (n - 1) reduction steps for the second ++ to go away.
     last ("o" ++ "l")
  A) ~> last ('o' : ("" ++ "l"))
  L) ~> last ("" ++ "l")
  A) ~> last ("l")
  L) ~> 'l'
And the third and fourth ++ go away with (n - 2) and (n - 3) reduction steps. Counting together, we had to use
n + (n - 1) + (n - 2) + ... = n!
reduction steps to get rid of the n calls to ++, which lies in O(n^2). That's what we expected of course, since we know that each of the ++ would need O(n) steps.
I really liked the excellent and very clear analysis! But the last formula should be:

  n + (n - 1) + (n - 2) + ... = n * (n + 1) / 2

which is still of order n^2.

--
Martin Geisler

VIFF (Virtual Ideal Functionality Framework) brings easy and efficient
SMPC (Secure Multi-Party Computation) to Python. See: http://viff.dk/.

I think I would like to make another note: when we talk about the complexity
of a function, we are talking about the time taken to completely evaluate
the result. Otherwise any expression in Haskell will be O(1), since it just
creates a thunk.
And then the user of the program is to be blamed for running the program,
since that is what caused evaluation of those thunks :)
Abhay
2008/5/31 Martin Geisler
Hi! (Cool, another guy from DAIMI on haskell-cafe!)

Tillmann Rendel writes:
Another (n - 1) reduction steps for the second ++ to go away.
     last ("o" ++ "l")
  A) ~> last ('o' : ("" ++ "l"))
  L) ~> last ("" ++ "l")
  A) ~> last ("l")
  L) ~> 'l'
And the third and fourth ++ go away with (n - 2) and (n - 3) reduction steps. Counting together, we had to use
n + (n - 1) + (n - 2) + ... = n!
reduction steps to get rid of the n calls to ++, which lies in O(n^2). That's what we expected of course, since we know that each of the ++ would need O(n) steps.
I really liked the excellent and very clear analysis! But the last formula should be:
n + (n - 1) + (n - 2) + ... = n * (n+1) / 2
which is still of order n^2.
-- Martin Geisler
VIFF (Virtual Ideal Functionality Framework) brings easy and efficient SMPC (Secure Multi-Party Computation) to Python. See: http://viff.dk/.

Abhay Parvate wrote:
I think I would like to make another note: when we talk about the complexity of a function, we are talking about the time taken to completely evaluate the result. Otherwise any expression in Haskell will be O(1), since it just creates a thunk.
I don't like this notion of complexity, since it seems not very suited for the analysis of composite expressions in Haskell. For example, repeat 42 has infinite complexity according to your definition (it doesn't even terminate if completely evaluated), but take 5 $ repeat 42 has only constant complexity even if fully evaluated. It is not clear how to reuse the finding about the complexity of (repeat 42) to determine the complexity of (take 5 $ repeat 42).

Instead, my view of complexity in lazy languages includes the interesting behaviour of the rest of the program as variables. For example, (repeat 42) needs O(n) steps to produce the first n elements of its output. Now, (take 5 x) restricts x to the first 5 elements, so (take 5 $ repeat 42) needs O(min(n, 5)) = O(1) steps to produce the first n elements of its output.

Is this intuitive view generalizable to arbitrary datatypes (instead of lists) and formalized somewhere?

Tillmann
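(A concrete toy instance of that style of reasoning, added here purely for illustration:)

  -- repeat 42 needs O(n) steps to produce its first n elements, but take 5
  -- can never demand more than 5 of them, so this is fully evaluated in a
  -- constant number of steps, no matter how much of it the caller demands.
  firstFive :: [Int]
  firstFive = take 5 (repeat 42)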

Tillmann Rendel wrote:
Abhay Parvate wrote:
I think I would like to make another note: when we talk about the complexity of a function, we are talking about the time taken to completely evaluate the result. Otherwise any expression in Haskell will be O(1), since it just creates a thunk.
I don't like this notion of complexity, since it seems not very suited for the analysis of composite expressions in Haskell.
Is this intuitive view generalizable to arbitrary datatypes (instead of lists) and formalized somewhere?
See also the thread section beginning with http://thread.gmane.org/gmane.comp.lang.haskell.cafe/34398/focus=34435

Regards,
apfelmus

I somehow thought that talking about the complexity of calculating individual
elements of an infinite list would be sufficient, but that seems to be more
involved, and my over-generalization doesn't seem to work. Thanks for the
link; in particular it has references to Wadler's papers on exactly this
problem.
Abhay
On Sun, Jun 1, 2008 at 1:07 PM, apfelmus wrote:
Tillmann Rendel wrote:
Abhay Parvate wrote:
I think I would like to make another note: when we talk about the complexity of a function, we are talking about the time taken to completely evaluate the result. Otherwise any expression in Haskell will be O(1), since it just creates a thunk.
I don't like this notion of complexity, since it seems not very suited for the analysis of composite expressions in Haskell.
Is this intuitive view generalizable to arbitrary datatypes (instead of lists) and formalized somewhere?
See also the thread section beginning with
http://thread.gmane.org/gmane.comp.lang.haskell.cafe/34398/focus=34435
Regards, apfelmus

Abhay Parvate wrote:
I somehow thought that talking about the complexity of calculating individual elements of an infinite list would be sufficient, but that seems to be more involved, and my over-generalization doesn't seem to work. Thanks for the link; in particular it has references to Wadler's papers on exactly this problem.
Note however that Wadler's and similar formalisms are still a bit unsatisfactory in that they are quite clumsy to work with; it's quite tedious/impossible to analyze examples with a lot of lazy evaluation. But they are a good guideline.

In his book about purely functional data structures [1], Okasaki takes a different approach: each node of a data structure is given a debit, a cost to evaluate it. For instance, consider

  xs = x1 : x2 : x3 : ... : xn : []
        1    1    1   ...    1   0

  ys = y1 : y2 : y3 : ... : ym : []
        1    1    1   ...    1   0

The numbers below indicate the time it takes to evaluate the node to weak head normal form. For demonstration purposes, I arbitrarily chose 1 for each (:) here. The combined list will then have debits like

  xs ++ ys = x1 : x2 : x3 : ... : xn : y1 : y2 : y3 : ... : ym : []
              2    2    2   ...    2    1    1    1   ...    1   0

In other words, the ys list is copied verbatim, but each element of xs incurs an additional cost of 1, corresponding to one step in the evaluation of the concatenation with (++). In order to force/inspect a constructor/node, you have to pay off its debits first. In the above example, head (xs ++ ys) would have to pay 2 units of time (one unit for head xs and one for the (++)).

Now, the thing about debits is that we can relocate them to the top and only overestimate the total running time if we do that. For instance, we could push all debits to the top:

  xs ++ ys = x1 : x2 : x3 : ... : xn : y1 : y2 : y3 : ... : ym : []
            2n+m    0    0   ...    0    0    0    0   ...    0   0

so that evaluating head (xs ++ ys) is now estimated to cost (2n+m) units of time while the rest is free/fully evaluated.

The above example is rather useless, but consider the case n == m and

  xs = x1 : x2 : x3 : ... : xn : []
        0    0    0   ...    0   0

  ys = y1 : y2 : y3 : ... : yn : []
        0    0    0   ...    0   0

i.e. two fully evaluated lists of the same length. Then we have

  xs ++ reverse ys = x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
                      1    1    1   ...    1    n    0      ...    0   0

because reversing the list ys is "monolithic", i.e. looking at its head already forces the tail of the list. But now, we can distribute the debits upwards

  xs ++ reverse ys = x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
                      2    2    2   ...    2    0    0      ...    0   0

and thereby amortize the cost of reversing the second list over the n elements of the first list. This is used in the implementation of purely functional queues, see also Okasaki's book.

[1]: Chris Okasaki. Purely Functional Data Structures.
     http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf
     (This is the thesis on which the book is based.)

Regards,
apfelmus
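(To connect this to code, here is a minimal sketch of such a two-list queue, added purely for illustration with made-up names; Okasaki's banker's queue differs in details such as the exact invariant and the use of explicit suspensions.)

  -- Invariant: the rear list is never longer than the front list. When it
  -- would become longer, it is reversed and appended to the front in one
  -- lazy suspension, whose cost the debit argument above amortizes over
  -- the preceding operations.
  data Queue a = Queue Int [a] Int [a]   -- front length, front, rear length, rear

  emptyQ :: Queue a
  emptyQ = Queue 0 [] 0 []

  check :: Int -> [a] -> Int -> [a] -> Queue a
  check lenf f lenr r
    | lenr <= lenf = Queue lenf f lenr r
    | otherwise    = Queue (lenf + lenr) (f ++ reverse r) 0 []

  snocQ :: Queue a -> a -> Queue a
  snocQ (Queue lenf f lenr r) x = check lenf f (lenr + 1) (x : r)

  unconsQ :: Queue a -> Maybe (a, Queue a)
  unconsQ (Queue _    []      _    _) = Nothing
  unconsQ (Queue lenf (x : f) lenr r) = Just (x, check (lenf - 1) f lenr r)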

Thank you, apfelmus. That was a wonderful explanation; the debit method in [1] finally makes sense.

[1]: Chris Okasaki. Purely Functional Data Structures.
     http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf

Ronald Guida wrote:
Thank you, apfelmus. That was a wonderful explanation; the debit method in [1] finally makes sense.
A diagram says more than a thousand words :) My explanation is not entirely faithful to Okasaki, let me elaborate. In his book, Okasaki calls the process of transferring the debits from the input
  xs = x1 : x2 : x3 : ... : xn : []
        1    1    1   ...    1   0

  ys = y1 : y2 : y3 : ... : ym : []
        1    1    1   ...    1   0
to the output
  xs ++ ys = x1 : x2 : x3 : ... : xn : y1 : y2 : y3 : ... : ym : []
              2    2    2   ...    2    1    1    1   ...    1   0

"debit inheritance". In other words, the debits of xs and ys (here 1 at each node) are carried over to xs ++ ys (in addition to the debits created by ++ itself). In the thesis, he doesn't give it an explicit name, but discusses this phenomenon in the very last paragraphs of chapter 3.4. The act of relocating debits from child to parent nodes as exemplified with

  xs ++ reverse ys = x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
                      1    1    1   ...    1    n    0      ...    0   0

  xs ++ reverse ys = x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
                      2    2    2   ...    2    0    0      ...    0   0
is called "debit passing", but Okasaki doesn't use it earlier than in the chapter "Implicit recursive slowdown". But the example I gave here is useful for understanding the scheduled implementation of real-time queues. The trick there is to not create a "big" suspension with n debits but to really "physically" distribute them across the data structure

  x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
   2    2    2   ...    2    2    2      ...    2   2

and discharge them by forcing a node with every call to snoc. I say "physically" because this forcing performs actual work; it does not simply "mentally" discharge a debit to amortize work that will be done later. Note that the 2 debits added to each yi are an overestimation here, but the real-time queue implementation pays for them nonetheless.

My focus on debit passing in the original explanation might suggest that debits can only be discharged when actually evaluating the node to which the debit was assigned. This is not the case; an operation may discharge any debits, even in parts of the data structure that it doesn't touch. Of course, it must discharge the debits of nodes it does touch. For instance, in the proof of theorem 3.1 (thesis) for queues, Okasaki writes "We can restore the invariant by discharging the first (two) debit(s) in the queue" without bothering to analyze which node this will be. So, the front queue might look like

  f1 : f2 : f3 : ... : fn : f{n+1} : f{n+2} : ... : fm : []
   0    0    1   ...    1    n        0      ...    0   0

and it's one of the nodes that carries one debit, or it could look like

  f2 : f3 : ... : fn : f{n+1} : f{n+2} : ... : fm : []
   0    0   ...    0    n-3      0      ...    0   0

and it's the node with the large amount of debits. In fact, it's not even immediate that these two are the only possibilities. However, with the debit passing from my previous post, it's easier to say which node will be discharged. But even then, only tail discharges exactly the debits of nodes it inspects, while the snoc operation discharges debits in the untouched front list. Of course, as soon as identifying the nodes becomes tractable, chances are that you can turn it into a real-time data structure.

Another good example is the skew heap from

  [2]: Chris Okasaki. Fun with binary heap trees.
       In J. Gibbons, O. de Moor, The Fun of Programming.
       http://www.palgrave.com/PDFs/0333992857.Pdf

Here, the "good" nodes are annotated with one debit. Every join operation discharges O(log n) of them and allocates new ones while walking down the tree, but the "time" to actually walk down the tree is not counted immediately. This is just like (++), which walks down the first list and allocates debits without immediately using O(n) time to do so.

Regards,
apfelmus

PS: In a sense, discharging arbitrary debits can still be explained with debit passing: first pass those debits to the top and then discharge them, because any operation has to inspect the top.
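(For completeness, a sketch along the lines of the scheduled "real-time" queue mentioned above — again added purely for illustration, so names and minor details may differ from the book's presentation.)

  -- Invariant: the schedule s points into the front list f; forcing one node
  -- of s per operation pays off the debits of the incremental rotation.
  data RTQueue a = RTQueue [a] [a] [a]   -- front, rear, schedule

  -- rotate f r a == f ++ reverse r ++ a, computed incrementally,
  -- called only when length r == length f + 1.
  rotate :: [a] -> [a] -> [a] -> [a]
  rotate []      (y:_)  a = y : a
  rotate (x:f)   (y:r)  a = x : rotate f r (y : a)
  rotate _       []     a = a   -- unreachable under the length invariant

  exec :: [a] -> [a] -> [a] -> RTQueue a
  exec f r (_:s) = RTQueue f r s
  exec f r []    = let f' = rotate f r [] in RTQueue f' [] f'

  snocRT :: RTQueue a -> a -> RTQueue a
  snocRT (RTQueue f r s) x = exec f (x : r) s

  unconsRT :: RTQueue a -> Maybe (a, RTQueue a)
  unconsRT (RTQueue []      _ _) = Nothing
  unconsRT (RTQueue (x : f) r s) = Just (x, exec f r s)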

Let's look at the actual reductions going on. To make the example easier, I would like to use last instead of your complicated until. It shouldn't make a difference.
[ winds up doing twice as much work ]

This was also my intuition. I had a function that built up a large output list by generating chunks and ++ them onto the output (e.g. basically concatMap). My intuition was that this would get gradually slower because each (++) implied a copy of the previous part of the list. So if I have another function consuming the eventual output it will get slower and slower because demanding an element will reduce the (++)s for each chunk, and copy the element 1000 times if I have appended 1000 chunks by now.

So I switched to using DList, which has O(1) append. Then I ran timing on a list that wound up adding up a million or so chunks... and both versions were exactly the same speed (and -O2 gave both an equally big speed boost).

In fact, when I try with a toy example, the DList using one is much slower than the concatMap using one, which I don't quite understand. Shouldn't concatMap be quite inefficient? The program below gives me this output:

15000000
45.7416
15000000
110.2112

-O2 brings both implementations to 45.

Interestingly, if I call 't1' from ghci, it sucks up 1gb of memory and slows the system to a crawl until I kill it.. this is *after* it's computed a result and given me my prompt back... even if it's somehow keeping the whole list around in some ghci variable (where?), isn't 1gb+ a lot even for 15m boxed Integers? And why does it continue to grow after the function has completed? The compiled version doesn't have this problem.

import System.CPUTime
import qualified Data.DList as DList

dconcat_map f xs = DList.toList (DList.concat (map (DList.fromList . f) xs))

mkchunk n = [n, n*2, n*3]

main = do
  t1
  t2

t1 = do
  let a = concatMap mkchunk [0..5000000]
  t <- getCPUTime
  print (last a)
  t2 <- getCPUTime
  print (fromIntegral (t2 - t) / fromIntegral cpuTimePrecision)

t2 = do
  let a = dconcat_map mkchunk [0..5000000]
  t <- getCPUTime
  print (last a)
  t2 <- getCPUTime
  print (fromIntegral (t2 - t) / fromIntegral cpuTimePrecision)

On Wed, 2008-06-11 at 17:18 -0700, Evan Laforge wrote:
Let's look at the actual reductions going on. To make the example easier, I would like to use last instead of your complicated until. It shouldn't make a difference.
[ winds up doing twice as much work ]
This was also my intuition. I had a function that built up a large output list by generating chunks and ++ them onto the output (e.g. basically concatMap). My intuition was that this would get gradually slower because each (++) implied a copy of the previous part of the list. So if I have another function consuming the eventual output it will get slower and slower because demanding an element will reduce the (++)s for each chunk, and copy the element 1000 times if I have appended 1000 chunks by now.
So I switched to using DList, which has O(1) append. Then I ran timing on a list that wound up adding up a million or so chunks... and both versions were exactly the same speed (and -O2 gave both an equally big speed boost).
In fact, when I try with a toy example, the DList using one is much slower than the concatMap using one, which I don't quite understand. Shouldn't concatMap be quite inefficient?
concatMap is very efficient. (++) isn't slow. Left-associative uses of (++) are slow.

  concatMap f = foldr ((++) . f) []
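(To spell that out with a toy example, added purely for illustration with made-up names:)

  -- Right-associated appends: xs1 ++ (xs2 ++ (xs3 ++ ...)). Each element
  -- passes through only one pending (++), so building the whole result is
  -- linear in its length. This is the shape concatMap/foldr produces.
  goodConcat :: [[a]] -> [a]
  goodConcat = foldr (++) []

  -- Left-associated appends: ((xs1 ++ xs2) ++ xs3) ++ ... . The elements of
  -- the first chunk are walked over again for every later chunk, so the
  -- total work can grow quadratically in the number of chunks.
  badConcat :: [[a]] -> [a]
  badConcat = foldl (++) []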
The program below gives me this output:
15000000
45.7416
15000000
110.2112
-O2 brings both implementations to 45.
Interestingly, if I call 't1' from ghci, it sucks up 1gb of memory and slows the system to a crawl until I kill it.. this is *after* it's computed a result and given me my prompt back... even if it's somehow keeping the whole list around in some ghci variable (where?), isn't 1gb+ a lot even for 15m boxed Integers? And why does it continue to grow after the function has completed? The compiled version doesn't have this problem.
import System.CPUTime
import qualified Data.DList as DList
dconcat_map f xs = DList.toList (DList.concat (map (DList.fromList . f) xs))
mkchunk n = [n, n*2, n*3]
main = do
  t1
  t2

t1 = do
  let a = concatMap mkchunk [0..5000000]
  t <- getCPUTime
  print (last a)
  t2 <- getCPUTime
  print (fromIntegral (t2 - t) / fromIntegral cpuTimePrecision)

t2 = do
  let a = dconcat_map mkchunk [0..5000000]
  t <- getCPUTime
  print (last a)
  t2 <- getCPUTime
  print (fromIntegral (t2 - t) / fromIntegral cpuTimePrecision)
participants (10)

- Abhay Parvate
- Adrian Neumann
- apfelmus
- Derek Elkins
- Evan Laforge
- Joachim Breitner
- Lanny Ripple
- Martin Geisler
- Ronald Guida
- Tillmann Rendel