
In Haskell, the reason for not doing this (besides simplicity, and inertia) actually is (I think) laziness: you don't want to evaluate the "length" field of the second argument of the "cons" prematurely.
Yes, this is what I meant. I shouldn't have spoken about complexity; it was
irrelevant, as you pointed out.
2011/5/24 Johannes Waldmann
Yves Parès writes:
For instance, one of my friends asked me once why the operation of calculating the length of a list has O(n) complexity, since in his opinion you could just store the size inside the list and increment it when elements are appended.
Then tell me: why does calculating the length of a (Haskell) list have O(n) complexity?
I could just store the length of the list, as an additional argument to the "Cons" constructor that is automatically initialized on construction (and you never need to change it later, since Haskell objects are "immutable", in the words of Java programmers).
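A rough sketch of what such a length-carrying list could look like (SizedList, cons and size are made-up names for illustration, not a library type):

-- A hypothetical cons list that stores its length in every cell.
data SizedList a = Nil | Cons Int a (SizedList a)

-- Smart constructor: initialise the length field on construction.
cons :: a -> SizedList a -> SizedList a
cons x xs = Cons (1 + size xs) x xs

-- O(1) length: just read the cached field.
size :: SizedList a -> Int
size Nil          = 0
size (Cons n _ _) = n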
In Haskell, the reason for not doing this (besides simplicity, and inertia) actually is (I think) laziness: you don't want to evaluate the "length" field of the second argument of the "cons" prematurely.
Well, unless you look at the "length" field of the constructed object, it won't be evaluated, but a pile of thunks will be constructed instead, and I think that's what you really want to avoid.
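To make the trade-off concrete, continuing the hypothetical SizedList sketch above: with a lazy length field each cons merely records a suspended (1 + ...), so the additions pile up as thunks; making the field strict avoids the thunks but forces the tail's length at every cell, which breaks lazy construction.

-- Lazy Int field (as in the SizedList sketch above): building
--   cons 'a' (cons 'b' (cons 'c' Nil))
-- records the unevaluated thunk  1 + (1 + (1 + 0))  as the length,
-- which is only collapsed when somebody finally demands it.

-- Strict Int field: evaluating a cell forces the length of its tail,
-- and with it the whole spine behind it, so lazily or infinitely
-- built lists are no longer possible.
data StrictSized a = SNil | SCons !Int a (StrictSized a)

scons :: a -> StrictSized a -> StrictSized a
scons x xs = SCons (1 + ssize xs) x xs

ssize :: StrictSized a -> Int
ssize SNil          = 0
ssize (SCons n _ _) = n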
On the other hand, there are implementations of strict sequences with an O(1) size operation.
http://hackage.haskell.org/packages/archive/containers/0.4.0.0/doc/html/Data...
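For instance, assuming the truncated link above points at Data.Sequence in the containers package, its finger-tree sequences give an O(1) length:

import qualified Data.Sequence as Seq
import Data.Sequence ((|>))

main :: IO ()
main = do
  let s = Seq.fromList "haskell"   -- a finger tree, not a plain list
  print (Seq.length s)             -- O(1): the size is cached in the tree
  print (Seq.length (s |> '!'))    -- appending at the end is also O(1)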
J.W.