
Another trick is lazy evaluation. Rather than building the actual data
structure fully, Haskell creates a thunk, that is, a reference to a
recipe that, when evaluated, will produce the desired value. And it does
this recursively. So when you say let foo = Foo bar baz, neither bar nor
baz is necessarily evaluated at all; you just get a recipe for a Foo based
on two other recipes for its fields. Creating another Foo based on this
one, but with one field swapped out for a different value, merely creates a
new recipe, and evaluating the new data structure will not descend into
field values that no longer appear in the recipe.
This stuff takes a while to settle in, but it's actually quite neat.
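To make the "recipe" picture concrete, here is a tiny sketch (the Foo type, the field names, and the Debug.Trace calls are just illustrative, not anything from the discussion above):

    import Debug.Trace (trace)

    -- A hypothetical two-field record.
    data Foo = Foo { bar :: Int, baz :: Int }

    main :: IO ()
    main = do
      let expensiveBar = trace "evaluating bar" (sum [1 .. 1000000])
          expensiveBaz = trace "evaluating baz" (sum [1 .. 2000000])
          foo  = Foo expensiveBar expensiveBaz  -- nothing is evaluated yet
          foo' = foo { baz = 42 }               -- a new recipe; bar is shared
      print (baz foo')  -- prints 42; neither trace message ever appears

If you later force bar foo', "evaluating bar" is printed exactly once, and the evaluated result is then shared by foo and foo' alike, since both point at the same thunk.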
On Jul 14, 2016 8:27 AM, "Bardur Arantsson" wrote:
On 07/14/2016 08:07 AM, Christopher Howard wrote:
Hi again all. From some online research, I understand that operations on complex immutable structures (for example, a "setter" function a -> (Int, Int) -> Matrix -> Matrix which alters one element in the Matrix) are not necessarily inefficient in Haskell, because the (compiler? runtime?) has some means of sharing unchanged values between the input and output structure. What I am not clear on, however, is how this works, and how you ensure that this happens. Could someone perhaps elaborate, or point me to a really good read on the subject?
https://en.wikipedia.org/wiki/Persistent_data_structure
See in particular the section "Trees". (It basically works the same for records; just imagine that a record is a 2-level tree, where each field/label in the record is an edge from the "field label" to the "field value". Multiple levels of records work essentially the same as multi-level trees do.)
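A minimal sketch of that path-copying idea, using a toy binary tree (the Tree type and the setAt helper below are made up for illustration; they are not from any particular library):

    -- A toy persistent binary tree.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- "Set" the value at the end of a path (False = go left, True = go right).
    -- Only the nodes along the path are rebuilt; every subtree off the path
    -- is reused, by reference, in the new tree.
    setAt :: [Bool] -> a -> Tree a -> Tree a
    setAt _          x Leaf         = Node Leaf x Leaf
    setAt []         x (Node l _ r) = Node l x r               -- l and r shared
    setAt (False:ps) x (Node l v r) = Node (setAt ps x l) v r  -- r shared
    setAt (True:ps)  x (Node l v r) = Node l v (setAt ps x r)  -- l shared

An update allocates only O(depth) new nodes, and because nothing is ever mutated, sharing the untouched subtrees between the old and the new tree is always safe. A record update like foo { baz = ... } is just the two-level special case, and a Matrix built on a tree-shaped map (say, Data.Map keyed by (Int, Int)) gets the same behaviour from its insert; a flat unboxed array, by contrast, would have to be copied in full.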