
Hi Andrew,

I'd never actually thought of that, but it's somewhat less of an issue since records/data types tend to have only a small number of elements, whereas arrays tend to have a large number of them. One thing that just occurred to me, though, for data types (and hence records): updates really could be made logarithmic in the number of fields, if the compiler or programmer needed it. For instance, one could transform:
data Foo a b c d = Foo a b c d
into
data Foo a b c d = Foo1 (Foo2 a b) (Foo2 c d)
data Foo2 a b = Foo2 a b
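To make the trade-off concrete, here is a small, self-contained sketch of what a single-field update looks like under each layout (the type and function names below are mine, not from the thread, and grouping the fields into pairs is just one possible choice):

-- Flat layout: updating one field pattern-matches on, and rebuilds, the
-- whole four-field constructor, so the cost grows with the number of fields.
data FlatFoo a b c d = FlatFoo a b c d

setFlatC :: c -> FlatFoo a b c d -> FlatFoo a b c d
setFlatC x (FlatFoo a b _ d) = FlatFoo a b x d

-- Nested layout: fields are grouped pairwise, so an update only rebuilds the
-- constructors on the path to the field; the untouched (Pair a b) is shared.
data Pair a b          = Pair a b
data NestedFoo a b c d = NestedFoo (Pair a b) (Pair c d)

setNestedC :: c -> NestedFoo a b c d -> NestedFoo a b c d
setNestedC x (NestedFoo ab (Pair _ d)) = NestedFoo ab (Pair x d)

-- Reads now pay for the extra indirection: one pattern match per level.
getNestedC :: NestedFoo a b c d -> c
getNestedC (NestedFoo _ (Pair c _)) = c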
This would make updates happen in logarithmic time, but reads would now also take log time, instead of constant time (I assume reads take constant time, though I haven't verified it). Obviously one could and should be smarter about doing this: if the program this datatype is used in tends to modify a and c simultaneously, then those should be put together. Anyway, I assume no one does this, probably because it would slow down reads... but I wonder whether one could detect that a datatype is modified frequently and, if so, transform it like this. I also wonder if it would really matter -- the constant factors would be higher, so you'd really have to have a large datatype to notice the difference, I'd imagine.

 - Hal

--
Hal Daume III | hdaume@isi.edu
"Arrest this man, he talks in maths." | www.isi.edu/~hdaume

On Sun, 2 Nov 2003 ajb@spamcop.net wrote:
G'day all.
Quoting Hal Daume III:

> In general, I think the name "array" for these data structures is a bit misleading, since nearly everyone expects an array to have constant time read and update, while these only have constant time read.
It's no worse than "record" in this respect. Wouldn't everyone expect a field update to be constant time, not linear in the number of fields?
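To make that concrete, a tiny sketch (record and field names are made up) of roughly what a record update amounts to: the update syntax rebuilds the constructor, copying every field that isn't being changed, so the work is linear in the field count.

data R = R { f1 :: Int, f2 :: Int, f3 :: Int, f4 :: Int }

bumpF3 :: R -> R
bumpF3 r = r { f3 = f3 r + 1 }
-- ... which behaves roughly like writing out:
-- bumpF3 (R a b c d) = R a b (c + 1) d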
Cheers,
Andrew Bromage