
The answers are making this question seem trickier than I'd thought it was, because so far they (both) make it sound like structure-sharing is tied very closely to names / variables. For instance:
In Haskell, you can arrange for a large data structure to be shared by giving it a name, and then using the name wherever you'd use a pointer in some other language.
The idea here seems to be that you have a name that refers specifically to the structure you wish to share (as opposed to needing only an expression whose value is that structure), and then there are two possible interpretations:

One possible interpretation is: this is *a* way to get sharing, to show that it's possible to have shared structures.

The other possible interpretation is: this is *the* (only) way to do it.

What I'd expected was that it would suffice to have an expression (and indeed that implementationally it would be much like Lisp or Java, so that values were (except in exceptional cases) secretly "pointers to").

-- Jeff

In my previous example,
    t = [0..]
    b = 3 : t
    c = 5 : t
lists b and c share t, but in
    x = 3 : [0..]
    y = 5 : [0..]
lists x and y share nothing. Extensionally, they have the same values as b and c, but each has its own copy of [0..]. Unless, that is, the compiler is clever enough to recognize the subexpression [0..] which x and y have in common. I don't know whether any Haskell compilers look for common subexpressions, but it's certainly an option.

So the question of whether names are "*the* (only) way" to obtain sharing isn't really a language question -- it's more of a compiler question.

Thanks to Haskell's referential transparency, whether or not structures -- or the expressions that define them -- are shared doesn't affect Haskell programs' semantics; it affects only their efficiency. That "only" is not intended to depreciate the significance of efficiency. It's just that one of the benefits of referential transparency is that it allows an unusually clean separation of concerns between efficiency and correctness.

The absence of any update operation means that it's impossible to write a program that can detect whether a structure is being shared, so sharing is extremely common. And indeed, in the typical implementation values are nearly always implemented by pointers. In a function call, for instance, all occurrences of a parameter in the function's definition point to the same argument, thereby sharing it (another example in which sharing involves names). If the argument is an expression not in normal form, it's never evaluated more than once: on the first evaluation, the value replaces the argument, and all occurrences of the parameter share that value.

--Ham

At 11:07 AM -0600 12/7/01, Jeff Dalton wrote:
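That once-only evaluation can be made observable with Debug.Trace. The following is a small sketch, not from the thread; the names `expensive` and `twice` are made up for illustration. Because both occurrences of the parameter point at the same thunk, the trace message appears only once.

```haskell
import Debug.Trace (trace)

-- A hypothetical "expensive" argument whose evaluation we can observe:
-- trace prints its message (to stderr) when the thunk is first forced.
expensive :: Int
expensive = trace "evaluating!" (sum [1..1000])

-- Both occurrences of x refer to the same thunk, so the argument
-- is evaluated at most once, and its value is then shared.
twice :: Int -> Int
twice x = x + x

main :: IO ()
main = print (twice expensive)
-- "evaluating!" appears once, not twice; the printed result is 1001000.
```

Under call-by-need, the first use of x overwrites the thunk with the value 500500, and the second use sees the already-evaluated result.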
------------------------------------------------------------------
Hamilton Richards, PhD          Department of Computer Sciences
Senior Lecturer                 Mail Code C0500
512-471-9525                    The University of Texas at Austin
Taylor Hall 5.138               Austin, Texas 78712-1188
ham@cs.utexas.edu
hrichrds@swbell.net
------------------------------------------------------------------

Hamilton Richards writes:

 | In my previous example,
 |
 | > t = [0..]
 | > b = 3 : t
 | > c = 5 : t
 |
 | lists b and c share t, but in
 |
 | > x = 3 : [0..]
 | > y = 5 : [0..]
 |
 | lists x and y share nothing. Extensionally, they have the same
 | values as b and c, but each has its own copy of [0..].
 | Unless, that is, the compiler is clever enough to recognize the
 | subexpression [0..] which x and y have in common. I don't know
 | whether any Haskell compilers look for common subexpressions, but
 | it's certainly an option.
 | So the question of whether names are "*the* (only) way" to
 | obtain sharing isn't really a language question-- it's more of a
 | compiler question.

Hi. It's a difficult question for a compiler, especially as sharing isn't always a good thing.

    let t = [0..]
        b = 3 : t
        c = 5 : t
    in  b !! (c !! 1000000)    -- likely to have a space leak

    let x = 3 : [0..]
        y = 5 : [0..]
    in  x !! (y !! 1000000)    -- compact, unless transformed to gain sharing

(The space leak arises because a million cells of t are allocated, but the garbage collector can't free them because they're still reachable via b.)

Because of examples like that, I prefer to stick with a simple name-based sharing scheme.

Regards, Tom
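The two let-expressions above differ only in sharing, not in meaning. Here is a runnable sketch of that point (not from the thread): the same two expressions, with the index shrunk from 1000000 to 1000 so it finishes quickly, and the top-level names `shared`/`unshared` invented for the example.

```haskell
-- Sharing version: b and c both hang off the single list t.
shared :: Int
shared =
  let t = [0..]
      b = 3 : t
      c = 5 : t
  in  b !! (c !! 1000)

-- Non-sharing version: x and y each get their own copy of [0..].
unshared :: Int
unshared =
  let x = 3 : [0..]
      y = 5 : [0..]
  in  x !! (y !! 1000)

main :: IO ()
main = print (shared == unshared)
-- Prints True: referential transparency guarantees the same value;
-- only the space behavior differs between the two versions.
```

Working it through: `c !! 1000` skips the 5 and yields `t !! 999 = 999`, and `b !! 999` yields `t !! 998 = 998`, identically in both versions.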
participants (3)
- Hamilton Richards
- Jeff Dalton
- Tom Pledger