
On Thu, Dec 08, 2005 at 11:29:44AM -0800, Jeremy Shaw wrote:
> > Why should inferring uniqueness be all that fragile? A uniqueness
> > checker can be rather robust, as is demonstrated by the Clean one.
>
> Fragile could refer to the fact that a relatively small-looking change
> to your code could have an enormous impact on its runtime, because you
> unknowingly changed a value from being used uniquely to being used
> non-uniquely.
>
> In Clean, the annotations allow you to enforce uniqueness, so this
> change would be caught by the type-checker. But if the uniqueness is
> *only* inferred, then the user has to be very careful about ensuring
> uniqueness if they want the performance gains associated with it -- and
> they have to do it without the help of the type-checker.
>
> Having written a bit of Clean code, I can say that it is very easy to
> accidentally un-uniquify things.
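
For readers who haven't written any Clean, here is roughly the kind of
code we are talking about. I'm writing it from memory and haven't fed it
to a compiler, so treat it as a sketch rather than gospel; the module and
function names are made up for the example.

  module Unique

  import StdEnv

  // The * attribute promises that no other reference to the array
  // exists, so the compiler may overwrite element 0 in place instead
  // of allocating a new array.
  incFirst :: *{#Int} -> *{#Int}
  incFirst a
    # (x, a) = a![0]      // read element 0, threading the unique array
    = {a & [0] = x + 1}   // destructive update, only legal on a unique array

  // The "obvious" one-liner below mentions 'a' twice, so 'a' is shared
  // and no longer unique.  With explicit attributes the type checker
  // rejects it; if uniqueness were only inferred, nothing would complain
  // and the update would quietly turn into a copy.
  //incFirstShared :: *{#Int} -> *{#Int}
  //incFirstShared a = {a & [0] = a.[0] + 1}

  Start :: Int
  Start
    # a = incFirst (createArray 3 0)
    = a.[0]
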
What you describe is exactly my point, but I probably didn't express it
as clearly as you did. Thanks!

Best regards
Tomasz

--
I am searching for a programmer who is good at least in some of
[Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland