
Fellow Haskellers,

The 3rd release candidate for Edison 1.2 is now available. URLs for the project appear at the end of this message. As with previous pre-releases, I am requesting comments on the API in particular. I believe I am approaching the point where I will feel comfortable freezing the 1.2 API, and I hope this will be the last release candidate before 1.2 final.

Major changes in 1.2rc3 are:

* introduce the ambiguous/unambiguous concept and document all API operations
* factor out methods which "mirror" superclass methods and make them aliases instead
* add lookupAndDelete* methods to associated collections
* change the type of adjustOrDelete* in associated collections
* rename subset/subsetEq to properSubset/subset
* add matching Read and Show instances for all concrete data structures
* add properSubmap{By}, submap{By} and sameMap{By} to the associated collection API
* add Eq instances for concrete associated collections
* break out the test suite into a separate sub-package

The major remaining API issues I wish to request comments on have to do with class instances:

1) RC3 introduces Read and Show instances for all native Edison data structures. However, StandardMap and StandardSet are just type aliases for Data.Map and Data.Set, so I can't provide similar Read and Show instances for them. Should I instead newtype these implementations so that I can provide consistent class instances (as well as any other class instances I end up providing)?

2) Regarding Typeable and Data: what are your thoughts about the best way to provide Typeable and Data instances for abstract data types? I've considered playing the same trick I use for Read/Show (internal conversions to/from lists). What do you think?

3) Are there any other class instances that are important to supply?

And one final meta-API question: what are your thoughts about portability in general?
I would like to support as many implementations as possible, but I'm already relying on MPTCs, fundeps, and undecidable instances (which may or may not be in H'), and I'm considering the following:

-- instances of Typeable/Data (listed as non-portable)
-- assertions (for checking preconditions)
-- newtype deriving

Anyway, I'm interested in your thoughts about how important portability is.

----------------------------------------
How to get Edison:

Project page: http://www.eecs.tufts.edu/~rdocki01/edison.html
Docs: http://www.eecs.tufts.edu/~rdocki01/docs/edison/index.html
Source tarball: http://www.eecs.tufts.edu/~rdocki01/projects/edison-1.2rc3-source.tar.gz
Darcs repo: http://www.eecs.tufts.edu/~rdocki01/edison/

Thanks!
Rob Dockins
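For question (1), one possible shape of the newtype approach is sketched below. The names here ('StdSet' and so on) are illustrative, not Edison's actual API; the point is just that a wrapper lets you route Read/Show through a list conversion, matching the trick used for the native structures:

```haskell
import qualified Data.Set as S

-- Hypothetical wrapper around Data.Set; 'StdSet' is an
-- illustrative name, not part of the Edison API.
newtype StdSet a = StdSet (S.Set a)

-- Show via conversion to an ascending element list, mirroring
-- the to/from-list trick described for the native structures.
instance Show a => Show (StdSet a) where
  showsPrec d (StdSet s) = showsPrec d (S.toAscList s)

-- Read by parsing a list and rebuilding the set.
instance (Read a, Ord a) => Read (StdSet a) where
  readsPrec d s = [ (StdSet (S.fromList xs), rest)
                  | (xs, rest) <- readsPrec d s ]
```

With this shape, any further instances (Eq, Ord, Typeable, ...) could hang off the same newtype.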

Hello Robert, Wednesday, April 5, 2006, 3:50:41 AM, you wrote:
And one final meta-API question; what are your thoughts about portability in general? I would like to support as many implementations as possible, but I'm already relying on MPTC, fundeps and undecidable instances (which may or may not be in H'), and I'm considering the following:
-- instances of Typeable/Data (listed as non-portable) -- assertions (for checking preconditions) -- newtype deriving
Anyway, I'm interested in your thoughts about how important portability is.
according to my experience, one can't write real programs in H98. one of the reasons H' was started was to define a standard for writing such programs. at the moment GHC and Hugs are close enough to what we should see in this standard: they support MPTCs, FDs, Typeable, and more flexible instance declarations, and i think you can handle assertions via a preprocessor. so i recommend you make GHC+Hugs compatibility the first goal, but avoid features that they both support but that have no exact definition (such as undecidable instances), as i do myself. this allows you and your users to use the two best Haskell implementations - one for debugging, the other for production code - and almost ensures that your code will be H'-compatible or require only minimal changes.

about Typeable - i attached a file from the ghc sources that allows you to define instances for both Hugs and GHC. you can see examples of its use in the base libraries. it works for me on ghc 6.4.1 and hugs 2005.

about undecidable instances - hugs and ghc 6.5 contain a more liberal rule for checking decidability, and i hope it will be included in H'; see details at http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/FlexibleInstan...

and last but not least ;) language extensions that we absolutely need to use will probably go into the standard. maybe the library/app authors that use some extensions should write about this on the appropriate H' proposal pages to let the committee know?

ps: maybe we can/should create a page listing the extensions that will probably go into the standard? or just adjust the corresponding fields in the existing table? such info would be very helpful for developers, imho. not everyone has enough time and willingness to follow all of the H' discussions

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

[removing cross-posting to H'] On Apr 5, 2006, at 4:54 AM, Bulat Ziganshin wrote:
Hello Robert,
Wednesday, April 5, 2006, 3:50:41 AM, you wrote:
And one final meta-API question; what are your thoughts about portability in general? I would like to support as many implementations as possible, but I'm already relying on MPTC, fundeps and undecidable instances (which may or may not be in H'), and I'm considering the following:
-- instances of Typeable/Data (listed as non-portable) -- assertions (for checking preconditions) -- newtype deriving
Anyway, I'm interested in your thoughts about how important portability is.
according to my experience, one can't write real programs in H98. one of the reasons H' was started was to define a standard for writing such programs. at the moment GHC and Hugs are close enough to what we should see in this standard: they support MPTCs, FDs, Typeable, and more flexible instance declarations, and i think you can handle assertions via a preprocessor. so i recommend you make GHC+Hugs compatibility the first goal, but avoid features that they both support but that have no exact definition (such as undecidable instances), as i do myself. this allows you and your users to use the two best Haskell implementations - one for debugging, the other for production code - and almost ensures that your code will be H'-compatible or require only minimal changes.
This is my sincere hope. Edison has been waiting for almost ten years for Haskell to develop to the point that it can be standards-compliant!
about Typeable - i attached a file from the ghc sources that allows you to define instances for both Hugs and GHC. you can see examples of its use in the base libraries. it works for me on ghc 6.4.1 and hugs 2005.
This is interesting. However, I think I'd like to stay away from dipping into C for this project.
about undecidable instances - hugs and ghc 6.5 contain a more liberal rule for checking decidability, and i hope it will be included in H'; see details at http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/FlexibleInstances
and last but not least ;) language extensions that we absolutely need to use will probably go into the standard. maybe the library/app authors that use some extensions should write about this on the appropriate H' proposal pages to let the committee know?
That is a good idea.
ps: maybe we can/should create a page listing the extensions that will probably go into the standard? or just adjust the corresponding fields in the existing table? such info would be very helpful for developers, imho. not everyone has enough time and willingness to follow all of the H' discussions
Perhaps. However, I've been following the discussion, and I'm still having a hard time telling which way the wind is blowing, especially with respect to MPTC. I think we may have to wait a little longer before we start drawing up the odds.
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
Thanks for your comments! Rob Dockins Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

On Wed, 05 Apr 2006, Robert Dockins wrote:
The 3rd release candidate for Edison 1.2 is now available.
In the documentation you write:

  An instance of an abstract data type is considered indistinguishable from another if all possible applications of unambiguous operations to both yield indistinguishable results. In this context, we consider bottom to be distinguishable from non-bottom results, and indistinguishable from other bottom results.

I believe this wording is somewhat too strong. It is usually very hard to achieve what you are claiming above, since (among other things) bottom and const bottom are distinguishable.

Typically, when Haskellers claim various properties for their functions they ignore bottoms (and often also infinite values), reasoning in a "fast and loose" way. This is seldom explicitly mentioned, though, so I guess many may not be aware of it. Your statement makes it seem as if you have done more than the ordinary, fast and loose, reasoning, and I find it misleading.

Of course, if you have actually gone to the trouble of verifying your statement for all the data structures, then you should keep the statement (with an extra note about proofs written or tests performed).

-- /NAD
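The bottom vs. const bottom point can be made concrete with `seq`: a partial application is already in weak head normal form, so forcing it succeeds where forcing bottom itself diverges. A quick illustrative sketch:

```haskell
-- 'const undefined' is a partial application, already in weak
-- head normal form, so seq forces it harmlessly:
notBottom :: Bool
notBottom = const undefined `seq` True

-- ...whereas forcing 'undefined' itself is bottom:
-- (undefined :: Int) `seq` True   -- would diverge if evaluated
```

So any contract claiming the two are indistinguishable is refuted by an observer that uses `seq`.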

On Apr 5, 2006, at 5:12 AM, Nils Anders Danielsson wrote:
On Wed, 05 Apr 2006, Robert Dockins wrote:
The 3rd release candidate for Edison 1.2 is now available.
In the documentation you write:
An instance of an abstract data type is considered indistinguishable from another if all possible applications of unambiguous operations to both yield indistinguishable results. In this context, we consider bottom to be distinguishable from non-bottom results, and indistinguishable from other bottom results.
I believe this wording is somewhat too strong. It is usually very hard to achieve what you are claiming above, since (among other things) bottom and const bottom are distinguishable.
Ugh! Of course, you are right (eta-conversion bites again!). Most of this paragraph is left over from when I was using the terms 'ambiguous' vs 'well-defined' rather than 'ambiguous' vs 'unambiguous'. I felt I needed to say that 'well-defined' != 'not bottom', which meant I had to talk about bottom. Now that I'm using 'unambiguous', which doesn't invite that particular confusion, I should probably rewrite that paragraph.
Typically when Haskellers claim various properties for their functions they ignore bottoms (and often also infinite values), reasoning in a "fast and loose" way. This is seldom explicitly mentioned, though, so I guess many may not be aware of this. Your statement makes it seem as if you have done more than the ordinary, fast and loose, reasoning, and I find it misleading.
So you think it would be clearer if I just ignored bottom as well? That's certainly easier on me; it means people can't start providing nasty counterexamples to my function contracts using 'seq'!
Of course, if you have actually gone to the trouble of verifying your statement for all the data structures, then you should keep the statement (with an extra note about proofs written or tests performed).
No, this has not been done. I have thought about employing Programmatica to formalize the contracts and verify the implementations, but it's just a pipe dream for now.
-- /NAD
Thanks for pointing that out (and for letting me know that at least one person out there reads the documentation! ;-) ) Rob Dockins Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

On Wed, 05 Apr 2006, Robert Dockins wrote:
So you think it would be clearer if I just ignored bottom as well?
It would be imprecise instead of wrong. :)

Another option is to go for some sort of approximate semantics which still includes bottoms. You could, for instance, assume that bottom = const bottom, and state your results in that context. Verifying the results would still mean a lot of work, though.

By the way, does your library have a QuickCheck test suite? In that case it is often not too hard to test properties involving bottoms. I have a library which may be of help: http://www.cs.chalmers.se/~nad/software/ChasingBottoms/docs/

-- /NAD
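A small sketch of the kind of property this enables, using `isBottom` from the ChasingBottoms library (the properties here are illustrative, not taken from Edison's suite):

```haskell
import Test.ChasingBottoms (isBottom)

-- 'isBottom' evaluates its argument to WHNF and catches the
-- resulting exception, so errors like a failed head register
-- as bottom (nonterminating bottoms are not detected):
headOfEmptyIsBottom :: Bool
headOfEmptyIsBottom = isBottom (head ([] :: [Int]))

-- Ordinary values are not bottom:
oneIsFine :: Bool
oneIsFine = not (isBottom (1 :: Int))
```

Properties like these can then be fed to QuickCheck alongside the ordinary, bottom-free ones.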

On Apr 5, 2006, at 11:34 AM, Nils Anders Danielsson wrote:
On Wed, 05 Apr 2006, Robert Dockins wrote:
So you think it would be clearer if I just ignored bottom as well?
It would be imprecise instead of wrong. :)
I suppose if you force the reader to evaluate documentation non-deterministically, then you can presume your documentation is correct if there is any correct interpretation they might have chosen ;-)
Another option is to go for some sort of approximate semantics which still includes bottoms. You could for instance assume that bottom = const bottom, and state your results in that context. Verifying the results would still mean a lot of work, though.
Humm. Well, I'd actually prefer not to say anything about bottom. I'm really only interested in telling the user that the output of a function is not completely determined by the contract. <shakes fist> Dang you, bottom! I'll have to think some more about what to do here and see if there's a nice way to sidestep the issue entirely.
By the way, does your library have a QuickCheck test suite?
Yes indeed.
In that case it is often not too hard to test properties involving bottoms. I have a library which may be of help: http://www.cs.chalmers.se/~nad/software/ChasingBottoms/docs/
Nice! I notice, however, that it relies on exceptions and implicit parameters. I'm willing to accept less portability in the test suite than in the library proper, but I feel that implicit parameters are pushing it a bit, since they appear unlikely to make it into H'.
-- /NAD
Rob Dockins Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

On Wed, 05 Apr 2006, Robert Dockins wrote:
I notice, however, that it relies on exceptions and implicit parameters. I'm willing to accept less portability in the test suite than in the library proper, but I feel that implicit parameters are pushing it a bit, since they appear unlikely to make it into H'.
Ah, I've been meaning to get rid of the implicit parameters. There's now a new version, without implicit parameters. Exceptions are critical for the library, though. -- /NAD

On Apr 5, 2006, at 11:33 AM, Christian Maeder wrote:
Robert Dockins wrote:
Major changes in 1.2rc3 are: [..] * add Eq instances for concrete associated collections
Why not also Ord instances, e.g. for UnbalancedSet?
That's a good idea. However, I'm not sure what the "right" total order would be. The obvious choice that comes to mind is the lexicographic order on the ascending listing of elements. Another attractive option is to extend the partial order defined by 'subset' into a total order in some way (any ideas)?

<looks at standard lib docs> I see Data.Set has an Ord instance. I suppose to be practical I should adopt the ordering used by Data.Set. <looks some more> Ohh! So does Data.Map. I guess I should browse the source to see how those orders are defined.

<aside> I think it's a bit irritating that Haddock doesn't pay attention to docs attached to instances. It is sometimes nice to add a few sentences about how an instance is implemented.
Cheers Christian
Thanks! Rob Dockins Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

Robert Dockins wrote:
Why not also Ord instances, e.g. for UnbalancedSet?
That's a good idea. However, I'm not sure what the "right" total order would be. The obvious thing that comes to me is the lexicographic order on the ascending listing of elements.
That's the one I would expect (and the one used by Data.Set and Data.Map) C.
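That claim is easy to check against Data.Set directly; a quick sketch (the helper name is illustrative):

```haskell
import qualified Data.Set as S

-- Data.Set's Ord instance is the lexicographic order on the
-- ascending element listing, so comparing two sets should agree
-- with comparing their sorted element lists:
agrees :: S.Set Int -> S.Set Int -> Bool
agrees s1 s2 =
  compare s1 s2 == compare (S.toAscList s1) (S.toAscList s2)
```

A QuickCheck property over arbitrary element lists would exercise the same equation more thoroughly.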
participants (4)
-
Bulat Ziganshin
-
Christian Maeder
-
Nils Anders Danielsson
-
Robert Dockins