

I'll just quickly mention one factor that contributes:

* In 2.5 years we've gone from 10 libraries on Hackage to 2023 (literally!)

That is a massive API to try to manage, hence the continuing move to focus on automated QA on Hackage, and automated tools -- no one wants to have to resolve those dependencies by hand.

-- Don

jgoerzen wrote:
It is somewhat of a surprise to me that I'm making this post, given that there was a day when I thought Haskell was moving too slow ;-)
My problem here is that it has become rather difficult to write software in Haskell that will still work with newer compiler and library versions in future years. I have Python code of fairly significant complexity that only rarely requires any change due to language or library changes. This is not so with Haskell.
Here is a prime example. (Name hidden because my point here isn't to single out one person.) This is a patch to old-locale:
Wed Sep 24 14:37:55 CDT 2008  xxxxx@xxxxx.xxxx
  * Adding 'T' to conform better to standard
    This is based on information found at
    http://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations

diff -rN -u old-old-locale/System/Locale.hs new-old-locale/System/Locale.hs
--- old-old-locale/System/Locale.hs     2010-04-23 13:21:31.381619614 -0500
+++ new-old-locale/System/Locale.hs     2010-04-23 13:21:31.381619614 -0500
@@ -79,7 +79,7 @@
 iso8601DateFormat mTimeFmt =
     "%Y-%m-%d" ++ case mTimeFmt of
          Nothing  -> ""
-         Just fmt -> ' ' : fmt
+         Just fmt -> 'T' : fmt
A one-character change. Harmless? No. It entirely changes what the function does. Virtually any existing user of that function will be entirely broken. Of particular note, it caused significant breakage in the date/time handling functions in HDBC.
Now, one might argue that the function was incorrectly specified to begin with. But a change like this demands a new function; the original one ought to be kept, with a comment explaining the situation.
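To see concretely what the one-character patch does, here is a side-by-side sketch (the Old/New suffixes are mine; the real function is System.Locale.iso8601DateFormat):

```haskell
-- Sketch of the function before and after the patch; the Old/New
-- names are mine, added for comparison.
iso8601DateFormatOld :: Maybe String -> String
iso8601DateFormatOld mTimeFmt =
  "%Y-%m-%d" ++ case mTimeFmt of
    Nothing  -> ""
    Just fmt -> ' ' : fmt   -- old: date and time joined by a space

iso8601DateFormatNew :: Maybe String -> String
iso8601DateFormatNew mTimeFmt =
  "%Y-%m-%d" ++ case mTimeFmt of
    Nothing  -> ""
    Just fmt -> 'T' : fmt   -- new: ISO 8601 combined form with 'T'

main :: IO ()
main = do
  putStrLn (iso8601DateFormatOld (Just "%H:%M:%S"))  -- %Y-%m-%d %H:%M:%S
  putStrLn (iso8601DateFormatNew (Just "%H:%M:%S"))  -- %Y-%m-%dT%H:%M:%S
```

Every format string built with the old function now prints and parses differently, which is exactly the kind of silent breakage HDBC hit.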
My second example was the addition of instances to the time package. This broke code where the omitted instances were defined locally. Worse, the version number was not bumped in a significant way to permit testing for the condition, and thus conditional compilation, via Cabal. See http://bit.ly/cBDj3Q for more on that one.
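To illustrate the instance problem with made-up names (Pretty and Day below are stand-ins, not the real time API): code that filled the gap locally breaks the moment the library adds the same instance, because instances cannot be hidden by imports.

```haskell
-- Stand-ins for a library's class and type (illustrative names only).
class Pretty a where
  pretty :: a -> String

data Day = Day Int

-- The library omitted this instance, so the application defined it locally:
instance Pretty Day where
  pretty (Day n) = "day " ++ show n

-- If a later release of the library adds its own `instance Pretty Day`,
-- GHC rejects this module with "Duplicate instance declarations", even
-- though no exported names or type signatures changed.
main :: IO ()
main = putStrLn (pretty (Day 1))
```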
I could also cite the habit of Hackage to routinely get more and more pedantic, rejecting packages that uploaded fine previously; renaming the old exception model to OldException instead of introducing the new one with a different name (thus breaking much exception-using code), etc.
My point is not that innovation in this community is bad. Innovation is absolutely good, and I don't seek to slow it down.
But rather, my point is that stability has value too. If I can't take Haskell code written as little as 3 years ago and compile it on today's platform without errors, we have a problem. And there is a significant chunk of code that I work with that indeed wouldn't work in this way.
I don't have a magic bullet to suggest here. But I would first say that this is a plea for people who commit to core libraries to please bear in mind the implications of what you're doing. If you change a time format string, you're going to break code. If you introduce new instances, you're going to break code. These are not changes that should be made lightly, and if they must be made (I'd say there's a stronger case for the time instances than the s/ /T/ change), then the version number must be bumped significantly enough to be Cabal-testable.
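"Cabal-testable" can work like this: had the dependency's version been bumped significantly, a client could keep its local definitions behind the CPP macros Cabal generates. A sketch, with a made-up version threshold and made-up class/type names (the real change was in the time package, but not necessarily at 1.2.0):

```haskell
{-# LANGUAGE CPP #-}

-- Illustrative stand-ins for the library's class and type.
class Pretty a where
  pretty :: a -> String

data Day = Day Int

-- Cabal defines MIN_VERSION_time(a,b,c) for packages that depend on time;
-- provide a fallback so this sketch also compiles outside of Cabal.
#ifndef MIN_VERSION_time
#define MIN_VERSION_time(a,b,c) 0
#endif

#if !MIN_VERSION_time(1,2,0)
-- Older library: it lacks the instance, so define it locally. On newer
-- versions this block compiles away and the library's own instance is
-- used instead. The 1.2.0 threshold is illustrative, not the real one.
instance Pretty Day where
  pretty (Day n) = "day " ++ show n
#endif

main :: IO ()
main = putStrLn (pretty (Day 7))
```

This only works if a major version bump accompanies the instance addition; a patch-level bump gives clients nothing to test against.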
I say this with a few hats. One, we use Haskell in business. Some of these are very long-term systems that are set up once and do their task for years. Finding that code has become uncompilable here is undesirable.
Secondly, I'm a Haskell library developer myself. I try to maintain compatibility with GHC & platform versions dating back at least a few years with every release. Unfortunately, this has become nearly impossible due to the number of untestable API changes out there. That means that, despite my intent, I too am contributing to the problem.
Thoughts?
-- John _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

Don Stewart wrote:
I'll just quickly mention one factor that contributes:
* In 2.5 years we've gone from 10 libraries on Hackage to 2023 (literally!)
That is a massive API to try to manage, hence the continuing move to focus on automated QA on Hackage, and automated tools -- no one wants to have to resolve those dependencies by hand.
Yep, it's massive, and it's exciting. We seem to have gone from stodgy old language to scrappy hot one. Which isn't a bad thing at all.

Out of those 2023, there are certain libraries where small changes impact a lot of people (say base, time, etc.) I certainly don't expect all 2023 to be held to the same standard as base and time. We certainly need to have room in the community for libraries that change rapidly too.

I'd propose a very rough measuring stick: anything in the platform ought to be carefully considered for introducing incompatibilities. Other commonly-used libraries, such as HaXML and HDBC, perhaps should fit in that criteria as well.

-- John

On Fri, Apr 23, 2010 at 12:17 PM, John Goerzen wrote:
Out of those 2023, there are certain libraries where small changes impact a lot of people (say base, time, etc.) I certainly don't expect all 2023 to be held to the same standard as base and time. We certainly need to have room in the community for libraries that change rapidly too.
I'd really like to see hackage separated into a couple of separate instances based on general stability. I think it's wonderful that anyone can easily push a new app/library, and have it available to virtually everyone via cabal-install. However, that ability caters to a completely different use case than John's maintenance / production dev scenario.

Something akin to the Debian stable / unstable / testing division would be nice, so that production code can avoid dependencies on packages that are very quickly evolving, and so that those evolving packages have the freedom to move through a series of breaking API changes before settling on the "right" solution and moving to a more stable package store.

--Rogan

jgoerzen:
Yep, it's massive, and it's exciting. We seem to have gone from stodgy old language to scrappy hot one. Which isn't a bad thing at all.
Out of those 2023, there are certain libraries where small changes impact a lot of people (say base, time, etc.) I certainly don't expect all 2023 to be held to the same standard as base and time. We certainly need to have room in the community for libraries that change rapidly too.
I'd propose a very rough measuring stick: anything in the platform ought to be carefully considered for introducing incompatibilities. Other commonly-used libraries, such as HaXML and HDBC, perhaps should fit in that criteria as well.
Oh, the Platform has very strict standards about APIs. When a package may be added: http://trac.haskell.org/haskell-platform/wiki/AddingPackages

Once a package is in, only non-API-changing bug fixes are allowed during a minor series (12 month release cycle). On 12 month (major) releases, packages may modify their API under the current standard.

We just haven't been around long enough to have much of an effect. The 2-5 year plan is that significant new stability is reintroduced via the HP.

-- Don

Don Stewart wrote:
Oh, the Platform has very strict standards about APIs,
When a package may be added:
http://trac.haskell.org/haskell-platform/wiki/AddingPackages
That looks like a very solid document. Does it also apply when considering upgrading a package already in the platform to a newer version?

Also, I notice that http://haskell.org/haskellwiki/Package_versioning_policy does not mention adding new instances, which caused a problem in the past. I would argue that this ought to mandate at least a new A.B.C version, if not a new A.B.

-- John

On Fri, Apr 23, 2010 at 10:11 PM, John Goerzen wrote:
Don Stewart wrote:
Oh, the Platform has very strict standards about APIs,
When a package may be added: http://trac.haskell.org/haskell-platform/wiki/AddingPackages
That looks like a very solid document. Does it also apply when considering upgrading a package already in the platform to a newer version?
Also, I notice that http://haskell.org/haskellwiki/Package_versioning_policy does not mention adding new instances, which caused a problem in the past. I would argue that this ought to mandate at least a new A.B.C version, if not a new A.B.
Adding or removing instances requires a new major version. See section 2:
| If any entity was removed, or the types of any entities or the definitions of
| datatypes or classes were changed, or instances were added or removed,
| then the new A.B must be greater than the previous A.B.
--
Dave Menendez

David Menendez wrote:
On Fri, Apr 23, 2010 at 10:11 PM, John Goerzen wrote:
Don Stewart wrote:
Oh, the Platform has very strict standards about APIs,
When a package may be added: http://trac.haskell.org/haskell-platform/wiki/AddingPackages

That looks like a very solid document. Does it also apply when considering upgrading a package already in the platform to a newer version?
Also, I notice that http://haskell.org/haskellwiki/Package_versioning_policy does not mention adding new instances, which caused a problem in the past. I would argue that this ought to mandate at least a new A.B.C version, if not a new A.B.
Adding or removing instances requires a new major version. See section 2:
| If any entity was removed, or the types of any entities or the definitions of
| datatypes or classes were changed, or instances were added or removed,
| then the new A.B must be greater than the previous A.B.
Ah, sorry. Read it too quickly I guess.

jgoerzen:
Ah, sorry. Read it too quickly I guess.
But remember, we don't have a tool to enforce or check the policy. And we didn't get a SoC applicant to design it. So there's a challenge to everyone.

The tool-to-be is described here: http://donsbot.wordpress.com/2010/04/01/the-8-most-important-haskell-org-gso...

In fact, I put it at #1.

-- Don

On 24/04/2010, at 07:29, Don Stewart wrote:
Oh, the Platform has very strict standards about APIs,
What is an API? The package versioning policy only seems to talk about types and function signatures. John's old-locale example shows that this is not enough.

Would it perhaps make sense for at least the Platform to require packages to have unit tests and to require versions to be bumped whenever those change (sufficiently)?

Roman

Roman Leshchinskiy
On 24/04/2010, at 07:29, Don Stewart wrote:
Oh, the Platform has very strict standards about APIs,
What is an API? The package versioning policy only seems to talk about types and function signatures. John's old-locale example shows that this is not enough.
I would think that the API is all the functions/classes/datatypes/instances/etc. exported from the library in combination with their types.
Would it perhaps make sense for at least the Platform to require packages to have unit tests and to require versions to be bumped whenever those change (sufficiently)?
I don't get this; just because someone changes a unit test (because they thought of a new case, etc.) they should bump the package version even if all the changes were internal and not exported? -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

On 24/04/2010, at 18:06, Ivan Lazar Miljenovic wrote:
Roman Leshchinskiy writes:
On 24/04/2010, at 07:29, Don Stewart wrote:
Oh, the Platform has very strict standards about APIs,
What is an API? The package versioning policy only seems to talk about types and function signatures. John's old-locale example shows that this is not enough.
I would think that the API is all the functions/classes/datatypes/instances/etc. exported from the library in combination with their types.
So the semantics of those functions doesn't matter at all?
Would it perhaps make sense for at least the Platform to require packages to have unit tests and to require versions to be bumped whenever those change (sufficiently)?
I don't get this; just because someone changes a unit test (because they thought of a new case, etc.) they should bump the package version even if all the changes were internal and not exported?
Adding new tests (i.e., new postconditions) doesn't change the API. Loosening preconditions doesn't, either. Also, the tests would only cover the exposed part of the library, of course. Internal tests are of no concern to the library's clients. Roman

Roman Leshchinskiy
On 24/04/2010, at 18:06, Ivan Lazar Miljenovic wrote:
I would think that the API is all the functions/classes/datatypes/instances/etc. exported from the library in combination with their types.
So the semantics of those functions doesn't matter at all?
What do you refer to by "semantics"? Can you provide an example of what you consider to be the API changing when the functions, types, etc. don't?

-- Ivan Lazar Miljenovic

On 24/04/2010, at 18:54, Ivan Lazar Miljenovic wrote:
Roman Leshchinskiy writes:
On 24/04/2010, at 18:06, Ivan Lazar Miljenovic wrote:
I would think that the API is all the functions/classes/datatypes/instances/etc. exported from the library in combination with their types.
So the semantics of those functions doesn't matter at all?
What do you refer to by "semantics"? Can you provide an example of when what you consider to be the API to change when the functions, types, etc. don't?
John Goerzen gave one in the very first post of this thread: the fix to old-locale which didn't change any types but apparently changed the behaviour of a function quite drastically. Another example would be a change to the Ord instances for Float and Double which would have compare raise an exception on NaNs, as discussed in a different thread on this list. Another one, which is admittedly silly but demonstrates my point, would be changing the implementation of map to

map _ _ = []

In general, any significant tightening/changing of preconditions and loosening/changing of postconditions would qualify.

Roman
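To spell the silly example out (badMap is my name for it): the broken definition has exactly the type of map, so any tool that compares only signatures would wave it through.

```haskell
-- Same type signature as Prelude's map, very different semantics.
badMap :: (a -> b) -> [a] -> [b]
badMap _ _ = []

main :: IO ()
main = do
  print (map    (+ 1) [1, 2, 3 :: Int])  -- [2,3,4]
  print (badMap (+ 1) [1, 2, 3 :: Int])  -- []
```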

Roman Leshchinskiy
John Goerzen gave one in the very first post of this thread: the fix to old-locale which didn't change any types but apparently changed the behaviour of a function quite drastically. Another example would be a change to the Ord instances for Float and Double which would have compare raise an exception on NaNs as discussed in a different thread on this list. Another one, which is admittedly silly but demonstrates my point, would be changing the implementation of map to
map _ _ = []
In general, any significant tightening/changing of preconditions and loosening/changing of postconditions would qualify.
OK, fair enough, I see how these can be considered changing the API.

Thing is, whilst it would be possible in general to have a tool that determines if the API has changed based upon type signatures, etc., how would you go about automating the test for detecting if the "intention" of a function changes in this manner?

-- Ivan Lazar Miljenovic

Hi

On Sat, 2010-04-24 at 19:25 +1000, Ivan Lazar Miljenovic wrote:
Roman Leshchinskiy writes:
John Goerzen gave one in the very first post of this thread: the fix to old-locale which didn't change any types but apparently changed the behaviour of a function quite drastically. Another example would be a change to the Ord instances for Float and Double which would have compare raise an exception on NaNs as discussed in a different thread on this list. Another one, which is admittedly silly but demonstrates my point, would be changing the implementation of map to
map _ _ = []
In general, any significant tightening/changing of preconditions and loosening/changing of postconditions would qualify.
OK, fair enough, I see how these can be considered changing the API. Thing is, whilst it would be possible in general to have a tool that determines if the API has changed based upon type signatures, etc. how would you go about automating the test for detecting if the "intention" of a function changes in this manner?
You could automatically generate QuickCheck tests for many pure functions. It will not catch every API change, but it would catch some. It would have caught the API change that John mentioned.

Automatically generating QuickCheck tests to check that funOld == funNew would require quite a bit of work. But it is better than everybody having to write unit tests, as others have proposed.

/Mads
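A hand-rolled sketch of the idea; a real tool would drive the comparison with QuickCheck's generators, and funOld/funNew here stand in for the two library versions of iso8601DateFormat:

```haskell
-- Two versions of the same function, as shipped before and after the patch.
funOld, funNew :: Maybe String -> String
funOld = ("%Y-%m-%d" ++) . maybe "" (' ' :)
funNew = ("%Y-%m-%d" ++) . maybe "" ('T' :)

-- The generated test: both versions must agree on every input tried.
agreeOn :: [Maybe String] -> Bool
agreeOn = all (\x -> funOld x == funNew x)

main :: IO ()
main = print (agreeOn [Nothing, Just "%H:%M:%S", Just ""])
-- prints False: the behavioural change is detected
```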

Mads Lindstrøm
You could automatically generate QuickCheck tests for many pure functions. It will not catch every API change, but it would catch some. It would have caught the API change that John mentioned.
As in comparing the old and the new functions?
But automatically generating QuickCheck tests to test if funOld == funNew, would require quite a bit work. But better than everybody having to write unit tests, as other have proposed.
Agreed; however, this has its own problems: you have to load up two versions of the same library and compare each pair of functions.

Note that even this comparison isn't really legitimate: what happens if there was some edge/corner case that the old version didn't handle properly and thus the new one fixes it? The QC tests would say that they are different, even though they are actually the same.

Even if we could get around this, there remains another problem: automatically generating the kind of input each function expects (how do you encode pre-conditions automatically for such a tool to pick up? What about pure parsing functions?).

-- Ivan Lazar Miljenovic

Hi

On Sat, 2010-04-24 at 19:47 +1000, Ivan Lazar Miljenovic wrote:
Mads Lindstrøm writes:
You could automatically generate QuickCheck tests for many pure functions. It will not catch every API change, but it would catch some. It would have caught the API change that John mentioned.
As in comparing the old and the new functions?
Yes
But automatically generating QuickCheck tests to test if funOld == funNew, would require quite a bit work. But better than everybody having to write unit tests, as other have proposed.
Agreed; however, this has its own problems: you have to load up two versions of the same library and compare each pair of functions.
Note that even this comparison isn't really legitimate: what happens if there was some edge/corner case that the old version didn't handle properly and thus the new one fixes it? The QC tests would say that they are different, even though they are actually the same.
But that would be an API change. It is of course a question of how strict we want to be. I have seen several times that commercial software developers deny bug fixes because the fix would constitute an API change. We obviously do not want to go there, but we could bump the version number, as it would require little work.
Even if we could get around this, there remains another problem: automatically generating the kind of input each function expects (how do you encode pre-conditions automatically for such a tool to pick up? What about pure parsing functions?).
Yes, for some functions it would be useless. Library developers could specify test inputs in these cases. This would require work from each library developer, but still less than writing unit tests.

/Mads

Mads Lindstrøm
But that would be an API change. It is of course a question of how strict we want to be. I have seen several times that commercial software developers deny bug fixes because the fix would constitute an API change. We obviously do not want to go there, but we could bump the version number, as it would require little work.
I would argue that it would involve bumping a minor version number, not a major version number (which indicates an API change) since it's just a bug fix.

-- Ivan Lazar Miljenovic

On Sat, Apr 24, 2010 at 08:21:04PM +1000, Ivan Lazar Miljenovic wrote:
Mads Lindstrøm writes:
But that would be an API change. It is of course a question of how strict we want to be. I have seen several times that commercial software developers deny bug fixes because the fix would constitute an API change. We obviously do not want to go there, but we could bump the version number, as it would require little work.
I would argue that it would involve bumping a minor version number, not a major version number (which indicates an API change) since it's just a bug fix.
I think we need a simpler, declarative definition from which these details can be worked out, e.g.:

Same minor version: any code that worked with the old version will work with the new version, in the same way.

Same major version: any code using fully explicit imports that worked with the old version will work with the new version, in the same way.

Ross Paterson
Same major version: any code using fully explicit imports that worked with the old version will work with the new version, in the same way.
By this you mean that additions have been made?

-- Ivan Lazar Miljenovic

On Sat, Apr 24, 2010 at 08:48:07PM +1000, Ivan Lazar Miljenovic wrote:
Ross Paterson writes:
Same major version: any code using fully explicit imports that worked with the old version will work with the new version, in the same way.
By this you mean that additions have been made?
According to these definitions, adding exports would require a new minor version, while adding instances of already exported classes and types would require a new major version.

On 24/04/2010, at 19:25, Ivan Lazar Miljenovic wrote:
Roman Leshchinskiy writes:
John Goerzen gave one in the very first post of this thread: the fix to old-locale which didn't change any types but apparently changed the behaviour of a function quite drastically. Another example would be a change to the Ord instances for Float and Double which would have compare raise an exception on NaNs as discussed in a different thread on this list. Another one, which is admittedly silly but demonstrates my point, would be changing the implementation of map to
map _ _ = []
In general, any significant tightening/changing of preconditions and loosening/changing of postconditions would qualify.
OK, fair enough, I see how these can be considered changing the API. Thing is, whilst it would be possible in general to have a tool that determines if the API has changed based upon type signatures, etc. how would you go about automating the test for detecting if the "intention" of a function changes in this manner?
I have no idea. However, just requiring the Platform packages to provide tests for the exposed functionality and to consider those when changing version numbers would be a big step forward in my opinion. *Any* notion of pre/postconditions as part of the interface is better than none at all.

BTW, the versioning policy doesn't seem to even mention semantics. Perhaps we could add some text to the effect that it is a good idea to bump the version number if the semantics changes sufficiently even if the types don't?

Roman

Roman Leshchinskiy
BTW, the versioning policy doesn't seem to even mention semantics. Perhaps we could add some text to the effect that it is a good idea to bump the version number if the semantics changes sufficiently even if the types don't?
Sounds like a good idea. At a guess, I would think it hasn't been added already because, if the semantics of a function change, then in most cases developers would either change the function name or its type at the same time, and so this situation hasn't arisen too many times (John's example to the contrary).

-- Ivan Lazar Miljenovic

On Fri, Apr 23, 2010 at 12:17 PM, John Goerzen wrote:
Don Stewart wrote:
I'll just quickly mention one factor that contributes:
* In 2.5 years we've gone from 10 libraries on Hackage to 2023 (literally!)
That is a massive API to try to manage, hence the continuing move to focus on automated QA on Hackage, and automated tools -- no one wants to have to resolve those dependencies by hand.
Yep, it's massive, and it's exciting. We seem to have gone from stodgy old language to scrappy hot one. Which isn't a bad thing at all.
Out of those 2023, there are certain libraries where small changes impact a lot of people (say base, time, etc.) I certainly don't expect all 2023 to be held to the same standard as base and time. We certainly need to have room in the community for libraries that change rapidly too.
I'd propose a very rough measuring stick: anything in the platform ought to be carefully considered for introducing incompatibilities. Other commonly-used libraries, such as HaXML and HDBC, perhaps should fit in that criteria as well.
I feel your pain, John... I try to use Haskell commercially, and sometimes run into things I'm either going to have to re-implement myself or wait for a fix and do some workaround. Luckily I *like* this language enough to care to do it, but it does present a bit of a problem when trying to break into an ecosystem full of C/C++ and Java, when I have to back-pedal to explain why something in this "new" language that's supposed to help solve some problems with the "old" languages is not a magic bullet.

I think managers expect magic bullets and holy grails... sometimes they just end up with "holy cow"'s (or other more interesting 4 letter words) instead.

Dave

David Leimbach wrote:
I think managers expect magic bullets and holy grails... sometimes they just end up with "holy cow"'s (or other more interesting 4 letter words) instead.
The good news for me, at least, is that *I* am the manager. (Yep, the company is small enough for that.)

Actually, it should be stated that Haskell has still been a huge overall win for us, despite this. I by no means am contemplating a switch away from it because of this.

It must be said, too, that our core library, while perhaps less stable than Python's, seems to me to be of a much higher quality. Or perhaps I'm jaded after 8 years (!) of working with imaplib.py...

-- John

Don Stewart
I'll just quickly mention one factor that contributes:
* In 2.5 years we've gone from 10 libraries on Hackage to 2023 (literally!)
That is a massive API to try to manage, hence the continuing move to focus on automated QA on Hackage, and automated tools -- no one wants to have to resolve those dependencies by hand.
I think the "release early, release often" slogan has an effect on this as well: we encourage library writers to release once they have something that _works_ rather than waiting until it is perfect. The fact that we encourage smaller, more modular libraries over large monolithic ones also affects this. When considering Haskell vs Python, I wonder if the "stability" of Python's libraries is due to their relative maturity, in that the "fundamental" libraries have had time to settle down. -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

ivan.miljenovic:
Don Stewart
writes: I'll just quickly mention one factor that contributes:
* In 2.5 years we've gone from 10 libraries on Hackage to 2023 (literally!)
That is a massive API to try to manage, hence the continuing move to focus on automated QA on Hackage, and automated tools -- no one wants to have to resolve those dependencies by hand.
I think the "release early, release often" slogan has an effect on this as well: we encourage library writers to release once they have something that _works_ rather than waiting until it is perfect. The fact that we encourage smaller, more modular libraries over large monolithic ones also affects this.
When considering Haskell vs Python, I wonder if the "stability" of Python's libraries is due to their relative maturity in that the "fundamental" libraries have had time to settle down.
Note also that the Python core libraries model is what we are now just starting to do via the Haskell Platform. We're far more immature in that respect -- our stable, core, blessed library suite only just had its 2nd release.

On Fri, Apr 23, 2010 at 4:49 PM, Don Stewart
ivan.miljenovic:
Don Stewart
writes: I'll just quickly mention one factor that contributes:
* In 2.5 years we've gone from 10 libraries on Hackage to 2023 (literally!)
That is a massive API to try to manage, hence the continuing move to focus on automated QA on Hackage, and automated tools -- no one wants to have to resolve those dependencies by hand.
I think the "release early, release often" slogan has an effect on this as well: we encourage library writers to release once they have something that _works_ rather than waiting until it is perfect. The fact that we encourage smaller, more modular libraries over large monolithic ones also affects this.
When considering Haskell vs Python, I wonder if the "stability" of Python's libraries is due to their relative maturity in that the "fundamental" libraries have had time to settle down.
Note also that the Python core libraries model is what we are now just starting to do via the Haskell Platform. We're far more immature in that respect -- our stable, core, blessed library suite only just had its 2nd release.
Others on this thread have suggested continuous integration build systems for *all* of hackage as a way to help with stability. I would argue that CI comes at a high human cost in terms of establishing and maintaining a proper build environment. It's a non-trivial commitment. This mention of the Haskell Platform makes me think: what if we reduced the scope from all of Hackage to just the Haskell Platform? I bet that's the sweet spot in terms of effort expended versus value produced. The HP is the set of packages that we want to endorse and promote as stable. Hackage, on the other hand, is a huge repository of everything that exists. I think this also helps naturally create the unstable/testing/stable distinction of packages that was suggested in this thread as well. We can think of Hackage as unstable, the next unreleased version of the platform as testing, and the most recent stable platform release as stable. Hmm... but who would be willing to take on the hard, tedious, and time-consuming work of maintaining the CI build system? I think for this build system effort to really take off, a group of a few dedicated volunteers would be necessary. Jason

dagit:
Hmm... but who would be willing to take on the hard, tedious, and time-consuming work of maintaining the CI build system? I think for this build system effort to really take off, a group of a few dedicated volunteers would be necessary.
CI for the HP would be really easy, and extremely high value. As I said on Reddit, the community saw the instability problem 2 years ago, and launched the Haskell Platform effort to address this -- by requiring stability, with long release cycles. However, we're only a few months in, so the impact isn't being felt yet. Wait till we've had 10 years of versioning stability, like the Python core has. Any steps volunteers can take to ensure the HP is both stable and comprehensive, to build that solid foundation, will be greatly appreciated. -- Don

Ivan Lazar Miljenovic wrote:
I think the "release early, release often" slogan has an effect on this as well: we encourage library writers to release once they have something that _works_ rather than waiting until it is perfect. The fact that we encourage smaller, more modular libraries over large monolithic ones also affects this.
That is absolutely a good thing for many libraries here. I'm all in favor of low barriers to entry, and took advantage of such when I was starting out in this community. And I thank those many of you that have been around longer than I for putting up with my early code ;-) On the other hand, there are certain libraries that are very well-established and so popular that they are viewed by many as pretty much part of the language. Here I think of ones such as old-time or time, unix, bytestring, containers, etc. I think that if "release early & often" is to be practiced with these, then there ought to be a separate stable branch that is made widely available, with development releases numbered differently (as the Linux kernel used to do) or only available via version control.
When considering Haskell vs Python, I wonder if the "stability" of Python's libraries is due to their relative maturity in that the "fundamental" libraries have had time to settle down.
It is a funny thing, because our fundamental libraries *have* had time to settle down, in a sense. In another sense, I must say that the innovations we have seen recently have been sorely needed and are unquestionably a good thing. The new time, exceptions, regex improvements, Unicode support in IO, etc. are all things of immediate practical benefit. I guess this is the price of failing to avoid success, to borrow Simon's phrase. And again, not entirely bad. Incidentally, I think that the introduction of the new time was handled very well. No old code had to change (except perhaps for the .cabal file), and yet new development could ignore old-time. My intent here wasn't to stir up some grand new level of QC. Just to request a bit more forethought before changing APIs. -- John

On 2010-04-24, John Goerzen
It is a funny thing, because our fundamental libraries *have* had time to settle down, in a sense. In another sense, I must say that the innovations we have seen recently have been sorely needed and are unquestionably a good thing.
Overall, agreed. It still makes it a pain to write to the current standard, because it is moving.
Unicode support in IO,
This was "just" a bugfix in GHC, made more painful by people writing code dependent on the old behaviour.
I guess this is the price of failing to avoid success, to borrow Simon's phrase. And again, not entirely bad.
I despair that a better Numeric hierarchy will never make it into Haskell. -- Aaron Denney -><-

On 27 April 2010 14:55, Aaron Denney
I despair that a better Numeric hierarchy will never make it into Haskell.
I think the reason it hasn't is because I for one still haven't seen a fully implemented such hierarchy that's worth using. Then again, most of my numerical calculations are very basic; yay for combinatorics! :p -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

On Apr 27, 2010, at 00:55 , Aaron Denney wrote:
I despair that a better Numeric hierarchy will never make it into Haskell.
I thought the main reason for that was that nobody could agree on a "better" hierarchy that was actually usable. (Nobody wants to chain 10 typeclasses together to get work done.) -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

On Tue, 27 Apr 2010, Brandon S. Allbery KF8NH wrote:
I despair that a better Numeric hierarchy will never make it into Haskell.
I thought the main reason for that was that nobody could agree on a "better" hierarchy that was actually usable. (Nobody wants to chain 10 typeclasses together to get work done.)
I think that there is a way to do this so that everyone can be happy, if hassled. But GHC already has what we need to define our own numeric preludes. What we don't have is any numeric prelude at all. We just have a prelude prelude. I think it would *really* help if we could break the Prelude down into component packages that can be separately imported, so that we don't have nonsense like this: http://hackage.haskell.org/packages/archive/list-extras/latest/doc/html/Prel... And such a thing could be done without breaking any code -- each compiler already has its own non-standard breakdown anyway. Friendly, --Lane

Christopher Lane Hinson schrieb:
On Tue, 27 Apr 2010, Brandon S. Allbery KF8NH wrote:
I despair that a better Numeric hierarchy will never make it into Haskell.
I thought the main reason for that was that nobody could agree on a "better" hierarchy that was actually usable. (Nobody wants to chain 10 typeclasses together to get work done.)
I think that there is a way to do this so that everyone can be happy, if hassled.
But GHC already has what we need to define our own numeric preludes. What we don't have is any numeric prelude at all.
NumericPrelude does not count as numeric prelude? :-( http://hackage.haskell.org/package/numeric-prelude/

I'm so sorry. I meant to say that there is no part of the standard prelude that is the "numeric" part. I was aware of the numeric-prelude package, which is good work and deserves recognition. Friendly, --Lane On Tue, 27 Apr 2010, Henning Thielemann wrote:
Christopher Lane Hinson schrieb:
On Tue, 27 Apr 2010, Brandon S. Allbery KF8NH wrote:
I despair that a better Numeric hierarchy will never make it into Haskell.
I thought the main reason for that was that nobody could agree on a "better" hierarchy that was actually usable. (Nobody wants to chain 10 typeclasses together to get work done.)
I think that there is a way to do this so that everyone can be happy, if hassled.
But GHC already has what we need to define our own numeric preludes. What we don't have is any numeric prelude at all.
NumericPrelude does not count as numeric prelude? :-( http://hackage.haskell.org/package/numeric-prelude/

On Fri, Apr 23, 2010 at 11:34 AM, John Goerzen
A one-character change. Harmless? No. It entirely changes what the function does. Virtually any existing user of that function will be entirely broken. Of particular note, it caused significant breakage in the date/time handling functions in HDBC.
Now, one might argue that the function was incorrectly specified to begin with. But a change like this demands a new function; the original one ought to be commented with the situation.
My second example was the addition of instances to time. This broke code where the omitted instances were defined locally. Worse, the version number was not bumped in a significant way to permit testing for the condition, and thus conditional compilation, via cabal. See http://bit.ly/cBDj3Q for more on that one.
This is of course in part due to a strength of cabal (remember that strengths and weaknesses tend to come together). Cabal discourages testing libraries/APIs at configure time. The result is that version numbers need to encode this information. We don't (yet) have a tool to help detect when a change in version number is needed or what the next version should be. We leave this up to humans and it turns out, humans make mistakes :) Even once we have an automatic tool to enforce/check the package version policy, mistakes may still sneak in. I would expect the 'T' in the time format to be in this same category. More about that below.
I don't have a magic bullet to suggest here. But I would first say that this is a plea for people that commit to core libraries to please bear in mind the implications of what you're doing. If you change a time format string, you're going to break code. If you introduce new instances, you're going to break code. These are not changes that should be made lightly, and if they must be made (I'd say there's a stronger case for the time instances than the s/ /T/ change), then the version number must be bumped significantly enough to be Cabal-testable.
While I haven't participated in the library proposal process myself, I was under the impression that Haskell has a fairly rigorous process in place for modifying the core libraries. Is the above change too small for that process? Did that process simply fail here? Jason
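To make Jason's "Cabal-testable" point concrete, here is a rough sketch of how a downstream package can guard locally defined instances behind the MIN_VERSION_<pkg> CPP macros that cabal generates at build time. The package name and version bounds below are purely illustrative, and the fallback #define exists only so the sketch compiles outside a cabal build:

```haskell
{-# LANGUAGE CPP #-}
-- Sketch only: guard code against an API change in a dependency.
-- MIN_VERSION_time is generated by cabal at build time; the fallback
-- below just lets this file compile when built without cabal.
#ifndef MIN_VERSION_time
#define MIN_VERSION_time(a,b,c) 0
#endif

module TimeCompat where

-- The version 1.1.3 used here is illustrative, not the actual release
-- that added the instances John mentions.
compatNote :: String
compatNote =
#if MIN_VERSION_time(1,1,3)
  "using instances from the time package"
#else
  "using locally defined instances"
#endif
```

This only works, of course, if the upstream version number was actually bumped when the instances were added -- which is exactly John's complaint.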

1) Folks, what exactly is the situation with buildbots?
2) Easily available images for installing virtualized environments could also ameliorate the pain, no?
wrt 1) It seems to me that in an ideal world you could have a candidate for uploading to hackage, but before uploading you could push a magic button and get a message within one hour stating whether this broke any packages on hackage.
I start thinking about a (cloneable) community ec2 or slicehost server for buildbot -- 72 bucks a month should be doable for a community in the thousands.
Does this make sense? Is it within the realm of possibility? What would it take to make it happen?
I made a baby step in the direction of 2) with http://blog.patch-tag.com/2010/02/12/ec2-amis-for-gitit-happstack/
At the very least, what this means for me personally is that I (or anybody) can spin up a working box with gitit installed on it, for some past incarnation of gitit/happstack/haskell platform.
Now, what happens when you attempt to cabal update && cabal reinstall gitit -- I dunno. Very possibly breakage.
But at least you get a working place to start from.
I can see how this might apply to other pain points that I have less personal knowledge of, like glut (?) & many others.
thomas.
2010/4/23 Jason Dagit
On Fri, Apr 23, 2010 at 11:34 AM, John Goerzen
wrote: A one-character change. Harmless? No. It entirely changes what the function does. Virtually any existing user of that function will be entirely broken. Of particular note, it caused significant breakage in the date/time handling functions in HDBC.
Now, one might argue that the function was incorrectly specified to begin with. But a change like this demands a new function; the original one ought to be commented with the situation.
My second example was the addition of instances to time. This broke code where the omitted instances were defined locally. Worse, the version number was not bumped in a significant way to permit testing for the condition, and thus conditional compilation, via cabal. See http://bit.ly/cBDj3Q for more on that one.
This is of course in part due to a strength of cabal (remember that strengths and weaknesses tend to come together). Cabal discourages testing libraries/APIs at configure time. The result is that version numbers need to encode this information. We don't (yet) have a tool to help detect when a change in version number is needed or what the next version should be. We leave this up to humans and it turns out, humans make mistakes :)
Even once we have an automatic tool to enforce/check the package version policy, mistakes may still sneak in. I would expect the 'T' in the time format to be in this same category. More about that below.
I don't have a magic bullet to suggest here. But I would first say that this is a plea for people that commit to core libraries to please bear in mind the implications of what you're doing. If you change a time format string, you're going to break code. If you introduce new instances, you're going to break code. These are not changes that should be made lightly, and if they must be made (I'd say there's a stronger case for the time instances than the s/ /T/ change), then the version number must be bumped significantly enough to be Cabal-testable.
While I haven't participated in the library proposal process myself, I was under the impression that Haskell has a fairly rigorous process in place for modifying the core libraries. Is the above change too small for that process? Did that process simply fail here?
Jason

Thomas Hartman wrote:
1) Folks, what exactly is the situation with buildbots?
If that's going to happen, then ideally we would have a way to run tests as part of the hackage acceptance process. For instance, the change to a time format string wouldn't break anything at compile time, but my HDBC test suite sure caught it. I can see difficulty with this, though, particularly with packages that are bindings to C libraries. -- John

Thomas Hartman wrote:
1) Folks, what exactly is the situation with buildbots?
If I'm not mistaken, the buildbots run *after* the package has been pushed to hackage. That's already too late. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/

So the situation is:
1) The buildbot will catch dependencies with compile errors, but only after the package has been pushed, and there is no easy way for packagers to check that this won't happen.
2) There are many important packages that will pass a compile check but not a runtime check.
Well, 2) seems like a hard problem to solve.
But 1) could be solved by having a candidate snapshot hackage that can be cloned at will, and buildbotted against, no?
2010/4/23 Erik de Castro Lopo
Thomas Hartman wrote:
1) Folks, what exactly is the situation with buildbots?
If I'm not mistaken, the buildbots run *after* the package has been pushed to hackage. That's already too late.
Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/

Thomas Hartman
1) The buildbot will catch dependencies with compile errors, but only after the package has been pushed, and there is no easy way for packagers to check that this won't happen
Don't forget, there are valid reasons why hackage will sometimes fail to build a package: missing C libraries, wrong operating system, etc. Also, it would be nice if the build bot separated the build test from the haddock generation: I've seen quite a few packages where the build succeeds but then _haddock_ fails for one reason or another, thus breaking the package. -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

1) The buildbot will catch dependencies with compile errors, but only after the package has been pushed, and there is no easy way for packagers to check that this won't happen
An alternate solution that can be done completely outside the hackage loop: set up a server to poll the "Source-Repository head" of every hackage package that includes one in its cabal file, then rerun the build any time a change is detected. This may be a good excuse to implement a pluggable continuous integration server in Haskell too, along the lines of what Hudson is for Java... maybe an idea for the next GSoC. Best, Keith -- keithsheppard.name

Sorry, "rerun the build" means rebuild my package and all of my package's dependencies...
On Fri, Apr 23, 2010 at 7:11 PM, Keith Sheppard
1) The buildbot will catch dependencies with compile errors, but only after the package has been pushed, and there is no easy way for packagers to check that this won't happen
An alternate solution that can be done completely outside the hackage loop:
Set up a server to poll the "Source-Repository head" of every hackage package that includes one in its cabal file, then rerun the build any time a change is detected. This may be a good excuse to implement a pluggable continuous integration server in Haskell too, along the lines of what Hudson is for Java... maybe an idea for the next GSoC
Best, Keith
-- keithsheppard.name

Keith Sheppard
Set up a server to poll the "Source-Repository head" of every hackage package that includes one in it's cabal file, then rerun the build any time a change is detected. This may be a good excuse to implement a pluggable continuous integration server in haskell too along the lines of what Hudson is for java... maybe an idea for the next GSoC
Several problems with this: 1) Is this going to support every VCS under the sun? 2) Not all head repositories are kept stable/buildable at all times. 3) I can see this getting expensive wrt space and network usage. -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

On Fri, Apr 23, 2010 at 7:17 PM, Ivan Lazar Miljenovic
Keith Sheppard
writes: Set up a server to poll the "Source-Repository head" of every hackage package that includes one in it's cabal file, then rerun the build any time a change is detected. This may be a good excuse to implement a pluggable continuous integration server in haskell too along the lines of what Hudson is for java... maybe an idea for the next GSoC
Several problems with this:
1) Is this going to support every VCS under the sun?
We only need to support 3 or 4 to get 99% of the stuff on Hackage that lives in a VCS at all.
2) Not all head repositories are kept stable/buildable at all times.
Perfect, meet better. Wait, no no - aw goddammit Perfect! Why do you do this every single time? *You* get to mop the floor this time.
3) I can see this getting expensive wrt space and network usage.
Not really. I do much the same thing locally. Most repos hardly ever change. My own local repos - which includes all of patch-tag, a few GHCs, some intermediate builds, and whatnot - is about 4.5G. By my calculations* that's about 25¢ of hard-drive space. * http://forre.st/storage#sata -- gwern

Gwern Branwen
On Fri, Apr 23, 2010 at 7:17 PM, Ivan Lazar Miljenovic
wrote: 2) Not all head repositories are kept stable/buildable at all times.
Perfect, meet better. Wait, no no - aw goddammit Perfect! Why do you do this every single time? *You* get to mop the floor this time.
Sense: this makes none.
3) I can see this getting expensive wrt space and network usage.
Not really. I do much the same thing locally. Most repos hardly ever change.
My own local repos - which includes all of patch-tag, a few GHCs, some intermediate builds, and whatnot - is about 4.5G. By my calculations* that's about 25¢ of hard-drive space.
Yet we have enough complaints about code.haskell.org or hackage being down; you want _another_ possible machine to have people complaining about not being available? :p -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

What Gwern said for 1) and 3)
2) Not all head repositories are kept stable/buildable at all times.
Isn't it bad practice to not have a buildable repo? In any case package owners would be free to use or ignore the data as they like, but I'm pretty sure it would be useful to many. Best, Keith

Keith Sheppard
What Gwern said for 1) and 3)
2) Not all head repositories are kept stable/buildable at all times.
Isn't it bad practice to not have a buildable repo? In any case package owners would be free to use or ignore the data as they like, but I'm pretty sure it would be useful to many.
I often work on different sub-parts of my packages; as such, the interactions between them might not work. I don't see why I should ensure that they do, as the repository is a _development environment_, not a release. That said, there are some projects which generally _are_ buildable/usable from their repositories (e.g. XMonad and XMonad-Contrib). But these are large multi-person projects with moving targets; the stuff I write is being developed by myself only. -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

On 24/04/10 01:34, Ivan Lazar Miljenovic wrote:
Keith Sheppard
writes: What Gwern said for 1) and 3)
2) Not all head repositories are kept stable/buildable at all times.
Isn't it bad practice to not have a buildable repo? In any case package owners would be free to use or ignore the data as they like, but I'm pretty sure it would be useful to many.
I often work on different sub-parts of my packages; as such, the interactions between them might not work. I don't see why I should ensure that they do, as the repository is a _development environment_, not a release.
I don't think anyone will *force* you to behave in a particular way. If you don't see the use of a service like this one then don't use it. It's really that simple. The service could even demand opt-in to not waste time/energy on unstable repos.
That said, there are some projects which generally _are_ buildable/usable for their repositories (e.g. XMonad and XMonad-Contrib). But these are large multi-person projects with moving targets; the stuff i write is being developed by myself only.
Then there are developers like me, who even on small one-man projects try to keep the official/published repo buildable at all times. I keep my development environment local or sometimes in an un-advertised repo. /M -- Magnus Therning (OpenPGP: 0xAB4DFBA4) magnus@therning.org Jabber: magnus@therning.org http://therning.org/magnus identi.ca|twitter: magthe

On 23 April 2010 21:14, Jason Dagit
We don't (yet) have a tool to help detect when a change in version number is needed or what the next version should be. We leave this up to humans and it turns out, humans make mistakes :)
Hi All, Did anyone sign up to work on this as a GSOC project? Best wishes Stephen

stephen.tetley:
On 23 April 2010 21:14, Jason Dagit
wrote: [Snip] We don't (yet) have a tool to help detect when a change in version number is needed or what the next version should be. We leave this up to humans and it turns out, humans make mistakes :)
Hi All,
Did anyone sign up to work on this as a GSOC project?
No, there were no proposals to work on PVP, though we have a good one for Hackage 2.0, and a good one for 'cabal test'.

John Goerzen
It is somewhat of a surprise to me that I'm making this post, given that there was a day when I thought Haskell was moving too slow ;-)
My problem here is that it has become rather difficult to write software in Haskell that will still work with newer compiler and library versions in future years.
I have the same problem, except that I work so slowly that things have changed before I finish anything!
Here is a prime example. (Name hidden because my point here isn't to single out one person.) This is a patch to old-locale:
Wed Sep 24 14:37:55 CDT 2008 xxxxx@xxxxx.xxxx * Adding 'T' to conform better to standard This is based on information found at
http://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations diff -rN -u old-old-locale/System/Locale.hs new-old-locale/System/Locale.hs --- old-old-locale/System/Locale.hs 2010-04-23 13:21:31.381619614 -0500 +++ new-old-locale/System/Locale.hs 2010-04-23 13:21:31.381619614 -0500 @@ -79,7 +79,7 @@ iso8601DateFormat mTimeFmt = "%Y-%m-%d" ++ case mTimeFmt of Nothing -> "" - Just fmt -> ' ' : fmt + Just fmt -> 'T' : fmt
A one-character change. Harmless? No. It entirely changes what the function does. Virtually any existing user of that function will be entirely broken. Of particular note, it caused significant breakage in the date/time handling functions in HDBC.
Now, one might argue that the function was incorrectly specified to begin with.
It certainly was, and I said so in
According to http://www.w3.org/TR/NOTE-datetime (and other random sources), the ISO format for date+time in one field is YYYY-MM-DDThh:mm:ssTZD (eg 1997-07-16T19:20:30+01:00), ie no spaces around the "T", but iso8601DateFormat outputs a space after the day if the timeFmt is not Nothing, so
formatTime System.Locale.defaultTimeLocale (System.Locale.iso8601DateFormat (Just "T%H:%M:%SZ")) (UTCTime (fromGregorian 2007 10 20) 26540)
yields "2007-10-20 T07:22:20Z". I reckon this is a bug, but at the very least it's not a good design. Please can we change
Just fmt -> ' ' : fmt
to
Just fmt -> fmt
? if someone wants a space, they can put it in the string, but it's a pain to take it out when you want the one field format with a T there.
Now, while that change would still have been incompatible, I think it would have hurt less. But the problem here is that no one noticed my message, and then someone else must have seen the problem and made the different change without checking through old messages. If I remember correctly, I didn't report it as a bug because (a) I had some problem accessing trac at the time and (b) anyway "2007-10-20 07:22:20Z" is acceptable as a two-field format, so it was a feature request (make it possible to output the correct single-field format). But no discussion ensued. My own, related, gripe about Haskell libraries is that they seem very patchy; not adhering very well to relevant standards and not using Haskell's types enough. For your cited example, there is an applicable standard, so it should have been written to adhere to it, and properly typed formats are rather awkward to do, so that can be forgiven. But in many other places Strings are used, with the effect that there's no typechecking at all. An example I came across yesterday: I wanted to read a file modification time and use that as the If-Modified-Since header in an HTTP request. System.getModificationTime returns one type of date, and I couldn't find how to format that correctly (there's old-time and the new time -- which one do I choose for future compatibility? And how /do/ I convert that date to something I can format for HTTP?), but more to the point, I shouldn't have to care: the If-Modified-Since header in HTTP expects a date, and I got a date from the system, so it should be typed that way; converting it to the right format for HTTP is simple-HTTP's job. -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html (updated 2009-01-31)
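To see the behavioural change at a glance, here are local reimplementations of the function before and after the patch (illustrative copies of the code quoted above, not the library itself):

```haskell
-- Local copies of iso8601DateFormat before and after the patch,
-- for illustration only.
iso8601Old, iso8601New :: Maybe String -> String
iso8601Old mTimeFmt = "%Y-%m-%d" ++ case mTimeFmt of
  Nothing  -> ""
  Just fmt -> ' ' : fmt            -- old: a space before the time part
iso8601New mTimeFmt = "%Y-%m-%d" ++ case mTimeFmt of
  Nothing  -> ""
  Just fmt -> 'T' : fmt            -- new: ISO 8601's 'T' separator
```

Any caller that relies on the produced format string -- HDBC's date/time handling, for instance -- silently goes from "%Y-%m-%d %H:%M:%S" to "%Y-%m-%dT%H:%M:%S". Jón's alternative (dropping the separator entirely and letting callers supply it) would at least have put the choice in the caller's hands.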

John Goerzen schrieb:
My second example was the addition of instances to time.
My conclusion was: Never define orphan instances privately. If an instance cannot be added to the packages that provide the associated type or the class, then discuss the orphan instance with the maintainers of the type and the class and setup a package that provides that instance. http://www.haskell.org/haskellwiki/Orphan_instance

Henning Thielemann
My conclusion was: Never define orphan instances privately. If an instance cannot be added to the packages that provide the associated type or the class, then discuss the orphan instance with the maintainers of the type and the class and setup a package that provides that instance.
So you recommend having packages specifically for instances? My main problem with this is if you want a custom variant of that instance. Let's take FGL graphs for example with instances for QuickCheck's Arbitrary class. Maybe you want arbitrary graphs that are simple, or maybe multiple edges are fine. Even when considering Arbitrary instances for something like String you may wish to have a custom variant that makes sense for what you're testing. My conclusion: it is not possible to have hard-and-fast conclusions for things like this :p -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

Ivan Lazar Miljenovic schrieb:
Henning Thielemann
writes: My conclusion was: Never define orphan instances privately. If an instance cannot be added to the packages that provide the associated type or the class, then discuss the orphan instance with the maintainers of the type and the class and setup a package that provides that instance.
So you recommend having packages specifically for instances?
My main problem with this is if you want a custom variant of that instance. Let's take FGL graphs for example with instances for QuickCheck's Arbitrary class. Maybe you want arbitrary graphs that are simple, or maybe multiple edges are fine. Even when considering Arbitrary instances for something like String you may wish to have a custom variant that makes sense for what you're testing.
Multiple instances in particular cause a lot of trouble: http://www.haskell.org/haskellwiki/Multiple_instances If there isn't one natural instance, then define no instance at all; instead, define 'newtypes' and give instances for them.
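The newtype route Henning recommends is exactly what base itself does when a type has more than one defensible instance, e.g. Int as a monoid under addition versus multiplication:

```haskell
-- The classic illustration of "no natural instance => use newtypes":
-- Int forms a monoid under (+) and under (*), so base defines no
-- `instance Monoid Int` at all.  Data.Monoid instead provides one
-- newtype per candidate instance, and you pick one at the use site.
import Data.Monoid (Product(..), Sum(..))

total :: Int
total = getSum (foldMap Sum [1, 2, 3, 4])             -- additive monoid: 10

product' :: Int
product' = getProduct (foldMap Product [1, 2, 3, 4])  -- multiplicative monoid: 24
```

The same pattern applies to Ivan's Arbitrary example: one newtype per generation policy (simple graphs, multigraphs, ...), each carrying its own non-orphan instance.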

On Mon, Apr 26, 2010 at 06:47:27AM +1000, Ivan Lazar Miljenovic wrote:
My main problem with this is if you want a custom variant of that instance. Let's take FGL graphs for example with instances for QuickCheck's Arbitrary class. Maybe you want arbitrary graphs that are simple, or maybe multiple edges are fine. Even when considering Arbitrary instances for something like String you may wish to have a custom variant that makes sense for what you're testing.
Being able to use different instances in different contexts would be handy, and there have been some proposed extensions to allow this. But in the current state of Haskell, orphan instances are unworkable -- they invariably lead to trouble sooner or later.

On 04/25/2010 03:47 PM, Ivan Lazar Miljenovic wrote:
So you recommend having packages specifically for instances?
My main problem with this is if you want a custom variant of that instance. Let's take FGL graphs for example with instances for QuickCheck's Arbitrary class. Maybe you want arbitrary graphs that are simple, or maybe multiple edges are fine. Even when considering Arbitrary instances for something like String you may wish to have a custom variant that makes sense for what you're testing.
My conclusion: it is not possible to have hard-and-fast conclusions for things like this :p
I'm inclined to agree. As an example, there is the convertible library. It grew out of the need for an easy way to map Haskell types to database types in HDBC, and these days it is a more general way to convert from one type to another. I provide a bunch of Convertible instances, but they live in separate modules, and thus can be omitted if a person doesn't want them.

Consider: what's the correct way to convert a Double to an Integer? Prelude alone defines at least four: ceiling, floor, truncate, and round. Now, in a certain sense, Convertible is designed for people that don't care which is used. (And yes, that is a perfectly valid answer in some cases.) But if you want your own, you can simply not import the numeric Convertible instances.

It would, however, be nice if the language allowed you to override default instances with code in your own package.

-- John
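John's Double-to-Integer question can be made concrete. The Convert class below is a hypothetical stand-in for illustration, not convertible's actual API; the point is that whichever Prelude function an instance picks is a policy decision baked in for everyone who imports it:

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
-- Hypothetical stand-in for a convertible-style class; `Convert` and
-- its instance are illustrative, not the real convertible API.
class Convert a b where
  convert :: a -> b

-- Any single instance must commit to one of Prelude's four policies.
-- Here we arbitrarily pick round; a user wanting a different policy
-- would have to avoid importing this instance and define their own.
instance Convert Double Integer where
  convert = round

-- The four Prelude policies really do disagree:
policies :: Double -> [Integer]
policies x = [ceiling x, floor x, truncate x, round x]
```

For instance, policies (-1.5) yields four answers from only two distinct values, since Haskell's round uses banker's rounding (ties go to the even neighbour), which is itself a choice many users don't expect.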

On Apr 26, 2010, at 09:33 , John Goerzen wrote:
It would, however, be nice if the language allowed you to override default instances with the code in your own package.
Many other languages refer to this kind of thing as "monkey patching" and warn against it because of the problems it causes.

-- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH
participants (22)
- Aaron Denney
- Brandon S. Allbery KF8NH
- Christopher Lane Hinson
- David Leimbach
- David Menendez
- Don Stewart
- Erik de Castro Lopo
- Gwern Branwen
- Henning Thielemann
- Ivan Lazar Miljenovic
- Ivan Miljenovic
- Jason Dagit
- John Goerzen
- Jon Fairbairn
- Keith Sheppard
- Mads Lindstrøm
- Magnus Therning
- Rogan Creswick
- Roman Leshchinskiy
- Ross Paterson
- Stephen Tetley
- Thomas Hartman