RE: GHC 7.8 release?

(a) There are packages which tend to track GHC's latest version instead of the HP (yesod used to do this, which was a source of much pain).
(b) There are linux distributions which always track the latest everything, often in a rolling-release fashion (notably Arch). They are actively hostile to the Platform, and a source of even greater pain. Many package authors update because Arch users demand it and openly insult anyone who points them to the Platform or any policy which suggests that anything other than the absolute latest version is acceptable.
These must be social questions (what I was earlier calling “signposting”) rather than technical ones. For example, you say that (b) is not subject to any variety of reason, and yet no linux distribution tracks HEAD, does it? They don’t openly insult anyone who points to a release just because HEAD has new cool stuff! No, they track things we call “releases”. Very well, maybe we should call them “previews” instead, and only dignify one as a “release” when, and only when, a preview is picked by the HP as worthy of incorporation in the next HP.
Or something. I’m just looking for a way to reconcile
· Release early, release often
· Stability for the Haskell Platform
It seems to me that such a reconciliation is within reach, and is actually very close to what we do, if we only signpost what is what far more vigorously and clearly than we do now. But maybe I’m wrong.
Simon
From: Brandon Allbery [mailto:allbery.b@gmail.com]
Sent: 11 February 2013 01:15
To: Simon Peyton-Jones
Cc: Simon Marlow; Mark Lentczner; Manuel M T Chakravarty; kostirya@gmail.com; glasgow-haskell-users; ghc-devs@haskell.org; Edsko de Vries
Subject: Re: GHC 7.8 release?
On Sun, Feb 10, 2013 at 4:02 PM, Simon Peyton-Jones

Agreed.
Having relatively bug-free "technology preview" releases on a regular basis, which (perhaps ideally) include new functionality in a way that keeps the breakage overhead lowish, is ideal.
One thought on the API hacking front:
The main concern we're hitting is that we don't want to "pin" internal GHC APIs, yet we want to keep the breakage rate minimal for libraries people may want to use that depend on, say, GHC.Prim or GHC.TH.
Is a possible solution that, in preview releases, the changed bits of the API for a module M are exported in a module M.Experimental?
E.g., new GHC primops in a tech preview release might be exported by GHC.Prim.Experimental
(or something of this sort?)
just throwing out one possible point in the design space.
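A minimal, single-file toy of the convention Carter sketches (all function names here are invented for illustration; GHC.Prim.Experimental does not exist, and real primops could not be defined in plain Haskell like this). The point is only the split: the stable name keeps its meaning across preview releases, while the in-flux candidate would live behind a separate `*.Experimental` module so depending on it is an explicit opt-in:

```haskell
module Main where

-- "Stable" API: its type and behaviour are frozen for the release series.
-- In the proposed scheme this stays in module M.
stableWordCount :: String -> Int
stableWordCount = length . words

-- "Experimental" API: a candidate revision that may still change before
-- being blessed; under the proposal it would be exported from
-- M.Experimental rather than replacing the stable export in place.
experimentalWordCount :: String -> Int
experimentalWordCount = length . words

main :: IO ()
main = do
  print (stableWordCount "release early release often")
  print (experimentalWordCount "release early release often")
```

Once a Platform release blesses the experimental interface, it would be promoted into M proper and the `*.Experimental` module emptied or retired.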
cheers
-Carter
On Mon, Feb 11, 2013 at 5:31 PM, Simon Peyton-Jones wrote:
On Sun, Feb 10, 2013 at 4:02 PM, Simon Peyton-Jones wrote:
What causes the "wave of package updates"? Just because GHC 7.8 (say) comes out, no package author need lift a finger. The Haskell Platform sets the pace for package updates. When the Haskell Platform comes out, now THAT is indeed a trigger for a wave of updates. Authors of packages in HP are forced to act; authors of other packages want their packages to work with the next HP.
You *might* be able to control expectations with respect to (a); (b) is not subject to any variety of reason. It will produce as much pressure as it has users, plus multiply that pressure by the number of package authors who are also users.

--
brandon s allbery kf8nh sine nomine associates
allbery.b@gmail.com ballbery@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net
_______________________________________________ ghc-devs mailing list ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs

Hi,

I think reducing breakages is not necessarily, and maybe not even primarily, an issue of releases. It's more about realizing that the cost of breaking things (e.g. changing library APIs) has gone up as the Haskell community and ecosystem has grown. We need to be conscious of that and carefully consider if making a breaking change (e.g. changing a function instead of adding a new function) is really necessary. Many platforms (e.g. Java and Python) rarely, if ever, make breaking changes. If you look at compiler projects (e.g. LLVM and GCC) you never see intentional breakages, even in major releases*.

Here's a question I think we should be asking ourselves: why is the major version of base bumped with every release? Is it really necessary to make breaking changes this often? How about aiming for having GHC 7.10 be a release where we only add new stuff and improve old stuff?

-- Johan

* A major GCC release usually signifies that some large change to the code generator was made.

On Mon, 11 Feb 2013 15:03:25 -0800 Johan Tibell
Many platforms (e.g. Java and Python) rarely, if ever, make breaking changes. If you look at compiler projects (e.g. LLVM and GCC) you never see intentional breakages, even in major releases*.
Those are very mature platforms; hundreds of millions of people use them indirectly each day, so it's hard to compare GHC to them. If anything, Haskell is very young for its age, and should rather move faster. Bad mistakes, accidents, and partial or inefficient implementations proliferate in standard libraries for decades, hampering GHC's growth as a serious production language.
On Mon, 11 Feb 2013 22:31:53 +0000 Simon Peyton-Jones
no linux distribution tracks HEAD, does it?
Gentoo packages HEAD just fine.

On Mon, Feb 11, 2013 at 5:03 PM, Johan Tibell wrote:
I have some experience with GCC releases -- having served as a GCC Release Manager for several years. In fact, the release scheme we currently have has gone through several iterations -- usually after many "existential" crises.

Yes, we don't break the GCC ABI lightly, mostly because GCC isn't a research compiler and most "research works" are done on forgotten branches that nobody cares about anymore. Implementing new standards (e.g. moving from C++03 to C++11, which has several mandated API and ABI breakages) is a royal pain that isn't worth replicating in GHC -- at least if you want GHC to remain a research compiler.

Concerning your question about the release number, I would venture that there is a certain "marketing" aspect to it. I can tell you that we, the GCC community, are very poor at that -- otherwise, we would have been at version 26 or something :-)

-- Gaby

On Mon, Feb 11, 2013 at 4:34 PM, Gabriel Dos Reis <gdr@integrable-solutions.net> wrote:
Thanks for sharing! My perspective is of course as a user. I don't think I've ever run into a case where the compiler broke a previously working (e.g. C++) program. On the other hand, I have to make a release of most of the libraries I maintain with every GHC release (to bump cabal version constraints to accept the new base version, if nothing else). -- Johan

On Mon, Feb 11, 2013 at 6:37 PM, Johan Tibell wrote:
I understand. Concerning GCC, it is true that the sheer size of the user base and the audience of the compiler ("industrial strength") call for a very conservative approach to ABI or API breaking. On the other hand, that means that there are certain welcome, beneficial changes that we cannot effect. For example, because libstdc++ was an early adopter of a reference-counted implementation of std::string, we could not switch to a more efficient and more multithread-friendly implementation. That work has been contributed for years, but has been sequestered in some branches and namespaces rather than integrated with the rest of the library. That is a shame, but one that is unavoidable given the expectations of the GCC audience.

It is not clear to me that GHC is ready for that kind of constraint. We are still describing the C++11 implementation as "experimental" because we "fear" that doing otherwise might commit ourselves to an ABI and API that we may need to break later -- possibly because of undetected bugs, or because we have found implementations we like better. Of course, that causes some distress in the community, because people would like to say "GCC implements C++11."

Finally, we do break the API; you have just been lucky :-) http://gcc.gnu.org/onlinedocs/libstdc++/manual/api.html But we have also developed a very elaborate scheme of symbol versioning and namespace associations to help users digest API breakages without tears. Symbol versioning is a very messy business.

I am still of the opinion that the current issue with GHC and the HP is social, and it can be resolved through communication and coordination between the two communities for the greater good of the Haskell community.

-- Gaby

Thanks for sharing! My perspective is of course as a user. I don't think I've ever run into a case where the compiler broke a previously working (e.g. C++) program. On the other hand, I have to make a release of most of the libraries I maintain with every GHC release (to bump cabal version constraints to accept the new base version, if nothing else).
Just don't set an upper version bound on base for your packages when you are not sure they will break; write the tested GHC versions in comments instead. You can't install a separate base for a given GHC, so why bother?

According to the PVP you need to use 'base < 4.7' in the version bounds, BUT IT IS INSANE. How do you expect users to test a new GHC release (preview, name it any way), if you require them to unpack every nonresolvable package and update the depends by hand? It's very fun to check a -HEAD version for fixed bugs in that respect.

Luckily, many devs are not that insane and use an arbitrary 'base < 5' or 'base < 10', which will break randomly at an arbitrary base-5 release.

-- Sergei
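The two styles Sergei contrasts look roughly like this in a .cabal file (a sketch; the version numbers are illustrative, and which bound the PVP actually mandates depends on the base version shipped with the release):

```
-- PVP-style hard upper bound: safe against surprise breakage, but every
-- new GHC/base release forces an upload just to relax the bound.
build-depends: base >= 4.5 && < 4.7

-- The looser style Sergei mentions: new GHC releases resolve without an
-- update, but the package claims compatibility it has never tested, and
-- will break unpredictably at whatever release finally reaches the bound.
build-depends: base >= 4.5 && < 5
```

The trade-off is exactly the one in the email: the tight bound protects users of released GHCs at the cost of making preview GHCs untestable without hand-editing dependencies.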

On 11/02/13 23:03, Johan Tibell wrote:
why is the major version of base bumped with every release? Is it really necessary to make breaking changes this often?
One reason for the major version bumps is that base is a big conglomeration of modules, ranging from those that hardly ever change (Prelude) to those that change frequently (GHC.*). For example, the new IO manager that is about to get merged in will force a major bump of base, because it changes GHC.Event. The unicode support in the IO library was similar: although it only added to the external APIs that most people use, it also changed stuff inside GHC.* that we expose for a few clients.

The solution to this would be to split up base further, but of course doing that is itself a major upheaval. However, having done that, it might be more feasible to have non-API-breaking releases.

Of course we do also make well-intentioned changes to libraries, via the library proposal process, and some of these break APIs. But it wouldn't do any harm to batch these up and defer them until the next API-changing release.

It would be great to have a list of the changes that had gone into base in the last few major releases -- any volunteers?

Cheers, Simon
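One way to picture the split Simon describes, as hypothetical .cabal fragments (the package names here are invented for illustration, not actual proposals): clients of the stable surface would see far fewer major bumps than clients of the internals.

```
-- A library using only the slow-moving parts of today's base (Prelude,
-- Data.*, ...) depends on a package whose major version rarely changes:
build-depends: base-stable >= 1.0 && < 1.1

-- A tool poking at GHC internals (GHC.Event, GHC.Prim, ...) opts into a
-- fast-moving package and expects a major bump with every compiler release:
build-depends: ghc-internals >= 7.8 && < 7.9
```

Under the PVP, the major bump would then be confined to the package whose API actually changed, instead of propagating through all of base.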

On 02/12/2013 09:37 AM, Simon Marlow wrote:
Of course we do also make well-intentioned changes to libraries, via the library proposal process, and some of these break APIs. But it wouldn't do any harm to batch these up and defer them until the next API-changing release.
Indeed. It might even be preferable to have just one "huge" breakage every year rather than lots of minor breakages, each of which requires updating packages (and dependencies). At least you'll feel you're getting your money's worth.
participants (8)
- Bardur Arantsson
- Carter Schonwald
- Gabriel Dos Reis
- Johan Tibell
- kudah
- Sergei Trofimovich
- Simon Marlow
- Simon Peyton-Jones