
Hi devs,

An interesting topic came up over dinner tonight: what if GHC made more releases? As an extreme example, we could release a new point version every time a bug fix gets merged to the stable branch. This may be a terrible idea. But what's stopping us from doing so?

The biggest objection I can see is that we would want to make sure that users' code would work with the new version. Could the Stackage crew help us with this? If they run their nightly build with a release candidate and diff against the prior results, we would get a pretty accurate sense of whether the bugfix is good. If this test succeeds, why not release? Would it be hard to automate the packaging/posting process?

The advantage to more releases is that it gets bugfixes in more hands sooner. What are the disadvantages?

Richard

PS: I'm not 100% sold on this idea. But I thought it was interesting enough to raise a broader discussion.
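The "diff against the prior results" gate Richard describes can be sketched concretely. The build-result format below (package name mapped to build success) is an invented stand-in for illustration, not Stackage's actual reporting format:

```python
# Hypothetical sketch: compare per-package outcomes of a Stackage nightly
# run on the current release against a run on the release candidate, and
# report packages that regressed. Data format is made up for this example.
def regressions(prior, candidate):
    """Return packages that built with the prior release but fail with the candidate."""
    return sorted(
        pkg for pkg, ok in prior.items()
        if ok and not candidate.get(pkg, False)
    )

prior = {"aeson": True, "lens": True, "brittle-pkg": True}
candidate = {"aeson": True, "lens": True, "brittle-pkg": False}

bad = regressions(prior, candidate)
if bad:
    print("do not release; regressions:", ", ".join(bad))
else:
    print("no regressions; candidate looks releasable")
```

If the regression list is empty, the bugfix release could in principle go out automatically.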

It's definitely an interesting idea. From the Stackage side: I'm happy to
provide testing and, even better, support to get some automated Stackage
testing tied into the GHC release process. (Why not be more aggressive? We
could do some CI against Stackage from the 7.10 branch on a regular basis.)
I like the idea of getting bug fixes out to users more frequently, so I'm
definitely +1 on the discussion. Let me play devil's advocate though:
having a large number of versions of GHC out there can make it difficult for library authors, package curators, and large open source projects, due to the variety of what people are using. If we end up in a world where
virtually everyone ends up on the latest point release in a short
timeframe, the problem is reduced, but most of our current installation
methods are not amenable to that. We need to have a serious discussion
about how Linux distros, Haskell Platform, minimal installers, and so on
would address this shift. (stack would be able to adapt to this easily
since it can download new GHCs as needed, but users may not like having
100MB installs on a daily basis ;).)
What I would love to see is that bug fixes are regularly backported to the
stable GHC release and that within a reasonable timeframe are released,
where reasonable is some value we can discuss and come to consensus on.
I'll say that at the extremes: I think a week is far too short, and a year
is far too long.
On Tue, Sep 1, 2015 at 9:45 AM, Richard Eisenberg wrote:

_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

having a large number of versions of GHC out there can make it difficult for library authors, package curators, and large open source projects, due to variety of what people are using.
For point releases, if we do it right, this *should* not happen, since the changes *should* be backwards-compatible and so testing against the oldest release on the current major version *should* mean all subsequent point releases work as well. IMHO, any violation of this assumption *should* be considered a (serious) bug.
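The compatibility contract asserted above can be stated as code: a minimal sketch, assuming (as the paragraph does) that code tested against the oldest release of a major series should work with every later point release, so only the first two version components matter:

```python
# Sketch of the point-release compatibility assumption: two GHC versions
# are interchangeable for testing purposes when they differ at most in
# the point-release component (e.g. 7.10.1 vs 7.10.2).
def same_major_series(v1, v2):
    """True when two GHC version strings share their major series."""
    return v1.split(".")[:2] == v2.split(".")[:2]

assert same_major_series("7.10.1", "7.10.2")    # point releases only
assert not same_major_series("7.8.4", "7.10.2") # major series differ
```

Under this policy, any observable breakage between 7.10.1 and 7.10.2 would be the "(serious) bug" the paragraph mentions.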

On 2015-09-01 at 08:45:40 +0200, Richard Eisenberg wrote:
The advantage to more releases is that it gets bugfixes in more hands sooner. What are the disadvantages?
I'd say mostly organisational overhead which can't be fully automated (afaik, Ben has already automated large parts, but not everything can be):

- Coordinating with people creating and testing the bindists
- Writing release notes & announcement
- Coordinating with the HP release process (which requires separate QA)
- If bundled core libraries are affected, coordination overhead with package maintainers (unless GHC HQ owned), verifying version bumps (API diff!) and changelogs have been updated accordingly, uploading to Hackage
- Uploading and signing packages to download.haskell.org, and verifying the downloads

Austin & Ben probably have more to add to this list.

That said, doing more stable point releases is certainly doable if the bugs fixed are critical enough. This is mostly a trade-off between time spent on getting GHC HEAD in shape for the next major release (whose release schedules suffer from time delays anyway) and maintaining a stable branch.

Cheers,
hvr

On Sep 1, 2015, at 12:01 AM, Herbert Valerio Riedel wrote:
I'd say mostly organisational overhead which can't be fully automated (afaik, Ben has already automated large parts but not everything can be):
- Coordinating with people creating and testing the bindists
This was the sort of thing I thought could be automated. I'm picturing a system where Austin/Ben hits a button and everything whirs to life, creating, testing, and posting bindists, with no people involved.
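The "hit a button and everything whirs to life" pipeline could be sketched as a tiny driver that runs each stage in order and stops at the first failure. The step commands here (including `upload-bindist.sh`) are placeholders for illustration, not the actual GHC release tooling:

```python
# Hypothetical release-pipeline driver. Each step is an external command;
# a failing step aborts the release. dry_run=True just reports the plan.
import subprocess

STEPS = [
    ["make", "binary-dist"],   # build the bindist
    ["make", "test"],          # run the testsuite against it
    ["./upload-bindist.sh"],   # sign and post it (placeholder script name)
]

def run_release(steps, dry_run=True):
    """Run each step in order, stopping at the first failure.
    With dry_run=True, return the commands that would be executed."""
    executed = []
    for cmd in steps:
        if dry_run:
            executed.append(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # raises on non-zero exit
    return executed

print("\n".join(run_release(STEPS)))
```

The point of the sketch is only that the sequencing is mechanical; the hard part, as the rest of the thread notes, is making each individual step reliable enough to run unattended.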
- Writing release notes & announcement
Release notes should, theoretically, be updated with the patches. Announcement can be automated.
- Coordinating with the HP release process (which requires separate QA)
I'm sure others will have opinions here, but I guess I was thinking that the HP wouldn't be involved. These tiny releases could even be called something like "7.10.2 build 18". The HP would get updated only when we go to 7.10.3. Maybe we even have a binary compatibility requirement between tiny releases -- no interface file changes! Then a user's package library doesn't have to be recompiled when updating. In theory, other than the bugfixes, two people with different "builds" of GHC should have the same experience.
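Richard's "no interface file changes" requirement could be checked mechanically. The sketch below is a hypothetical check, assuming per-module interface hashes have been collected for each build (e.g. as one might with `ghc --show-iface`); the module names and hashes are made up:

```python
# Hedged sketch of a binary-compatibility gate between "tiny releases":
# two builds may share a user's compiled package library only when every
# module's interface hash is identical.
def abi_compatible(build_a, build_b):
    """True when both builds expose identical interface hashes per module."""
    return build_a == build_b

build_18 = {"GHC.Base": "ab12", "Data.List": "cd34"}
build_19 = {"GHC.Base": "ab12", "Data.List": "cd34"}  # bugfix, same ABI
build_20 = {"GHC.Base": "ab12", "Data.List": "ee56"}  # interface changed

assert abi_compatible(build_18, build_19)      # no recompilation needed
assert not abi_compatible(build_18, build_20)  # packages must be rebuilt
```

A gate like this is what would let two users on different "builds" of 7.10.2 have the same experience apart from the bugfixes themselves.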
- If bundled core-libraries are affected, coordination overhead with package maintainers (unless GHC HQ owned), verifying version bumps (API diff!) and changelogs have been updated accordingly, uploading to Hackage
Any library version change would require a more proper release. Do these libraries tend to change during a major release cycle?
- Uploading and signing packages to download.haskell.org, and verifying the downloads
This isn't automated?
Austin & Ben probably have more to add to this list
I'm sure they do. Again, I'd be fine if the answer from the community is "it's just not what we need". But I wanted to see if there were technical/practical/social reasons why this was or wasn't a good idea. If we do think it's a good idea absent those reasons, then we can work on addressing those concerns.

Richard
That said, doing more stable point releases is certainly doable if the bugs fixed are critical enough. This is mostly a trade-off between time spent on getting GHC HEAD in shape for the next major release (whose release-schedules suffer from time delays anyway) vs. maintaining a stable branch.
Cheers, hvr

Richard Eisenberg writes:

On Sep 1, 2015, at 12:01 AM, Herbert Valerio Riedel wrote:
I'd say mostly organisational overhead which can't be fully automated (afaik, Ben has already automated large parts but not everything can be):
- Coordinating with people creating and testing the bindists
This was the sort of thing I thought could be automated. I'm picturing a system where Austin/Ben hits a button and everything whirs to life, creating, testing, and posting bindists, with no people involved.
I can nearly do this for Linux with my existing tools. I can do 32- and 64-bit builds for both RedHat and Debian all on a single Debian 8 machine with the tools I developed during the course of the 7.10.2 release [1].

Windows is unfortunately still a challenge. I did the 7.10.2 builds on an EC2 instance and the experience wasn't terribly fun. I would love for this to be further automated, but I've not done this yet.
- Writing release notes & announcement
Release notes should, theoretically, be updated with the patches. Announcement can be automated.
If I'm doing my job well the release notes shouldn't be a problem. I've been trying to be meticulous about ensuring that all new features come with acceptable release notes.
- If bundled core-libraries are affected, coordination overhead with package maintainers (unless GHC HQ owned), verifying version bumps (API diff!) and changelogs have been updated accordingly, uploading to Hackage
Any library version change would require a more proper release. Do these libraries tend to change during a major release cycle?
The core libraries are perhaps the trickiest part of this. Currently the process goes something like this:

1. We branch off a stable GHC release.
2. Development continues on `master`; eventually a breaking change is merged to one of the libraries.
3. Eventually someone notices and bumps the library's version.
4. More breaking changes are merged to the library.
5. We branch off for another stable release; right before the release we manually push the libraries to Hackage.
6. Repeat from (2).

There can potentially be a lot of interface churn between steps 3 and 5. If we did releases in this period we would need to be much more careful about library versioning. I suspect this may end up being quite a bit of work to do properly.

Technically we could punt on this problem and just do the same sort of stable/unstable versioning for the libraries that we already do with GHC itself. This would mean, however, that we couldn't upload the libraries to Hackage.
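The "verifying version bumps (API diff!)" work mentioned earlier can be illustrated with a toy API diff that maps a change to the minimum bump the Haskell Package Versioning Policy would demand. This sketch compares exported-name sets only; real PVP checking also has to consider types, instances, and re-exports:

```python
# Toy PVP check: given the exported names of a library before and after
# a change, compute the minimum required version bump.
#   removals/changes -> major bump; additions only -> minor; none -> patch.
def required_bump(old_api, new_api):
    if old_api - new_api:   # something was removed
        return "major"
    if new_api - old_api:   # additions only
        return "minor"
    return "patch"          # identical API

old = {"map", "filter", "foldr"}
assert required_bump(old, old) == "patch"
assert required_bump(old, old | {"foldl'"}) == "minor"
assert required_bump(old, old - {"foldr"}) == "major"
```

Automating even this crude check between steps 3 and 5 above would catch the case where breaking changes land without anyone bumping the version.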
- Uploading and signing packages to download.haskell.org, and verifying the downloads
This isn't automated?
It is now (see [2]). This shouldn't be a problem.
Austin & Ben probably have more to add to this list
I'm sure they do.
Again, I'd be fine if the answer from the community is "it's just not what we need". But I wanted to see if there were technical/practical/social reasons why this was or wasn't a good idea. If we do think it's a good idea absent those reasons, then we can work on addressing those concerns.
Technically I think there are no reasons why this isn't feasible with some investment. Exactly how much investment depends upon what exactly we want to achieve:

* How often do we make these releases?
* Which platforms do we support?
* How carefully do we version included libraries?

If we focus solely on Linux and punt on the library versioning issue, I would say this wouldn't even be difficult. I could easily set up my build machine to do a nightly bindist and push it to a server somewhere. Austin has also mentioned that Harbormaster builds could potentially produce bindists.

The question is whether users want more rapid releases. Those working on GHC will use their own builds. Most users want something reasonably stable (in both the interface sense and the reliability sense) and therefore I suspect would stick with the releases. This leaves a relatively small number of potential users; namely those who want to play around with unreleased features yet aren't willing to do their own builds.

Cheers,

- Ben

[1] https://github.com/bgamari/ghc-utils
[2] https://github.com/bgamari/ghc-utils/blob/master/rel-eng/upload.sh

On 2015-09-02 at 12:43:57 +0200, Ben Gamari wrote: [...]
The question is whether users want more rapid releases. Those working on GHC will use their own builds. Most users want something reasonably stable (in both the interface sense and the reliability sense) and therefore I suspect would stick with the releases. This leaves a relatively small number of potential users; namely those who want to play around with unreleased features yet aren't willing to do their own builds.
Btw, for those who are willing to use Ubuntu, there are already GHC HEAD builds available in my PPA, and I can easily keep creating GHC 7.10.3 snapshots in the same style as I usually do shortly before a stable point release.

I think some of my idea was misunderstood here: my goal was to have quick releases only from the stable branch. The goal would not be to release the new and shiny, but instead to get bugfixes out to users quicker. The new and shiny (master) would remain as it is now. In other words: more users would be affected by this change than just the vanguard.
Richard

Richard Eisenberg writes:
I think some of my idea was misunderstood here: my goal was to have quick releases only from the stable branch. The goal would not be to release the new and shiny, but instead to get bugfixes out to users quicker. The new and shiny (master) would remain as it is now. In other words: more users would be affected by this change than just the vanguard.
I see. This is something we could certainly do.

It would require, however, that we be more proactive about continuing to merge things to the stable branch after the release. Currently the stable branch is essentially in the same state that it was in for the 7.10.2 release. I've left it this way as it takes time and care to cherry-pick patches to stable. Thus far my policy has been to perform this work lazily until it's clear that we will do another stable release, as otherwise the effort may well be wasted.

So, even if the steps of building, testing, and uploading the release are streamlined, more frequent releases are still far from free. Whether it's a worthwhile cost I don't know.

This is a difficult question to answer without knowing more about how typical users actually acquire GHC. For instance, this effort would have minimal impact on users who get their compiler through their distribution's package manager. On the other hand, if most users download GHC bindists directly from the GHC download page, then perhaps this would be effort well spent.

Cheers,

- Ben

I have the impression (no data to back it up, though) that no small number
of users download bindists (because most OS packages are out of date:
Debian Unstable is still on 7.8.4, as is Ubuntu Wily; Arch is on 7.10.1).

Merging and releasing a fix to the stable branch always carries a cost: it might break something else. There is a real cost to merging, which is why we've followed the lazy strategy that Ben describes.
Still, even given the lazy strategy we could perfectly well put out minor releases more proactively; e.g. fix one bug (or a little batch) and release. Provided we could reduce the per-release costs.
Simon
| -----Original Message-----
| From: ghc-devs [mailto:ghc-devs-bounces@haskell.org] On Behalf Of Ben Gamari
| Sent: 02 September 2015 17:05
| To: Richard Eisenberg
| Cc: GHC developers
| Subject: Re: more releases

On 09/07/2015 04:57 PM, Simon Peyton Jones wrote:
Merging and releasing a fix to the stable branch always carries a cost: it might break something else. There is a real cost to merging, which is why we've followed the lazy strategy that Ben describes.
A valid point, but the upside is that it's a very fast operation to revert if a release is "bad"... and get that updated release into the wild.

Regards,

Some people had asked what users want and about typical usage, so I'll give my perspective. I consider myself a pretty typical user of
Haskell: PhD student (in theory, not languages), but still pushing the
boundaries of the compiler. I've filed quite a few bugs, so I have
experience with having to wait for them to get fixed. My code at various
points has been littered with "see ticket #xxx for why I'm jumping through
three hoops to accomplish this". As a result, I would be interested in
getting builds with bugfixes. For example see the discussion on #10428:
https://ghc.haskell.org/trac/ghc/ticket/10428. It's hard for a user to tell
if/when a patch will be merged. I'm using 7.10.1 at the moment, but I was
unsure if the patch for #10428 made it to 7.10.2.
Ben: I download the GHC bindist directly from the GHC page precisely
because the one on the PPA is (inevitably) ancient.
Upgrading GHC (even minor releases; I just tried 7.10.2 to confirm this) is
a pain because I have to spend an hour downloading and re-building all of
the packages I need. However, I'd certainly be willing to do that for bugs
that affect my code. Richard said, "Then a user's package library doesn't
have to be recompiled when updating". If he means that I wouldn't have to
do that, that's fantastic. However, I still wouldn't download every tiny release due to the 100 MB download and install time just to fix bugs that don't affect me (I'd only do that for bugs that *do* affect me).
In short: I'd really like to have builds for every bug (or maybe every
day/week) that I can easily download and install.
participants (9)

- Alex Rozenshteyn
- Bardur Arantsson
- Ben Gamari
- Eric Crockett
- Herbert Valerio Riedel
- Michael Snoyman
- Richard Eisenberg
- Simon Peyton Jones
- Stephen Paul Weber