
Dear GHC devops group,

The conversation on Trac #14558 (https://ghc.haskell.org/trac/ghc/ticket/14558) suggests that we might want to consider reviewing GHC's release policies (https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Releases). This email is to invite your input.

The broad question is this. We want GHC to serve the needs of all its users, including downstream tooling that uses GHC. What release policies will best support that goal? For example, we already ensure that GHC 8.4 can be compiled with 8.2 and 8.0. This imposes a slight tax on GHC development, but it means that users don't need to upgrade quite as often. (If the tempo of releases increases, we might want to widen that window.)

Trac #14558 suggests that we might want to ensure that the metadata of GHC's built-in libraries is parsable with older Cabals. One possibility would be this:

* Ensure that the Cabal metadata of non-reinstallable packages (e.g. integer-gmp) shipped with GHC is parsable by the Cabal versions shipped with the last two major GHC releases, i.e. that it has a sufficiently old cabal-version field (a small illustration follows this message). That is, in general a new Cabal specification will need to have shipped with two GHC releases before GHC starts using its features in non-reinstallable packages.
* Upholding this policy won't always be possible. There may be cases (as with Hadrian for GHC 8.4) where the benefit of quickly introducing incompatible syntax outweighs the need for compatibility. In this (hopefully rare) case we would explicitly advertise the incompatibility in the release documentation, and give as much notice as possible so that downstream tools can adapt.
* For reinstallable packages, of which GHC is simply a client (like text or bytestring), we can't reasonably enforce such a policy, because GHC devs have no control over what the maintainers of external core libraries put in their Cabal files.

This is just a proposal. The narrow questions are these:

* Would this be sufficient to deal with the concerns raised in #14558?
* Is it necessary, or would anything simpler be sufficient?
* What costs would the policy impose on GHC development?
* There may be matters of detail: e.g. is two releases the right grace period? Would one do?

Both the broad question and the narrow ones are appropriate for the Devops group.

Thanks!

Simon
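[For concreteness, a minimal sketch of what "a sufficiently old cabal-version field" would look like in practice. The package name, module and bounds below are made up purely for illustration; only the shape matters.]

    -- Hypothetical boot package whose metadata older tools can still read:
    -- it declares (and uses) only the 1.x specification, so a build tool
    -- linked against Cabal-1.24 can parse this file.
    name:           some-boot-pkg
    version:        1.0.1.0
    cabal-version:  >=1.10
    build-type:     Simple

    library
      default-language: Haskell2010
      exposed-modules:  Some.Boot.Module
      -- an explicit range rather than any Cabal-2.0-only shorthand
      build-depends:    base >= 4.9 && < 4.12

[Metadata that instead requires the 2.0 specification (for example because it uses 2.0-only syntax) is rejected by tools built against Cabal-1.24, which is the situation described in Trac #14558.]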

[Replying to ghc-devops-group@, which I assume, based on your email's content, is the mailing list you intended.]

Hi Simon,

Feedback from downstream consumers of Cabal metadata (e.g. build tool authors) will be particularly useful for the discussion here. Here are my thoughts as a bystander.

It's worth trying to identify what problems came up during the integer-gmp incident in Trac #14558:

* GHC 8.2.1 shipped with integer-gmp-1.0.1.0, but the release notes said otherwise.
* GHC 8.2.1 shipped with Cabal-2.0.0.2, but specifically claimed in the release notes that cabal-install-1.24 (and by implication any other build tool based on Cabal-the-library version 1.24) was supported: "GHC 8.2 only works with cabal-install version 1.24 or later. Please upgrade if you have an older version of cabal-install."
* GHC 8.2.2 also claimed Cabal-1.24 support.
* GHC 8.2.1 was released in July 2017 with Cabal-2.0.0.2, a brand new major release with breaking changes to the metadata format, without much lead time for downstream tooling authors (like Stack) to adapt.
* In fact, if we look at their respective release notes, GHC 8.2.1 was released in July 2017, even though the Cabal website claims that Cabal-2.0.0.2 was released in August 2017 (see https://www.haskell.org/cabal/download.html). So it looks like GHC didn't just give too little lead time about an upstream dependency it shipped with; it shipped with an unreleased version of Cabal!
* Libraries that ship with GHC are usually also uploaded to Hackage, to make the documentation easily accessible, but integer-gmp-1.0.1.0 was not uploaded to Hackage until 4 months after the release.
* The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage differed from the metadata that was actually in the source tarballs of GHC-8.2.1 and GHC-8.2.2.
* The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage included Cabal-2.0-specific syntactic sugar, making the metadata unreadable by any tooling that did not link against the Cabal-2.0.0.2 library (or any later version).
* It so happened that one particular version of one particular downstream build tool, Stack, had a bug, compounding the bad effects of the previous point. But a new release has since been made, and in any case that's not a problem for GHC to solve, so let's keep it out of the discussion here.

So I suggest we discuss ways to eliminate, or reduce the likelihood of, any of the above problems occurring again. Here are some ideas:

* GHC should never, under any circumstance, ship with an unreleased version of any independently maintained dependency. Cabal is one such dependency; the same should hold for anything else. We could simply add that policy to the Release Policy.
* Stronger still, GHC should not switch to a new major release of a dependency at any time during the feature freeze ahead of a release. E.g. if Cabal-3.0.0 ships before the feature freeze for GHC-9.6, then maybe it's fair game to include in GHC. But not if Cabal-3.0.0 hasn't shipped yet.
* The 3-release backwards-compatibility rule should apply in all circumstances. That means major version bumps of any library GHC ships with, including base, should not imply any breaking change in the APIs of any such library.
* GHC does have control over reinstallable packages (like text and bytestring): GHC need not ship with the latest versions of these if they introduce breaking changes that would contravene the 3-release policy.
* Note: today, users are effectively tied to whatever version of these packages ships with GHC (i.e. the "reinstallable" bit is problematic today for various technical reasons). That's why a breaking change in bytestring is technically a breaking change in GHC.
* The current release policy covers API stability, but what about metadata? In the extreme, we could say a 3-release policy applies to metadata too. Meaning, all metadata shipping with GHC now and in the next 2 releases should be parsable by today's version of Cabal and downstream tooling. Is such a long lead time necessary? That's for build tool authors to say, and a point to negotiate with GHC devs.
* Because there are far fewer consumers of metadata than consumers of, say, base, I think a shorter lead time is reasonable. At the other extreme, it could even be just the few months of the feature freeze.
* The release-notes bugs mentioned above and the lack of consistent uploads to Hackage are, I suspect, symptoms of a lack of release automation. That's how to fix it, but we could also spell out in the Release Policy that GHC libraries should all be on Hackage from the day of release.

Finally, a question for discussion:

* Hackage allows revising the metadata of an uploaded package even without changing the version number; the Hackage trustees do this routinely today. Should this be permitted for packages whose release is completely tied to that of GHC itself (like integer-gmp)?

Best,
Mathieu

Ben Gamari replies inline below to the following message:

> ---------- Forwarded message ----------
> From: Boespflug, Mathieu
> Date: 13 December 2017 at 23:03
> Subject: Re: Release policies
> To: Simon Peyton Jones
> Cc: ghc-devops-group@haskell.org
> Hi Simon,
>
> feedback from downstream consumers of Cabal metadata (e.g. build tool authors) will be particularly useful for the discussion here. Here are my thoughts as a bystander.
>
> It's worth trying to identify what problems came up during the integer-gmp incident in Trac #14558:
>
> * GHC 8.2.1 shipped with integer-gmp-1.0.1.0 but the release notes said otherwise.
> * GHC 8.2.1 shipped with Cabal-2.0.0.2, but specifically claimed in the release notes that cabal-install-1.24 (and by implication any other build tool based on Cabal-the-library version 1.24) was supported: "GHC 8.2 only works with cabal-install version 1.24 or later. Please upgrade if you have an older version of cabal-install."
> * GHC 8.2.2 also claimed Cabal-1.24 support.
> * GHC 8.2.1 was released in July 2017 with Cabal-2.0.0.2, a brand new major release with breaking changes to the metadata format, without much lead time for downstream tooling authors (like Stack) to adapt.
> * But actually if we look at their respective release notes, GHC 8.2.1 was released in July 2017, even though the Cabal website claims that Cabal-2.0.0.2 was released in August 2017 (see https://www.haskell.org/cabal/download.html). So it looks like GHC didn't just give too little lead time about an upstream dependency it shipped with, it shipped with an unreleased version of Cabal!
Perhaps this is true, and I admit I wasn't happy about releasing the compiler without a Cabal release. However, there was no small amount of pressure to push forward nevertheless, as the release was already quite late and the expectation was that a Cabal release would be coming shortly after the GHC release. Coordination issues like this are a major reason why I think it would be better if GHC were more decoupled from its dependencies' upstreams. I think the approach we discussed at ICFP, where library authors must upstream their version bumps before the freeze, just like any library, is perhaps one way forward, although I suspect exceptions will need to be made.
> * Libraries that ship with GHC are usually also uploaded to Hackage, to make the documentation easily accessible, but integer-gmp-1.0.1.0 was not uploaded to Hackage until 4 months after the release.
> * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage differed from the metadata that was actually in the source tarball of GHC-8.2.1 and GHC-8.2.2.
> * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage included Cabal-2.0 specific syntactic sugar, making the metadata unreadable using any tooling that did not link against the Cabal-2.0.0.2 library (or any later version).
> * It so happened that one particular version of one particular downstream build tool, Stack, had a bug, compounding the bad effects of the previous point. But a new release has now been made, and in any case that's not a problem for GHC to solve. So let's keep that out of the discussion here.
>
> So I suggest we discuss ways to eliminate or reduce the likelihood of any of the above problems from occurring again. Here are some ideas:
>
> * GHC should never under any circumstance ship with an unreleased version of any independently maintained dependency. Cabal is one such dependency. This should hold true for anything else. We could just add that policy to the Release Policy.
We can adopt this as a policy, but doing so very well may mean that GHC will be subject to schedule slips beyond its control. We can hope that upstream maintainers will be responsive, but there is little we can do when they are not. Of course, if we adopt the policy of disallowing all but essential core library bumps during the freeze, then we may be able to mitigate this.
> * Stronger still, GHC should not switch to a new major release of a dependency at any time during feature freeze ahead of a release. E.g. if Cabal-3.0.0 ships before feature freeze for GHC-9.6, then maybe it's fair game to include in GHC. But not if Cabal-3.0.0 hasn't shipped yet.
Yes, this I agree with. I think we can be more accommodating of minor bumps to fix bugs which may come to light during the freeze, but major releases should be avoided.
> * The 3-release backwards compat rule should apply in all circumstances. That means major version bumps of any library GHC ships with, including base, should not imply any breaking change in the APIs of any such library.
I'm not sure I follow what you are suggesting here.
> * GHC does have control over reinstallable packages (like text and bytestring): GHC need not ship with the latest versions of these, if indeed they introduce breaking changes that would contravene the 3-release policy.
> * Note: today, users are effectively tied to whatever version of the packages ships with GHC (i.e. the "reinstallable" bit is problematic today for various technical reasons). That's why a breaking change in bytestring is technically a breaking change in GHC.
I don't follow: Only a small fraction of packages, namely those that explicitly link against the `ghc` library, are tied. Can you clarify what technical reasons you are referring to here?
> * The current release policy covers API stability, but what about metadata? In the extreme, we could say a 3-release policy applies to metadata too. Meaning, all metadata shipping with GHC now and in the next 2 releases should be parseable by today's version of Cabal and downstream tooling. Is such a long lead time necessary? That's for build tool authors to say, and a point to negotiate with GHC devs.
> * Because there are far fewer consumers of metadata than consumers of say base, I think shorter lead time is reasonable. At the other extreme, it could even be just the few months during feature freeze.
Right, I wouldn't be opposed to striving for this in principle, although I think we should be aware that breakage is at times necessary and the policy should accommodate this. I think the important thing is that we be aware of when we are breaking metadata compatibility and convey this to our users.
> * The release notes bugs mentioned above and the lack of consistent upload to Hackage are a symptom of lack of release automation, I suspect. That's how to fix it, but we could also spell out in the Release Policy that GHC libraries should all be on Hackage from the day of release.
Yes, the Hackage uploads have historically been handled manually. I have, and AFAIK most release managers before me have, generally deferred this to Herbert, as he is quite meticulous. However, I think it would be nice if we could remove the need for human intervention entirely.
> Finally, a question for discussion:
>
> * Hackage allows revising the metadata of an uploaded package even without changing the version number. This happens routinely on Hackage today by the Hackage trustees. Should this be permitted for packages whose release is completely tied to that of GHC itself (like integer-gmp)?
If there is a bug in the metadata then I don't think we should necessarily tie our hands from fixing it. However, we should clearly take great care when making such changes to avoid further breakage.

Cheers,

- Ben
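[For concreteness, regarding the revision question above: when a package's metadata is revised on Hackage, the .cabal file that Hackage serves gains an x-revision field while the version number stays the same; the file inside the already-released tarball is untouched. This is one way the copy on Hackage can come to differ from the copy in a GHC source tree. The package below is made up for illustration; remaining fields are elided.]

    -- As originally uploaded (and as found in the release tarball):
    name:          some-boot-pkg
    version:       1.0.1.0
    cabal-version: >=1.10

    -- As served by Hackage after a trustee edit; note the unchanged
    -- version number and the added x-revision field.
    name:          some-boot-pkg
    version:       1.0.1.0
    x-revision:    1
    cabal-version: >=1.10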

From: Boespflug, Mathieu (via ghc-devs)
Sent: 14 December 2017 22:00
To: Ben Gamari

>> * But actually if we look at their respective release notes, GHC 8.2.1 was released in July 2017, even though the Cabal website claims that Cabal-2.0.0.2 was released in August 2017 (see https://www.haskell.org/cabal/download.html). So it looks like GHC didn't just give too little lead time about an upstream dependency it shipped with, it shipped with an unreleased version of Cabal!
>
> Perhaps this is true and I admit I wasn't happy about releasing the compiler without a Cabal release. However, there was no small amount of pressure to push forward nevertheless as the release was already quite late and the expectation was a Cabal release would be coming shortly after the GHC release. Coordination issues like this are a major reason why I think it would be better if GHC were more decoupled from its dependencies' upstreams.
I have the same sentiment. Do you think this is feasible in the case of Cabal? Even if, say, something like Backpack shows up all over again? If so, are there concrete changes that could be made to support the following workflow:

* Upstreams develop their respective libraries independently of GHC, using their own testing.
* If they want GHC to ship a newer version, they create a Diff. As Manuel proposed in a separate thread, this must be before feature freeze, unless...
* ... a critical issue is found in the upstream release, in which case upstream cuts a new release and submits a Diff again.
* GHC always has the option to back out an offending upgrade and revert to a known good version. In fact it should preemptively do so while waiting for a new release of upstream.
* In general, GHC does not track git commits of upstream dependencies in an unknown state of quality, but tracks vetted and tested releases instead.
>> * GHC should never under any circumstance ship with an unreleased version of any independently maintained dependency. Cabal is one such dependency. This should hold true for anything else. We could just add that policy to the Release Policy.
>
> We can adopt this as a policy, but doing so very well may mean that GHC will be subject to schedule slips beyond its control. We can hope that upstream maintainers will be responsive, but there is little we can do when they are not.
Why not? If GHC only ever tracks upstream releases (as I think it should), not git commits in unknown state, then we don't need upstream maintainer responsiveness. Because at any point in time, all GHC dependencies are already released. If GHC should ship with a newer version of a dependency, the onus is on the upstream maintainer to submit a Diff asking GHC to move to the latest version. Are there good reasons for GHC to track patches not upstreamed and released?
>> * Stronger still, GHC should not switch to a new major release of a dependency at any time during feature freeze ahead of a release. E.g. if Cabal-3.0.0 ships before feature freeze for GHC-9.6, then maybe it's fair game to include in GHC. But not if Cabal-3.0.0 hasn't shipped yet.
>
> Yes, this I agree with. I think we can be more accommodating of minor bumps to fix bugs which may come to light during the freeze, but major releases should be avoided.
Agreed.
>> * The 3-release backwards compat rule should apply in all circumstances. That means major version bumps of any library GHC ships with, including base, should not imply any breaking change in the APIs of any such library.
>
> I'm not sure I follow what you are suggesting here.
Nothing new: just that the 3-release policy doesn't just apply to base, but also to anything else that happens to ship with GHC (including Cabal). Perhaps that is already the policy?
>> * GHC does have control over reinstallable packages (like text and bytestring): GHC need not ship with the latest versions of these, if indeed they introduce breaking changes that would contravene the 3-release policy.
>> * Note: today, users are effectively tied to whatever version of the packages ships with GHC (i.e. the "reinstallable" bit is problematic today for various technical reasons). That's why a breaking change in bytestring is technically a breaking change in GHC.
>
> I don't follow: Only a small fraction of packages, namely those that explicitly link against the `ghc` library, are tied. Can you clarify what technical reasons you are referring to here?
Builds often fail for strange reasons when both bytestring-0.10.2 and bytestring-0.10.1 are in scope. Some libraries in a build plan pick up one version where some pick up another. The situation here might well be better than it used to be, but at this point in time Stackage works hard to ensure that in any given package set, there is *exactly one* version of any package. That's why Stackage aligns versions of core packages to whatever ships with the GHC version the package set is based on. So in this sense, AFAIK a bug in bytestring can't be worked around by reinstalling bytestring (not in Stackage land): it requires waiting for the next GHC version that will ship with a new version of bytestring with that bug fixed. I'm not entirely familiar with all Stackage details so Michael - please step in if this is incorrect.
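[A minimal sketch of the failure mode described above, with hypothetical package names and versions: if two dependencies in one build plan were compiled against different bytestring versions, a ByteString produced by one is, as far as GHC is concerned, a different type from the ByteString the other expects.]

    -- pkg-a was built against bytestring-0.10.1.0
    -- pkg-b was built against bytestring-0.10.2.0
    -- An application depending on both cannot pass a ByteString from
    -- pkg-a to pkg-b: at best the solver refuses the plan, at worst both
    -- bytestring versions end up installed and the types fail to match.
    library
      build-depends: pkg-a, pkg-b, bytestring
      -- Pinning exactly one version of every package per package set,
      -- as Stackage does, avoids this situation by construction.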
>> * Because there are far fewer consumers of metadata than consumers of say base, I think shorter lead time is reasonable. At the other extreme, it could even be just the few months during feature freeze.
>
> Right, I wouldn't be opposed to striving for this in principle although I think we should be aware that breakage is at times necessary and the policy should accommodate this. I think the important thing is that we be aware of when we are breaking metadata compatibility and convey this to our users.
That sounds reasonable. But when have we ever needed to use non-backwards-compatible metadata ASAP? The integer-gmp example was a case in point: the Cabal-2.0 feature it was using was merely syntactic sugar at this point, since no tool *yet* interprets the new constructs in any special way, AFAIK.
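[As an illustration of the kind of Cabal-2.0 syntactic sugar in question (not necessarily the exact lines that appeared in integer-gmp-1.0.1.0): the caret operator ^>= is only accepted by parsers that implement the 2.0 specification, yet today it can be spelled as an ordinary range that Cabal-1.24 parses without trouble.]

    -- Requires a Cabal-2.0-aware parser:
    build-depends: base ^>= 4.10.0.0

    -- Equivalent today, and parsable by older Cabal versions
    -- (^>= x.y.z is currently just shorthand for this kind of range):
    build-depends: base >= 4.10.0.0 && < 4.11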
>> * The release notes bugs mentioned above and the lack of consistent upload to Hackage are a symptom of lack of release automation, I suspect. That's how to fix it, but we could also spell out in the Release Policy that GHC libraries should all be on Hackage from the day of release.
>
> Yes, the Hackage uploads have historically been handled manually. I have, and AFAIK most release managers before me have, generally deferred this to Herbert, as he is quite meticulous. However, I think it would be nice if we could remove the need for human intervention entirely.
Indeed. Can be part of the deploy step in the continuous integration pipeline.

| at this point in time Stackage works
| hard to ensure that in any given package set, there is *exactly one*
| version of any package. That's why Stackage aligns versions of core
| packages to whatever ships with the GHC version the package set is
| based on.
Ah. It follows that if Stackage wants to find a set of packages compatible with GHC-X, then it must pick precisely the version of bytestring that GHC-X depends on. (I'm assuming here that GHC-X fixes a particular version, even though bytestring is reinstallable? Certainly, a /distribution/ of GHC-X will do so.)
If meanwhile the bytestring author has decided to use a newer version of .cabal file syntax, then GHC-X is stuck with that. Or would have to go back to an earlier version of bytestring, for which there might be material disadvantages.
That would make it hard for GHC to guarantee to downstream tools that it doesn't depend on any packages whose .cabal files use new syntax; which is where this thread started.
Hmm. I wonder if I have understood this correctly. Perhaps Michael would like to comment?
Simon

On Fri, Dec 15, 2017 at 12:10 PM, Simon Peyton Jones via ghc-devs <ghc-devs@haskell.org> wrote:
>> at this point in time Stackage works hard to ensure that in any given package set, there is *exactly one* version of any package. That's why Stackage aligns versions of core packages to whatever ships with the GHC version the package set is based on.
>
> Ah. It follows that if Stackage wants to find a set of packages compatible with GHC-X, then it must pick precisely the version of bytestring that GHC-X depends on. (I'm assuming here that GHC-X fixes a particular version, even though bytestring is reinstallable? Certainly, a /distribution/ of GHC-X will do so.)
>
> If meanwhile the bytestring author has decided to use a newer version of .cabal file syntax, then GHC-X is stuck with that. Or would have to go back to an earlier version of bytestring, for which there might be material disadvantages.
>
> That would make it hard for GHC to guarantee to downstream tools that it doesn't depend on any packages whose .cabal files use new syntax; which is where this thread started.
>
> Hmm. I wonder if I have understood this correctly. Perhaps Michael would like to comment?
Stackage does in fact pin snapshots down to precisely one version of each package. And in the case of non-reinstallable packages, it ensures that those packages' transitive dependency sets are pinned to the same versions that ship with GHC. I know there's work around making more packages reinstallable, and the ghc package itself may have crossed that line now, but for the moment Stackage assumes that the ghc package and all its dependencies are non-reinstallable. Therefore bytestring, process, containers, transformers, and many more will be pinned in a Stackage snapshot.

Michael

On 15 December 2017 at 09:27, Michael Snoyman wrote:

> Therefore bytestring, process, containers, transformers, and many more will be pinned in a Stackage snapshot.
So that would make it significantly harder, even impossible, for GHC releases to make any promises about the .cabal-file format of these packages, wouldn’t it?
So even if we made some back-compat promise for non-reinstallable things like integer-gmp or base, we could not do so for bytestring.
Does that give you cause for concern? After all, it’s where Trac #14558 started. I don’t see how we can avoid the original problem, since we don’t have control over the .cabal file format used by the authors of the packages on which we depend.
Still: GHC can only depend on a package P if the version X of Cabal that GHC is using can parse P.cabal. So if we fix Cabal-X some while in advance and announce that, perhaps that would serve the purpose?
Simon

On Fri, Dec 15, 2017, 6:23 PM Simon Peyton Jones wrote:
>> Therefore bytestring, process, containers, transformers, and many more will be pinned in a Stackage snapshot.
>
> So that would make it significantly harder, even impossible, for GHC releases to make any promises about the .cabal-file format of these packages, wouldn’t it?
>
> So even if we made some back-compat promise for non-reinstallable things like integer-gmp or base, we could not do so for bytestring.
>
> Does that give you cause for concern? After all, it’s where Trac #14558 started. I don’t see how we can avoid the original problem, since we don’t have control over the .cabal file format used by the authors of the packages on which we depend.
>
> Still: GHC can only depend on a package P if the version X of Cabal that GHC is using can parse P.cabal. So if we fix Cabal-X some while in advance and announce that, perhaps that would serve the purpose?
>
> Simon
That will certainly help. Even if GHC can't force any behavior on upstream packages, perhaps just an official request that new features in the cabal file format be held off on would be sufficient. After all, the case in the Trac issue was a situation where the new cabal feature wasn't necessary. I would imagine that in the vast majority of cases, maintaining backwards compatibility in these packages will not only be desirable, but relatively trivial.