
I would like to propose the following changes to the PVP. These are the same changes that I recently published on the Yesod blog[1]. For more information on the motivations behind these changes, please see that blog post.

1. The goal of the PVP needs to be clarified. Its purpose is not to ensure reproducible builds of non-published software, but rather to provide for more reliable builds of libraries on Hackage. Reproducible builds should be handled exclusively through version freezing, the only known technique to actually give the necessary guarantees.

2. Upper bounds should not be included on non-upgradeable packages, such as base and template-haskell (are there others?). Alternatively, we should establish some accepted upper bound on these packages, e.g. many people place base < 5 on their code.

3. We should be distinguishing between mostly-stable packages and unstable packages. For a package like text, if you simply import Data.Text (Text, pack, reverse), or some other sane subset, there's no need for upper bounds. (Note that this doesn't provide a hard-and-fast rule like the current PVP, but is rather a matter of discretion. Communication between library authors and users (via documentation or other means) would be vital to making this work well.)

4. For a package version A.B.C, a bump in A or B indicates some level of breaking change. As an opt-in approach, package authors are free to associate meaning with A and B beyond what the PVP requires. Libraries which use these packages are free to rely on the guarantees provided by package authors when placing upper bounds. (Note that this is very related to point (3).)

Discussion period: 3 weeks.

[1] http://www.yesodweb.com/blog/2014/04/proposal-changes-pvp
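To make the difference concrete, here is a hypothetical build-depends fragment (package names and version numbers are purely illustrative) contrasting the current policy with the proposal:

```
-- Current PVP: every dependency is capped at the next major version.
build-depends:
    base             >= 4.6 && < 4.7
  , template-haskell >= 2.9 && < 2.10
  , text             >= 1.1 && < 1.2

-- Under this proposal: no cap (or a conventional "base < 5") on
-- non-upgradeable packages, and a loose cap on a mostly-stable
-- package when only a sane subset of its API is imported.
build-depends:
    base             >= 4.6 && < 5
  , template-haskell >= 2.9
  , text             >= 1.1 && < 2
```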

-1, obviously :)
On Wed, Apr 9, 2014 at 10:47 AM, Michael Snoyman
I would like to propose the following changes to the PVP. These are the same changes that I recently published on the Yesod blog[1]. For more information on the motivations behind these changes, please see that blog post.
1. The goal of the PVP needs to be clarified. Its purpose is not to ensure reproducible builds of non-published software, but rather to provide for more reliable builds of libraries on Hackage. Reproducible builds should be handled exclusively through version freezing, the only known technique to actually give the necessary guarantees.
2. Upper bounds should not be included on non-upgradeable packages, such as base and template-haskell (are there others?). Alternatively, we should establish some accepted upper bound on these packages, e.g. many people place base < 5 on their code.
3. We should be distinguishing between mostly-stable packages and unstable packages. For a package like text, if you simply import Data.Text (Text, pack, reverse), or some other sane subset, there's no need for upper bounds. (Note that this doesn't provide a hard-and-fast rule like the current PVP, but is rather a matter of discretion. Communication between library authors and users (via documentation or other means) would be vital to making this work well.)
4. For a package version A.B.C, a bump in A or B indicates some level of breaking change. As an opt-in approach, package authors are free to associate meaning with A and B beyond what the PVP requires. Libraries which use these packages are free to rely on the guarantees provided by package authors when placing upper bounds. (Note that this is very related to point (3).)
Discussion period: 3 weeks.
[1] http://www.yesodweb.com/blog/2014/04/proposal-changes-pvp
_______________________________________________ Libraries mailing list Libraries@haskell.org http://www.haskell.org/mailman/listinfo/libraries
--
Gregory Collins

On 09.04.2014 10:47, Michael Snoyman wrote:
I would like to propose the following changes to the PVP. These are the same changes that I recently published on the Yesod blog[1]. For more information on the motivations behind these changes, please see that blog post.
I see no need to complicate the PVP with distinctions about stable and unstable, published and unpublished packages. My preferred solution is to declare lower and upper version bounds in Build-Depends according to current PVP and ignore these bounds on demand via options that are passed to cabal-install.

(1) I would never say no to making something more clear. The goal is currently given in the first few paragraphs of the PVP. Please give the actual text you'd like to see. The current goal can be summarized as: give a meaning to version numbers. The reason for that goal was problems with unbuildable packages that we experienced prior to the PVP.

I don't see how the PVP as written anywhere suggests that the PVP aspires to provide reproducible builds. It only talks about compatible APIs.

(2) I'm somewhat ambivalent about removing the upper bounds for the 3 packages that are not upgradable (base, template-haskell, and ghc-prim). On one hand removing upper bounds doesn't introduce additional breakages and avoids having maintainers bump bounds on each GHC release* (~yearly), but on the other hand it creates worse error messages**. The solver could tell the user "Your package doesn't work with the version of base you have installed." instead of giving them a potentially confusing compilation error.

* I find it quite disconcerting that we bump the major version of base on every release. We shouldn't make breaking changes to the most core of core libraries on every release! I also note that GHC 7.4.1 seems to have bumped the major version of base even though no breaking change was made: https://www.haskell.org/ghc/docs/7.4.1/html/users_guide/release-7-4-1.html

** Although Cabal's dependency solver doesn't give the best messages today either. But at least it could be improved.

(3) This is already the case. We just don't encourage authors to do it (as maintaining version information in documentation rather than machine-checkable contracts tends to be hard to maintain.)

-- Johan

On 04/09/2014 11:31 AM, Johan Tibell wrote:
On one hand removing upper bounds doesn't introduce additional breakages and avoids having maintainers bump bounds on each GHC release* (~yearly), but on the other hand it creates worse error messages**.
They might be scarier, but I think they are better. A clueless end user will at least know which package won't build for him, and can alert the appropriate maintainer.

On Wed, Apr 9, 2014 at 12:31 PM, Johan Tibell
(1) I would never say no to making something more clear. The goal is currently given in the first few paragraphs of the PVP. Please give the actual text you'd like to see.
The current goal can be summarized as: give a meaning to version numbers. The reason for that goal was problems with unbuildable packages that we experienced prior to the PVP.
I don't see how the PVP as written anywhere suggests that the PVP aspires to provide reproducible builds. It only talks about compatible APIs.
Nonetheless, there is definitely confusion. The easiest way to see that is to look at the Reddit discussion of the blog post[1]. For example:
Which implicitly includes supporting reproducible builds for "non-published software"
There are other examples in that discussion, as well as in the libraries@ discussion. My proposed addition to the PVP itself would be the text:

While PVP compliance makes getting a successful build more likely, it does not try to encourage reproducible builds: builds which use exactly the same dependencies as previous builds. In particular: minor version bumps and changes in transitive dependencies can easily slip in. Reproducible builds are highly recommended for building production executables, and for that, dependency freezing is the only known solution (to be included in cabal-install after version X).

[1] http://www.reddit.com/r/haskell/comments/22jlis/proposal_changes_to_the_pvp/
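As an illustration of what freezing means here (a sketch only: the cabal-install support referred to above is not yet released, and the file name and version numbers shown are assumptions): a freeze step records the exact versions from a known-good build, and later builds reuse them verbatim.

```
-- cabal.config, generated from a known-good install plan and checked
-- into the application's repository; cabal-install would pick it up
-- on subsequent builds, pinning every dependency, including
-- transitive ones, to an exact version:
constraints: base ==4.6.0.1,
             bytestring ==0.10.0.2,
             text ==1.1.0.0
```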
(2) I'm somewhat ambivalent about removing the upper bounds for the 3 packages that are not upgradable (base, template-haskell, and ghc-prim). On one hand removing upper bounds doesn't introduce additional breakages and avoids having maintainers bump bounds on each GHC release* (~yearly), but on the other hand it creates worse error messages**. The solver could tell the user "Your package doesn't work with the version of base you have installed." instead of giving them a potentially confusing compilation error.
* I find it quite disconcerting that we bump the major version of base on every release. We shouldn't make breaking changes to the most core of core libraries on every release! I also note that GHC 7.4.1 seems to have bumped the major version on base even though no breaking change was made: https://www.haskell.org/ghc/docs/7.4.1/html/users_guide/release-7-4-1.html
Though it's really a separate discussion, +1 on the idea of decreasing frequency of major version bumps in base and template-haskell.
** Although Cabal's dependency solver doesn't give the best messages today either. But at least it could be improved.
(3) This is already the case. We just don't encourage authors to do it (as maintaining version information in documentation rather than machine-checkable contracts tends to be hard to maintain.)
Yet in this same thread Erik said:
This sounds too vague to be an actual policy, so -1.
So it seems like the intention of the PVP itself is unclear at this point. Michael

On Wed, Apr 9, 2014 at 12:23 PM, Michael Snoyman
Nonetheless, there is definitely confusion. The easiest way to see that is to look at the Reddit discussion of the blog post[1]. For example:
Which implicitly includes supporting reproducible builds for "non-published software"
There are other examples in that discussion, as well as in the libraries@discussion.
I think people were confused by your use of the word "reproducible", some take it to mean "if this package built before it will still build" (the PVP aims at this) and others to mean "build exactly the same bits as before". The PVP and people's interpretation of it doesn't seem to be confused, as seen by reading the rest of the comment you quoted. Put in other words, I don't think anyone believes the PVP is about freezing dependencies, as it's about the very opposite of that, namely allowing ranges of versions.
My proposed addition to the PVP itself would be the text:
While PVP compliance makes getting a successful build more likely, it does not try to encourage reproducible builds: builds which use exactly the same dependencies as previous builds. In particular: minor version bumps and changes in transitive dependencies can easily slip in. Reproducible builds are highly recommended for building production executables, and for that, dependency freezing is the only known solution (to be included in cabal-install after version X).
If we add it it should be as a footnote at the bottom. Bringing up this totally orthogonal issue is likely to confuse people more, not less. Saying that the PVP makes builds more "likely" is understating the guarantee given quite a bit. With the exception of the issue with module and instance re-exports that has been discussed elsewhere and is mentioned on the PVP page, the PVP *guarantees* that things will build, if they built before.
** Although Cabal's dependency solver doesn't give the best messages today either. But at least it could be improved.
(3) This is already the case. We just don't encourage authors to do it (as maintaining version information in documentation rather than machine-checkable contracts tends to be hard to maintain.)
Yet in this same thread Erik said:
This sounds too vague to be an actual policy, so -1.
So it seems like the intention of the PVP itself is unclear at this point.
Quite intentionally so. We definitely do not *want* to encourage people to add extra, non-checkable, ad-hoc policies on top of the PVP; we merely allow for them to do so. I noted that even though it's allowed, not a single package I've seen provides extra guarantees.

-- Johan

On Wed, Apr 9, 2014 at 2:40 PM, Johan Tibell
On Wed, Apr 9, 2014 at 12:23 PM, Michael Snoyman
wrote: Nonetheless, there is definitely confusion. The easiest way to see that is to look at the Reddit discussion of the blog post[1]. For example:
Which implicitly includes supporting reproducible builds for "non-published software"
There are other examples in that discussion, as well as in the libraries@discussion.
I think people were confused by your use of the word "reproducible", some take it to mean "if this package built before it will still build" (the PVP aims at this) and others to mean "build exactly the same bits as before". The PVP and people's interpretation of it doesn't seem to be confused, as seen by reading the rest of the comment you quoted. Put in other words, I don't think anyone believes the PVP is about freezing dependencies, as it's about the very opposite of that, namely allowing ranges of versions.
My proposed addition to the PVP itself would be the text:
While PVP compliance makes getting a successful build more likely, it does not try to encourage reproducible builds: builds which use exactly the same dependencies as previous builds. In particular: minor version bumps and changes in transitive dependencies can easily slip in. Reproducible builds are highly recommended for building production executables, and for that, dependency freezing is the only known solution (to be included in cabal-install after version X).
If we add it it should be as a footnote at the bottom. Bringing up this totally orthogonal issue is likely to confuse people more, not less.
Saying that the PVP makes builds more "likely" is understating the guarantee given quite a bit. With the exception of the issue with module and instance re-exports that has been discussed elsewhere and is mentioned on the PVP page, the PVP *guarantees* that things will build, if they built before.
Maybe I'm more sensitive to this because I've had PVP advocates claim I broke their local builds by violating the PVP, when in fact it was specifically the exception that you mention that was the problem. That's why I both want to make it clear that the PVP doesn't guarantee reproducible builds, and why I don't put huge stock in the PVP guaranteeing that builds will always succeed.

I've identified one set of holes in the PVP: typeclass instances and re-exports leaking from transitive dependencies. Another hole is depending on packages which don't follow the PVP. Another hole is the fact that it's quite easy to make mistakes in trying to follow the PVP (I've been bitten by that multiple times in the past). And it's entirely possible that other holes exist today.

However, all of these problems are solved completely by version freezing. I think we need to be honest about the PVP: it does a good job of ensuring builds succeed most of the time, but it does not solve all problems. We should be encouraging users to use more reliable solutions. I'm worried that people are falsely relying on the PVP as a panacea for build problems.

So having specified all of that, maybe the wording I'm really hoping to add to the PVP is:

While the PVP addresses many causes of build failures, there are still cases it does not address (footnote to the list I just provided). These problems can be solved by version freezing, which is available in cabal-install since version X. It is highly recommended that users not rely exclusively on the PVP for their application builds, but instead use version freezing.
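The transitive-leak hole can be sketched as follows (a contrived example; the module, class, and package names are hypothetical, not taken from any real package):

```haskell
-- Our package depends directly only on foo. foo's module re-exports
-- an orphan instance that actually lives in bar, a transitive
-- dependency we never mention in our cabal file:
import Foo (SomeMonad, runSomeMonad)    -- hypothetical API
import Control.Monad.IO.Class (liftIO)

action :: SomeMonad ()
action = liftIO (putStrLn "hello")  -- uses the leaked MonadIO instance

-- A later *minor* bump of bar drops or moves that instance. Our
-- PVP-compliant bounds on foo still admit the new bar, the solver
-- happily picks it, and the build fails with
-- "No instance for (MonadIO SomeMonad)".
```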
** Although Cabal's dependency solver doesn't give the best messages today either. But at least it could be improved.
(3) This is already the case. We just don't encourage authors to do it (as maintaining version information in documentation rather than machine-checkable contracts tends to be hard to maintain.)
Yet in this same thread Erik said:
This sounds too vague to be an actual policy, so -1.
So it seems like the intention of the PVP itself is unclear at this point.
Quite intentionally so. We definitely do not *want* to encourage people to add extra, non-checkable, ad-hoc policies on top of the PVP; we merely allow for them to do so. I noted that even though it's allowed, not a single package I've seen provides extra guarantees.
That's a chicken-and-egg scenario. No one knows they're allowed to do this, so no one does it, therefore we don't need to let anyone know they can do it. And to be clear, I'm arguing that *yes*, we want to encourage people to define stable subsets of their API. Even better would be completely stable APIs, but I don't think that's a reasonable thing to expect in the immediate future. Michael

On Wed, Apr 9, 2014 at 2:15 PM, Michael Snoyman
However, all of these problems are solved completely by version freezing. I think we need to be honest about the PVP: it does a good job of ensuring builds succeed most of the time, but it does not solve all problems. We should be encouraging users to use more reliable solutions. I'm worried that people are falsely relying on the PVP as a panacea for build problems.
I think you're placing too much faith in version freezing. There should still be a good story about upgrading the dependencies of a 'frozen' package. If this will result in lots of build failures because of missing upper bounds (don't follow the PVP since we have freezing) then this will be costly. So even in the presence of freezing (which we don't have yet), I think the PVP is still important.

Also, I don't think we're going to freeze libraries on Hackage, so just installing a package to play with it will also still involve an unfrozen build.

Erik

I'm confused, Michael, have you documented your not-PVP convention anywhere?

On Wed, Apr 9, 2014 at 5:19 PM, Carter Schonwald wrote: I'm confused, Michael, have you documented your not PVP convention
anywhere? I'm not sure what you're asking. The incident in question had nothing to do
with PVP compliance or lack thereof. The proposal on the table is pretty
close to what I use in practice right now. If this proposal is approved, I
would likely start adding in upper bounds in accord with what the new
standards would be. Right now, in cases of mostly-stable packages like text
and bytestring, I've simply left off the upper bounds entirely.
Michael

One thing to keep in mind is that version freezing is only for application builders; that information probably needs to be included in whatever educates about the PVP vs. reproducible builds.

Linking to the Reddit discussion, the comment with the most upvotes was completely confused about reproducible builds. Here is the commenter's explanation of what a reproducible build is:

You're probably right; Simply stated, I considered "reproducible build" to mean that if there was a package on Hackage that I could cabal install foobar with a given GHC version at some point in time, I would be able to do that for each later point in time (e.g. 1 year later) using the very same GHC version (at least). Isn't that what the PVP was created to accomplish in the first place?

http://www.reddit.com/r/haskell/comments/22jlis/proposal_changes_to_the_pvp/...

On Wed, Apr 9, 2014 at 5:35 PM, Greg Weber
One thing to keep in mind is that version freezing is only for application builders, that information probably needs to be included in information that educates about PVP vs. reproducible build.
Linking to the reddit discussion, a comment that had the most upvotes was completely confused on reproducible builds, here is the commenter's explanation of what a reproducible build is:
You're probably right; Simply stated, I considered "reproducible build" to mean that if there was a package on Hackage that I could cabal install foobar with a given GHC version at some point in time, I would be able to do that for each later point in time (e.g. 1 year later) using the very same GHC version (at least). Isn't that what the PVP was created to accomplish in the first place?
http://www.reddit.com/r/haskell/comments/22jlis/proposal_changes_to_the_pvp/...
He/she doesn't seem to be confused. He/she is just using a different meaning of the term "reproducible" than you and me, i.e. meaning "if it built before it should still build". Such confusions are easily resolved by defining the terms one uses in advance.

On Wed, Apr 9, 2014 at 9:05 AM, Johan Tibell
On Wed, Apr 9, 2014 at 5:35 PM, Greg Weber
wrote: One thing to keep in mind is that version freezing is only for application builders, that information probably needs to be included in information that educates about PVP vs. reproducible build.
Linking to the reddit discussion, a comment that had the most upvotes was completely confused on reproducible builds, here is the commenter's explanation of what a reproducible build is:
You're probably right; Simply stated, I considered "reproducible build" to mean that if there was a package on Hackage that I could cabal install foobar with a given GHC version at some point in time, I would be able to do that for each later point in time (e.g. 1 year later) using the very same GHC version (at least). Isn't that what the PVP was created to accomplish in the first place?
http://www.reddit.com/r/haskell/comments/22jlis/proposal_changes_to_the_pvp/...
He/she doesn't seem to be confused. He/she is just using a different meaning of the term "reproducible" than you and me, i.e. meaning "if it built before it should still build". Such confusions are easily resolved by defining the terms one uses in advance.
Sure, but the commenter did define their exact meaning of reproducible and stated they think that is what the PVP is for. This appears to be the definition of being confused about what the PVP is meant for. There isn't any situation in which the train of thought expressed here is useful in practice, particularly since we know that the PVP does not actually guarantee a package will install. The useful train of thought is to distinguish between applications and libraries and state how dependency freezing is necessary for applications.

On Wed, Apr 9, 2014 at 6:28 PM, Greg Weber
Sure, but the commenter did define their exact meaning of reproducible and stated they think that is what the PVP is for. This appears to be the definition of being confused about what the PVP is meant for.
Here's what the commenter said:

You're probably right; Simply stated, I considered "reproducible build" to mean that if there was a package on Hackage that I could cabal install foobar with a given GHC version at some point in time, I would be able to do that for each later point in time (e.g. 1 year later) using the very same GHC version (at least). Isn't that what the PVP was created to accomplish in the first place?
This definition of reproducible builds is exactly what the PVP guarantees. If you define your dependency bounds as implied by the PVP and the packages you depend on follow the PVP your package will continue to build in the future, if it built today*. That doesn't mean that it will always use the exact same versions that were used today, but the commenter didn't suggest that.
There isn't any situation in which the train of thought expressed here is useful in practice, particularly since we know that the PVP does not actually guarantee a package will install.
I wasn't aware. When doesn't the PVP guarantee that a package will install?
The useful train of thought is to distinguish between applications and libraries and state how dependency freezing is necessary for applications.
This has nothing to do with what's being discussed.

* The caveat about instance/module re-export leaks that was mentioned before still applies.

-- Johan

I'm currently -1 on this myself too, for now.

On Wed, Apr 9, 2014 at 12:03 PM, Henning Thielemann <schlepptop@henning-thielemann.de> wrote:

On 09.04.2014 10:47, Michael Snoyman wrote:

I would like to propose the following changes to the PVP. These are the same changes that I recently published on the Yesod blog[1]. For more information on the motivations behind these changes, please see that blog post.

I see no need to complicate the PVP with distinctions about stable and unstable, published and unpublished packages. My preferred solution is to declare lower and upper version bounds in Build-Depends according to current PVP and ignore these bounds on demand via options that are passed to cabal-install.
There are two concrete needs I'm trying to address:

* From a library maintainer standpoint: decreased maintenance overhead. Being able to say `text < 2` or `case-insensitive < 2` means less time spent fiddling with cabal files.

* From a library user standpoint: it makes it less likely that you'll run into a case where cabal cannot create a build plan. If package foo places a restrictive upper bound on text of `text < 1.1`, and package bar starts using a new feature in text 1.1 and therefore has a bound `text >= 1.1`, the user won't be able to use foo and bar together until the author of foo bumps the version number.

The other need I previously considered for this was testing newer versions of GHC, but the --allow-newer flag in cabal mostly[1] addresses this issue, so I don't think it's pertinent.

Note: this proposal doesn't actually impose a difference in behavior between published and non-published software. The PVP is already clearly not trying to ensure reproducible builds of packages on Hackage. I just want that clarity to extend to non-published code as well, since there seems to be some confusion.

Michael

[1] http://www.reddit.com/r/haskell/comments/22jlis/proposal_changes_to_the_pvp/...
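The foo/bar situation described above, written out as the relevant cabal fragments (package names and version numbers are illustrative):

```
-- foo.cabal: a restrictive PVP-style cap, set before text 1.1 existed.
build-depends: text >= 1.0 && < 1.1

-- bar.cabal: requires a feature introduced in text 1.1.
build-depends: text >= 1.1

-- Any project depending on both foo and bar has no valid install
-- plan: no version of text satisfies < 1.1 and >= 1.1 at once, so
-- cabal cannot resolve dependencies until foo's author relaxes the
-- bound and uploads a new revision.
```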

Hi Michael, On Wed, Apr 09, 2014 at 01:29:18PM +0300, Michael Snoyman wrote:
* From a library maintainer standpoint: decreased maintenance overhead. Being able to say `text < 2` or `case-insensitive < 2` means less time spent fiddling with cabal files.
I think that's mostly a tooling issue. It was my main motivation for writing 'cabal-bounds' (https://github.com/dan-t/cabal-bounds).
* From a library user standpoint: it makes it less likely that you'll run into a case where cabal cannot create a build plan. If package foo places a restrictive upper bound on text of `text < 1.1`, and package bar starts using a new feature in text 1.1 and therefore has a bound `text >= 1.1`, the user won't be able to use foo and bar together until the author of foo bumps the version number.
Yes, that's certainly a problem, but otherwise you also can't be sure that 'text 1.1' didn't introduce a breaking change. So with the new cabal flag '--allow-newer' the user could still try to build with 'text 1.1' and the package could still indicate that it wasn't tried with 'text 1.1', which at the end seems to be the best of both worlds. Greetings, Daniel
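The '--allow-newer' workflow Daniel refers to looks roughly like this (the exact flag spelling is as in recent cabal-install at the time of writing; treat the invocations as illustrative):

```
# Ignore upper bounds on one suspect dependency only:
cabal install --allow-newer=text

# Or ignore all upper bounds (much riskier):
cabal install --allow-newer
```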

On Wed, Apr 9, 2014 at 2:12 PM, Daniel Trstenjak wrote:

Hi Michael,

On Wed, Apr 09, 2014 at 01:29:18PM +0300, Michael Snoyman wrote:
* From a library maintainer standpoint: decreased maintenance overhead. Being able to say `text < 2` or `case-insensitive < 2` means less time spent fiddling with cabal files.

I think that's mostly a tooling issue. It was my main motivation for writing 'cabal-bounds' (https://github.com/dan-t/cabal-bounds).

While that can certainly *help*, the burden is still there. Even something as trivial as "go to dev machine, edit cabal file to say 1.2 instead of 1.1, cabal sdist, cabal upload" takes time. If you want to do it properly (push to Github, wait for Travis to run test suites) it takes even more time.

* From a library user standpoint: it makes it less likely that you'll run into a case where cabal cannot create a build plan. If package foo places a restrictive upper bound on text of `text < 1.1`, and package bar starts using a new feature in text 1.1 and therefore has a bound `text >= 1.1`, the user won't be able to use foo and bar together until the author of foo bumps the version number.

Yes, that's certainly a problem, but otherwise you also can't be sure that 'text 1.1' didn't introduce a breaking change.

With this proposal, you can: we'd establish rules that you *can* depend on certain API subsets for a longer version range. So if I use that subset, I can be certain that 1.1 doesn't introduce a breaking change*.

* Unless of course the maintainer violates the agreement he's made with the users, but that's the same situation we're in right now with PVP upper bounds in general.

So with the new cabal flag '--allow-newer' the user could still try to build with 'text 1.1' and the package could still indicate that it wasn't tried with 'text 1.1', which at the end seems to be the best of both worlds.

There are two downsides to this:

* The goal is to simplify this process for end users. If the answer is "use --allow-newer," we're then (1) expecting end users to know this advice, and (2) we're removing most of the benefits of upper bounds. Like most things, either too much or too little is a bad thing. We need meaningful upper bounds to be enforced.

* --allow-newer conflates two concepts: upper bounds put in place due to a known breakage, and upper bounds put in place due to unknown changes. This isn't news: the fact that there's no such distinction in cabal for this has been brought up in the past. I gave an example of this problem on Reddit[1].

Michael

[1] http://www.reddit.com/r/haskell/comments/22jlis/proposal_changes_to_the_pvp/...

On Wed, Apr 9, 2014 at 10:47 AM, Michael Snoyman
I would like to propose the following changes to the PVP. These are the same changes that I recently published on the Yesod blog[1]. For more information on the motivations behind these changes, please see that blog post.
1. The goal of the PVP needs to be clarified. Its purpose is not to ensure reproducible builds of non-published software, but rather to provide for more reliable builds of libraries on Hackage. Reproducible builds should be handled exclusively through version freezing, the only known technique to actually give the necessary guarantees.
I think this would be fine to add. Since part of the PVP talks about ranges on dependency versions anyway, reproducible builds are out the window from the start. It doesn't hurt to add a sentence or two mentioning this. However, checking the current document, the very first paragraph says: "The goal of a versioning system is to inform clients of a package of changes to that package that might affect them, and to provide a way for clients to specify a particular version or range of versions of a dependency that they are compatible with." It already seems pretty clear to me.
2. Upper bounds should not be included on non-upgradeable packages, such as base and template-haskell (are there others?). Alternatively, we should establish some accepted upper bound on these packages, e.g. many people place base < 5 on their code.
A lot of packages break from upgrades to base and template-haskell. I think removing the upper bounds here will lead to a lot of confusion, so I'm -1 on this one.
3. We should be distinguishing between mostly-stable packages and unstable packages. For a package like text, if you simply import Data.Text (Text, pack, reverse), or some other sane subset, there's no need for upper bounds. (Note that this doesn't provide a hard-and-fast rule like the current PVP, but is rather a matter of discretion. Communication between library authors and users (via documentation or other means) would be vital to making this work well.)
This sounds too vague to be an actual policy, so -1.
4. For a package version A.B.C, a bump in A or B indicates some level of breaking change. As an opt-in approach, package authors are free to associate meaning with A and B beyond what the PVP requires. Libraries which use these packages are free to rely on the guarantees provided by package authors when placing upper bounds. (Note that this is very related to point (3).)
Isn't this already the case? Erik
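As a concrete reading of what the A.B.C scheme already promises (and what point 4 would let authors extend), here is a small sketch of computing the conservative PVP dependency range; the helper names are mine, not part of any real tool:

```haskell
module Main where

import Data.List (intercalate)

-- A version is a list of numeric components, e.g. [1,2,0] for 1.2.0.
type Version = [Int]

-- The PVP's "major version" is the first two components, A.B.
major :: Version -> Version
major = take 2

-- Under the PVP, a breaking change must bump A.B, so a conservative
-- dependency range on version v is: >= v && < nextMajor v.
nextMajor :: Version -> Version
nextMajor v = case major v of
  [a, b] -> [a, b + 1]
  [a]    -> [a, 1]       -- pad unusually short versions
  _      -> [0, 1]

-- Render a range in .cabal syntax, e.g. ">= 1.2.0 && < 1.3".
pvpRange :: Version -> String
pvpRange v = ">= " ++ showV v ++ " && < " ++ showV (nextMajor v)
  where showV = intercalate "." . map show

main :: IO ()
main = putStrLn (pvpRange [1,2,0])
```

Point 4 amounts to letting an author promise, out of band, that the safe upper bound for some API subset is wider than `nextMajor`.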

Might there be benefit to combining 3 and 4, making the stable subset
explicit (perhaps supported by tests so as to be machine checkable)?
_______________________________________________ Libraries mailing list Libraries@haskell.org http://www.haskell.org/mailman/listinfo/libraries

On 09-04-2014 05:47, Michael Snoyman wrote:
Needless to say since I'm mentioned on the blog post, +1. -- Felipe.

In addition to the well-stated objections elsewhere in this thread, I
think this proposal is too vague and subjective. So it gets a -1 from
me.
Point #1 needs to specify actual text. I'm -1 on the text Michael
gives later because I disagree with that definition of reproducible
builds.
Point numbers 3 and 4 are too vague, subjective, and would cause contention in the community. Package authors are already free to associate extra meaning with A and B. If you're willing to put in version bounds after this PVP change, I don't see why you shouldn't be willing to put them in before it.

Hello Michael, While I can see merit in some parts[1] of this proposal, I don't agree with this proposal as a whole. Therefore I'm -1 on the proposal in its current form. [1]: E.g. having different upper-bound policies for the special class of the few non-upgradable packages -- however even that has consequences that need to be examined carefully PS: I'd suggest breaking this proposal up into smaller incremental sub-proposals and discuss them one at a time. On 2014-04-09 at 10:47:14 +0200, Michael Snoyman wrote:
-- "Elegance is not optional" -- Richard O'Keefe

I have to say, the responses on this thread are truly confusing. Let's ignore point (3) of my proposal (since it can essentially be subsumed under (4)). Point (2) is clearly a change in the PVP, and boils down to "users get cabal error messages" or "users get GHC error messages." I understand (though strongly disagree with) those opposed to the change I'm proposing there. So let's ignore it.

For (1) and (4), the responses vary from support, to opposition, to "that's what the PVP already says." So there's clearly a problem here, and I don't think the problem is in my proposal: people have very different ideas of what the PVP actually expects of us.

So forget my proposal for the moment, I want to engage in a thought experiment. What does the current PVP say about the following scenarios:

1. A user is writing an application based on a number of Hackage libraries. He places version bounds following the PVP, e.g. `text >= 1.0 && < 1.1, aeson >= 0.7 && < 0.8`.
   a. Has he done a good enough job of writing his application?
   b. Should he have an expectation that, no matter what happens, his software will always build when running `cabal clean && cabal install` (assuming same GHC version and OS)?
   c. Should he have an expectation that the code will always run in the same way it did when first built?

2. I author a package called foo and release version 1.2 with the statement: "I guarantee that the Foo module will exist and continue to export the foo1 and foo2 functions, with the same type signature, until version 2.0."
   a. If a user of the foo package only uses the foo1 and foo2 functions, is he "in violation" of the PVP by using a bound on the foo package of `foo >= 1.2 && < 2`?
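For scenario 1, bounds like `text >= 1.0 && < 1.1` behave as ordinary lexicographic comparisons on version components. A simplified model (function names hypothetical, ignoring cabal's extra version normalization) of how the solver tests a candidate version:

```haskell
module Main where

-- A version as a list of numeric components, e.g. [1,0,0,2] for 1.0.0.2.
type Version = [Int]

-- Does v satisfy `>= lo && < hi`? The derived Ord on lists is
-- lexicographic, which matches how version components compare.
inRange :: Version -> Version -> Version -> Bool
inRange lo hi v = v >= lo && v < hi

main :: IO ()
main = print (inRange [1,0] [1,1] [1,0,0,2])  -- text-1.0.0.2 vs >= 1.0 && < 1.1
```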

On Wed, Apr 9, 2014 at 9:22 PM, Michael Snoyman
So forget my proposal for the moment, I want to engage in a thought experiment. What does the current PVP say about the following scenarios:
1. A user is writing an application based on a number of Hackage libraries. He places version bounds following the PVP, e.g. `text >= 1.0 && < 1.1, aeson >= 0.7 && < 0.8`. a. Has he done a good enough job of writing his application?
That's a kinda vague question. :) Has he made sure that his package will continue to build if there are backwards incompatible changes in text or aeson? Yes.
b. Should he have an expectation that, no matter what happens, his software will always build when running `cabal clean && cabal install` (assuming same GHC version and OS)?
"No matter what happens" is also kinda vague. Assuming no bugs in GHC, Cabal, etc and assuming he's using a hermetic build environment (such as sandboxes), yes. The reason a hermetic build environment is needed is that otherwise unrelated packages might constrain the build environment (i.e. the infamous "building X will break Y" problems of the past.)
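The hermetic build environment Johan mentions is what cabal sandboxes (cabal-install 1.18+) provide; a rough outline of the per-project workflow:

```
# Per-project package database, isolated from the user package-db,
# so unrelated installs can't break this project's build plan:
cabal sandbox init
cabal install --only-dependencies
cabal build
```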
c. Should he have an expectation that the code will always run in the same way it did when first built?
No. text or aeson might introduce bugs in minor/patch releases. The code might run differently on different platforms (e.g. due to different word sizes).
2. I author a package called foo and release version 1.2 with the statement: "I guarantee that the Foo module will exist and continue to export the foo1 and foo2 functions, with the same type signature, until version 2.0." a. If a user of the foo package only uses the foo1 and foo2 functions, is he "in violation" of the PVP by using a bound on the foo package of `foo
>= 1.2 && < 2`?
No. -- Johan

On 09.04.2014 21:55, Johan Tibell wrote:
On Wed, Apr 9, 2014 at 9:22 PM, Michael Snoyman wrote: So forget my proposal for the moment, I want to engage in a thought experiment. What does the current PVP say about the following scenarios:
1. A user is writing an application based on a number of Hackage libraries. He places version bounds following the PVP, e.g. `text >= 1.0 && < 1.1, aeson >= 0.7 && < 0.8`. a. Has he done a good enough job of writing his application?
That's a kinda vague question. :) Has he made sure that his package will continue to build if there are backwards incompatible changes in text or aeson? Yes.
With these version bounds he does not need to expect incompatible changes in "text" or "aeson"; he must only be prepared for additions, i.e. he must import explicitly or with qualification.
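Henning's discipline of importing explicitly or qualified is what makes minor-version *additions* safe. A minimal illustration using base's Data.List; the clash described in the comment is the kind that a new export in a minor release can cause:

```haskell
module Main where

-- Importing only the names we use means new exports added in a minor
-- release of the dependency cannot clash with our own definitions.
import Data.List (sortBy)
import Data.Ord (comparing)

-- Our own helper. Had we written a bare `import Data.List`, a later
-- release of base exporting its own sortOn would make every use of
-- the name `sortOn` below an ambiguity error.
sortOn :: Ord b => (a -> b) -> [a] -> [a]
sortOn f = sortBy (comparing f)

main :: IO ()
main = print (sortOn negate [1,3,2])
```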

Here's my view.
1a. I don't know how to answer this question because I have no idea
what "good enough job" means.
1b. Unequivocally yes, we should do our best to support this
expectation to the best of our ability. However we can't state it as
strongly as "no matter what happens". I think we can say "as long as
all of his dependencies are well-behaved, the package should be pretty
likely to build..."
1c. If "always run in the same way" means that it will always be built
with the same set of transitive dependencies, then no.
2a is a bit stickier. I want to say yes, but right now I'll leave
myself open to convincing. My biggest concern in this whole debate is
that users of the foo package should specify some upper bound, because
as Greg has said, if you don't, the probability that the package
builds goes to ZERO (not epsilon) as t goes to infinity. Personally I
like the safe and conservative upper bound of <1.3 because I think
practically it's difficult to make more granular contracts work in
practice and it follows the PVP's clear meaning. If you're committing
to support the same API up to 2.0, why can't you just commit to
supporting that API up to 1.3? The author of foo still has the
flexibility to jump to 2.0 to signal something to the users, and when
that happens, the users can change their bounds appropriately.

On Wed, Apr 9, 2014 at 10:57 PM, MightyByte
Here's my view.
1a. I don't know how to answer this question because I have no idea what "good enough job" means. 1b. Unequivocally yes, we should do our best to support this expectation to the best of our ability. However we can't state it as strongly as "no matter what happens". I think we can say "as long as all of his dependencies are well-behaved, the package should be pretty likely to build..."
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:

* Typeclass instances leaking from transitive dependencies.
* Module re-exports leaking from transitive dependencies.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
* The package you depend on doesn't itself follow the PVP, or so on down the stack.

So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
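Version freezing, the alternative Michael points to, is expressed in cabal-install via a cabal.config file next to the .cabal file (newer cabal-install can generate one with `cabal freeze`); the package versions below are purely illustrative:

```
-- cabal.config: pins every dependency, including transitive ones,
-- to a single known-good version for reproducible builds.
constraints: text ==1.1.0.0,
             aeson ==0.7.0.4,
             attoparsec ==0.11.2.1
```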
1c. If "always run in the same way" means that it will always be built with the same set of transitive dependencies, then no. 2a is a bit stickier. I want to say yes, but right now I'll leave myself open to convincing. My biggest concern in this whole debate is that users of the foo package should specify some upper bound, because as Greg has said, if you don't, the probability that the package builds goes to ZERO (not epsilon) as t goes to infinity. Personally I like the safe and conservative upper bound of <1.3 because I think practically it's difficult to make more granular contracts work in practice and it follows the PVP's clear meaning. If you're committing to support the same API up to 2.0, why can't you just commit to supporting that API up to 1.3? The author of foo still has the flexibility to jump to 2.0 to signal something to the users, and when that happens, the users can change their bounds appropriately.
This sounds more like a personal opinion than an interpretation of the current PVP. I find it troubling that we're holding up the PVP as the standard that all packages should adhere to, and yet it's hard to get an answer on something like this.

The point of giving a guarantee to 2.0 is that it involves less package churn, which is a maintenance burden for developers, and removes extra delays waiting for maintainers to bump version bounds, which can lead to Hackage bifurcation.

But again, my real question is: what does the PVP say right now? We can't even have a real discussion of my initial proposal if no one can agree on what the PVP says about that situation right now!

Michael
I have to say, the responses on this thread are truly confusing. Let's ignore point (3) of my proposal (since it can essentially be subsumed under (4)). Point (2) is clearly a change in the PVP, and boils down to "users get cabal error messages" or "users get GHC error messages." I understand (though strongly disagree with) those opposed to the change I'm proposing there. So let's ignore it.
For (1) and (4), the responses vary from support, to opposition, to "that's what the PVP already says." So there's clearly a problem here, and I don't think the problem is in my proposal: people have very different ideas of what the PVP actually expects of us.
So forget my proposal for the moment, I want to engage in a thought experiment. What does the current PVP say about the following scenarios:
1. A user is writing an application based on a number of Hackage
He places version bounds following the PVP, e.g. `text >= 1.0 && < 1.1, aeson >= 0.7 && < 0.8`. a. Has he done a good enough job of writing his application? b. Should he have an expectation that, no matter what happens, his software will always build when running `cabal clean && cabal install` (assuming same GHC version and OS)? c. Should he have an expectation that the code will always run in the same way it did when first built? 2. I author a package called foo and release version 1.2 with the statement: "I guarantee that the Foo module will exist and continue to export the foo1 and foo2 functions, with the same type signature, until version 2.0." a. If a user of the foo package only uses the foo1 and foo2 functions, is he "in violation" of the PVP by using a bound on the foo package of `foo
= 1.2 && < 2`?
On Wed, Apr 9, 2014 at 11:47 AM, Michael Snoyman
wrote: I would like to propose the following changes to the PVP. These are the same changes that I recently published on the Yesod blog[1]. For more information on the motivations behind these changes, please see that
blog
post.
1. The goal of the PVP needs to be clarified. Its purpose is not to ensure reproducible builds of non-published software, but rather to provide for more reliable builds of libraries on Hackage. Reproducible builds should be handled exclusively through version freezing, the only known technique to actually give the necessary guarantees.
2. Upper bounds should not be included on non-upgradeable packages, such as base and template-haskell (are there others?). Alternatively, we should establish some accepted upper bound on these packages, e.g. many people place base < 5 on their code.
3. We should be distinguishing between mostly-stable packages and unstable packages. For a package like text, if you simply import Data.Text (Text, pack, reverse), or some other sane subset, there's no need for upper bounds. (Note that this doesn't provide a hard-and-fast rule like the current PVP, but is rather a matter of discretion. Communication between library authors and users (via documentation or other means) would be vital to making this work well.)
4. For a package version A.B.C, a bump in A or B indicates some level of breaking change. As an opt-in approach, package authors are free to associate meaning to A and B beyond what the PVP requires. Libraries which use these packages are free to rely on the guarantees provided by package authors when placing upper bounds. (Note that this is very related to point (3).)
On Wed, Apr 9, 2014 at 3:22 PM, Michael Snoyman wrote:
Discussion period: 3 weeks.
[1] http://www.yesodweb.com/blog/2014/04/proposal-changes-pvp
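Point (3) can be made concrete with a small Haskell sketch. It touches only a tiny, historically stable slice of the text API (Text, pack, unpack, reverse), which is exactly the kind of usage the proposal argues shouldn't need an upper bound (this assumes the text package is installed; it ships with recent GHCs):

```haskell
-- Sketch for point (3): depend only on a small, stable subset of text.
-- Text, pack, unpack and reverse have kept the same types across many
-- text releases, which is what makes the no-upper-bound argument possible.
import Data.Text (Text, pack, unpack)
import qualified Data.Text as T

backwards :: Text -> Text
backwards = T.reverse

main :: IO ()
main = putStrLn (unpack (backwards (pack "abc")))  -- prints "cba"
```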
_______________________________________________ Libraries mailing list Libraries@haskell.org http://www.haskell.org/mailman/listinfo/libraries

On Wed, Apr 9, 2014 at 11:13 PM, Michael Snoyman
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
* Typeclass instance leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
We're aware of these two issues and the PVP page mentions one of them. It should also mention the other together with a workaround. These issues seem rare however. I've never run into them.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
Nothing will help you in this case as a library author, as you cannot freeze your deps in your .cabal file, which would be the only option if you're worried about this.
* The package you depend on doesn't itself follow the PVP, or so on down the stack.
Sure. You don't get benefits from the PVP if people don't follow the PVP. Your second sentence suggests that there are other failure modes; I'd love to know about them so we can discuss them. As I see it now there are two real failure modes (the first two you listed), one we can fix with tooling (make sure people follow the PVP), and one I don't think is worth caring about (accidental uploads of broken stuff).
So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
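As a hedged sketch of what freezing looks like in practice (the version numbers here are invented): cabal-install reads a cabal.config file next to the .cabal file, and a constraints block pinning exact versions fixes the whole install plan. (A dedicated `cabal freeze` command to generate this file was only just being added to cabal-install around the time of this thread.)

```cabal
-- cabal.config: pin every dependency to the exact versions the last
-- known-good build used (version numbers below are invented examples).
constraints: text ==1.1.0.0,
             aeson ==0.7.0.4,
             attoparsec ==0.11.2.1
```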
It provides the guarantee with the exception of your first two points, which don't worry me much as I've never seen them happen outside the Yesod ecosystem (i.e. they seem rare.) Version freezing is orthogonal to the stuff the PVP talks about so I think we will just confuse users by talking about it in the PVP. Perhaps you could put together a little "how to build stuff" guide and raise awareness of the issue that way? -- Johan

On Thu, Apr 10, 2014 at 12:30 AM, Johan Tibell
On Wed, Apr 9, 2014 at 11:13 PM, Michael Snoyman
wrote: And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
* Typeclass instance leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
We're aware of these two issues and the PVP page mentions one of them. It should also mention the other together with a workaround. These issues seem rare however. I've never run into them.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
Nothing will help you in this case as a library author, as you cannot freeze your deps in your .cabal file, which would be the only option if you're worried about this.
* The package you depend on doesn't itself follow the PVP, or so on down the stack.
Sure. You don't get benefits from the PVP if people don't follow the PVP.
Your second sentence suggests that there are other failure modes; I'd love to know about them so we can discuss them.
The point is that we don't know. The typeclass/reexport business went undocumented until a few weeks ago, which took at least a few people by surprise big time. Maybe those are the only issues that remain in the PVP.
As I see it now there are two real failure modes (the first two you listed), one we can fix with tooling (make sure people follow the PVP), and one I don't think is worth caring about (accidental uploads of broken stuff).
But as has been mentioned elsewhere, the accidental uploads is far worse than it seems at first, since cabal can backtrack and continue using the bad version! If I upload foo-1.0.0.1 that mistakenly says it works with bar 1.1, and then issue a point release foo-1.0.0.2 that puts the upper bound back on bar, cabal will no longer get any PVP upper bound benefits, since it will simply try to use foo-1.0.0.1. And if we're OK saying we'll retroactively change foo-1.0.0.1, we can just as easily retroactively change packages to add in upper bounds.
So my point is: even though the *goal* of the PVP is to provide this
guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
It provides the guarantee with the exception of your first two points, which don't worry me much as I've never seen them happen outside the Yesod ecosystem (i.e. they seem rare.) Version freezing is orthogonal to the stuff the PVP talks about so I think we will just confuse users by talking about it in the PVP. Perhaps you could put together a little "how to build stuff" guide and raise awareness of the issue that way?
I think it's obvious that no amendment to the text of the PVP will be accepted by this list, so educating users that they're using their tools incorrectly clearly won't be happening on that page. Michael

On 10/04/2014 05:30, Michael Snoyman wrote:
* Module reexports leaking from transitive dependencies.
Shouldn't we just be saying "don't reexport entire modules from other packages"? Is there a scenario where this is useful? One scenario I can see is perhaps inside groups of packages maintained by the same author or in the same source tree, but then the author can bump all the packages in sync if necessary.
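A toy single-file illustration of the mechanism under discussion (re-exporting an entire module; here Data.Char from base stands in for a module from another package):

```haskell
-- Main re-exports the whole Data.Char module via the `module Data.Char`
-- export item. A library doing this with a module from another package
-- leaks that package's entire API to its own users, which is why
-- whole-module re-exports interact badly with PVP-style bounds.
module Main (module Data.Char, main) where

import Data.Char

main :: IO ()
main = putStrLn (map toUpper "ok")  -- prints "OK"
```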
I think it's obvious that no amendment to the text of the PVP will be accepted by this list, so educating users that they're using their tools incorrectly clearly won't be happening on that page.
Didn't Johan get an amendment agreed a few weeks ago? I think your current amendments will have difficulty because they are based on premises that many people disagree with, but that doesn't mean that no amendments at all are possible. Cheers, Ganesh

On Thu, Apr 10, 2014 at 9:19 AM, Ganesh Sittampalam
On 10/04/2014 05:30, Michael Snoyman wrote:
* Module reexports leaking from transitive dependencies.
Shouldn't we just be saying "don't reexport entire modules from other packages"? Is there a scenario where this is useful? One scenario I can see is perhaps inside groups of packages maintained by the same author or in the same source tree, but then the author can bump all the packages in sync if necessary.
I think it's obvious that no amendment to the text of the PVP will be accepted by this list, so educating users that they're using their tools incorrectly clearly won't be happening on that page.
Didn't Johan get an amendment agreed a few weeks ago? I think your current amendments will have difficulty because they are based on premises that many people disagree with, but that doesn't mean that no amendments at all are possible.
I should have clarified: no amendment that points out flaws in the PVP. My premise is simple: the PVP is a useful tool, but does not address all cases. Since people seem to mistakenly believe that it will protect them from all build problems, the text should be amended to make that clear. Every attempt I've made to come up with text that is acceptable to this list has been met with resistance. If someone else can come up with a modification that is acceptable, great. But I'm not going to continue trying, and will instead try to inform people through other channels that they need to use something more than the PVP if they want reproducible builds. Michael

On 10/04/2014 07:24, Michael Snoyman wrote:
Didn't Johan get an amendment agreed a few weeks ago? I think your current amendments will have difficulty because they are based on premises that many people disagree with, but that doesn't mean that no amendments at all are possible.
I should have clarified: no amendment that points out flaws in the PVP. My premise is simple: the PVP is a useful tool, but does not address all cases. Since people seem to mistakenly believe that it will protect them from all build problems, the text should be amended to make that clear. Every attempt I've made to come up with text that is acceptable to this list has been met with resistance. If someone else can come up with a modification that is acceptable, great. But I'm not going to continue trying, and will instead try to inform people through other channels that they need to use something more than the PVP if they want reproducible builds.
The problem with instance removal is already documented in section 2.3. The "Rationale" section is perhaps slightly inaccurate in that it says "and tells a client how to write a dependency that means their package will not try to compile against an incompatible dependency". Perhaps we could just change "means" to "in most cases means"? I think the more general communication that the PVP isn't X where it doesn't explicitly claim to be X is indeed best kept to other channels in order to keep the whole thing brief. Ganesh

On Thu, Apr 10, 2014 at 8:24 AM, Michael Snoyman
On Thu, Apr 10, 2014 at 9:19 AM, Ganesh Sittampalam
wrote: On 10/04/2014 05:30, Michael Snoyman wrote: I think it's obvious that no amendment to the text of the PVP will be accepted by this list, so educating users that they're using their tools incorrectly clearly won't be happening on that page.
Didn't Johan get an amendment agreed a few weeks ago? I think your current amendments will have difficulty because they are based on premises that many people disagree with, but that doesn't mean that no amendments at all are possible.
I should have clarified: no amendment that points out flaws in the PVP. My premise is simple: the PVP is a useful tool, but does not address all cases. Since people seem to mistakenly believe that it will protect them from all build problems, the text should be amended to make that clear. Every attempt I've made to come up with text that is acceptable to this list has been met with resistance. If someone else can come up with a modification that is acceptable, great. But I'm not going to continue trying, and will instead try to inform people through other channels that they need to use something more than the PVP if they want reproducible builds.
There is already text describing flaws in the PVP (e.g. the aforementioned instance leak and issues with adding new modules). We should add something about module exports. What the PVP page doesn't address, and I don't think it should address, are orthogonal issues i.e. issues related to reproducible builds*. * Here using the meaning: building using exactly the same bits.

On 10 April 2014 16:19, Ganesh Sittampalam
On 10/04/2014 05:30, Michael Snoyman wrote:
* Module reexports leaking from transitive dependencies.
Shouldn't we just be saying "don't reexport entire modules from other packages"? Is there a scenario where this is useful? One scenario I can see is perhaps inside groups of packages maintained by the same author or in the same source tree, but then the author can bump all the packages in sync if necessary.
Re-exporting modules like Control.Applicative for parsing libraries? Admittedly, this is for a module in base rather than a third-party library.
I think it's obvious that no amendment to the text of the PVP will be accepted by this list, so educating users that they're using their tools incorrectly clearly won't be happening on that page.
Didn't Johan get an amendment agreed a few weeks ago? I think your current amendments will have difficulty because they are based on premises that many people disagree with, but that doesn't mean that no amendments at all are possible.
Cheers,
Ganesh
-- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com http://IvanMiljenovic.wordpress.com

On 10/04/2014 07:30, Ivan Lazar Miljenovic wrote:
On 10 April 2014 16:19, Ganesh Sittampalam
wrote: On 10/04/2014 05:30, Michael Snoyman wrote:
* Module reexports leaking from transitive dependencies.
Shouldn't we just be saying "don't reexport entire modules from other packages"? Is there a scenario where this is useful? One scenario I can see is perhaps inside groups of packages maintained by the same author or in the same source tree, but then the author can bump all the packages in sync if necessary.
Re-exporting modules like Control.Applicative for parsing libraries?
I'd expect users to import Control.Applicative directly in that case, but I haven't surveyed what existing parsing libraries do in practice. Thinking about this more, as a user of a package Y I think I'd be confused if it re-exported symbols that are defined in package X at all even if done with an explicit export list rather than an entire module. It would take some effort to check if they are actually the same as those in X and therefore won't cause clashes if I also import them from X directly for other purposes. It would be interesting to check how common this is, I guess by scanning hackage. Cheers, Ganesh

Am 10.04.2014 08:43, schrieb Ganesh Sittampalam:
Thinking about this more, as a user of a package Y I think I'd be confused if it re-exported symbols that are defined in package X at all even if done with an explicit export list rather than an entire module.
It would take some effort to check if they are actually the same as those in X and therefore won't cause clashes if I also import them from X directly for other purposes.
I think we cannot rely on the fact that A.X and B.X identify the same entity. It may be true today, but may change in future. Thus I would also not recommend re-exporting single identifiers from other packages.

On 2014-04-10 at 08:19:10 +0200, Ganesh Sittampalam wrote:
On 10/04/2014 05:30, Michael Snoyman wrote:
* Module reexports leaking from transitive dependencies.
Shouldn't we just be saying "don't reexport entire modules from other packages"? Is there a scenario where this is useful? One scenario I can see is perhaps inside groups of packages maintained by the same author or in the same source tree, but then the author can bump all the packages in sync if necessary.
Btw, if one does re-export entire modules (w/o filtering via import/export lists), you simply have to put an upper bound at the minor-version level of the package you're re-exporting. Maybe a note making that more explicit would be sensible (IMO it's actually just an implication of the current PVP but making it more obvious wouldn't hurt).
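Herbert's point can be written down as a .cabal fragment (the package and version numbers are illustrative): with a whole-module re-export, even a minor (C-level) bump of the re-exported package can add exports and thus change your own API, so the bound has to close at the minor level rather than the usual major level:

```cabal
-- If this library re-exports Data.Text wholesale, a text minor bump
-- can add exports and thereby change this library's own API, so cap
-- at the minor version instead of the usual `< 1.2` major-level bound.
build-depends: text >= 1.1.0 && < 1.1.1
```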

On Thu, Apr 10, 2014 at 6:30 AM, Michael Snoyman
But as has been mentioned elsewhere, the accidental uploads is far worse than it seems at first, since cabal can backtrack and continue using the bad version! If I upload foo-1.0.0.1 that mistakenly says it works with bar 1.1, and then issue a point release foo-1.0.0.2 that puts the upper bound back on bar, cabal will no longer get any PVP upper bound benefits, since it will simply try to use foo-1.0.0.1.
Isn't this fixed to a large degree by deprecating the bad version on hackage? I also hope the library maintainer would release foo-1.0.0.3 that properly builds with bar 1.1. This can still cause issues, but I think it should be solved with tooling.

On 4/9/14, 5:13 PM, Michael Snoyman wrote:
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
* Typeclass instance leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
* The package you depend on doesn't itself follow the PVP, or so on down the stack.
So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
Along the same lines I am concerned about the expectation of security provided by SSL. As the recent Heartbleed bug shows, we have an expectation that we have security, but this may fail in practice. As such, even though the *goal* of SSL is to provide this guarantee, it *doesn't* provide this guarantee, and furthermore it is a pain to comply with, certificates are expensive, etc. Since we have a clear alternative that does provide this guarantee (one-time-pads coupled with dead drops and a system of code phrases), I think we should make it clear that public key encryption does not solve all problems, and other techniques should be used. Cheers, Gershom

On Thu, Apr 10, 2014 at 2:51 AM, Gershom Bazerman
On 4/9/14, 5:13 PM, Michael Snoyman wrote:
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
* Typeclass instance leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
* The package you depend on doesn't itself follow the PVP, or so on down the stack.
So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
Along the same lines I am concerned about the expectation of security provided by SSL. As the recent Heartbleed bug shows, we have an expectation that we have security, but this may fail in practice. As such, even though the *goal* of SSL is to provide this guarantee, it *doesn't* provide this guarantee, and furthermore it is a pain to comply with, certificates are expensive, etc. Since we have a clear alternative that does provide this guarantee (one-time-pads coupled with dead drops and a system of code phrases), I think we should make it clear that public key encryption does not solve all problems, and other techniques should be used.
That's a flawed analogy, since:
1. I never suggested not using the PVP.
2. The *additional* tool I recommended (version freezing) is going to be cheap to use, since it's included in the tool everyone's already using (cabal).
3. Freezing has additional benefits not covered by the PVP, which we should be encouraging users to take advantage of anyway.
A better analogy would be: Given that SSL has some flaws:
* Possible buggy implementation
* Possible MITM attack
* Possible untrustworthy server
It's best not to send passwords over SSL in plaintext. Therefore, we recommend that when sending passwords, you use a client-side hashing scheme to avoid sending sensitive data to the server. This transport should still occur over SSL. Michael

That analogy has a flaw, CAs :), the MITM to rule them all; one-time pads are still needed to resolve that.
-2
On Thu, Apr 10, 2014 at 12:33 AM, Michael Snoyman
On Thu, Apr 10, 2014 at 2:51 AM, Gershom Bazerman
wrote: On 4/9/14, 5:13 PM, Michael Snoyman wrote:
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
* Typeclass instance leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
* The package you depend on doesn't itself follow the PVP, or so on down the stack.
So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
Along the same lines I am concerned about the expectation of security provided by SSL. As the recent Heartbleed bug shows, we have an expectation that we have security, but this may fail in practice. As such, even though the *goal* of SSL is to provide this guarantee, it *doesn't* provide this guarantee, and furthermore it is a pain to comply with, certificates are expensive, etc. Since we have a clear alternative that does provide this guarantee (one-time-pads coupled with dead drops and a system of code phrases), I think we should make it clear that public key encryption does not solve all problems, and other techniques should be used.
That's a flawed analogy, since:
1. I never suggested not using the PVP. 2. The *additional* tool I recommended (version freezing) is going to be cheap to use, since it's included in the tool everyone's already using (cabal). 3. Freezing has additional benefits not covered by the PVP, which we should be encouraging users to take advantage of anyway.
A better analogy would be: Given that SSL has some flaws:
* Possible buggy implementation
* Possible MITM attack
* Possible untrustworthy server
It's best not to send passwords over SSL in plaintext. Therefore, we recommend that when sending passwords, you use a client-side hashing scheme to avoid sending sensitive data to the server. This transport should still occur over SSL.
Michael

On Thu, Apr 10, 2014 at 12:33 AM, Michael Snoyman
On Thu, Apr 10, 2014 at 2:51 AM, Gershom Bazerman
wrote: On 4/9/14, 5:13 PM, Michael Snoyman wrote:
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
* Typeclass instance leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
* The package you depend on doesn't itself follow the PVP, or so on down the stack.
So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
Along the same lines I am concerned about the expectation of security provided by SSL. As the recent Heartbleed bug shows, we have an expectation that we have security, but this may fail in practice. As such, even though the *goal* of SSL is to provide this guarantee, it *doesn't* provide this guarantee, and furthermore it is a pain to comply with, certificates are expensive, etc. Since we have a clear alternative that does provide this guarantee (one-time-pads coupled with dead drops and a system of code phrases), I think we should make it clear that public key encryption does not solve all problems, and other techniques should be used.
That's a flawed analogy, since:
It's a perfect analogy because it uses your EXACT same wording. You just don't seem to want to see it.
1. I never suggested not using the PVP.
Neither did he. Your words could be interpreted as suggesting not using the PVP just as easily as you interpreted his words as suggesting not using SSL.
2. The *additional* tool I recommended (version freezing) is going to be cheap to use, since it's included in the tool everyone's already using (cabal).
The additional tool you recommended actually is completely orthogonal to the original issue just as one-time-pads and dead drops are completely orthogonal to the goal of SSL. Multiple people have mentioned this over and over again in this thread but you don't seem to be listening.
3. Freezing has additional benefits not covered by the PVP, which we should be encouraging users to take advantage of anyway.
One-time-pads have additional benefits not covered by SSL, which (if the cost-benefit is acceptable) we should be encouraging users to take advantage of anyway.
On Thu, Apr 10, 2014 at 12:33 AM, Michael Snoyman
On Thu, Apr 10, 2014 at 2:51 AM, Gershom Bazerman
wrote: On 4/9/14, 5:13 PM, Michael Snoyman wrote:
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
* Typeclass instance leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
* The package you depend on doesn't itself follow the PVP, or so on down the stack.
So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
Along the same lines I am concerned about the expectation of security provided by SSL. As the recent Heartbleed bug shows, we have an expectation that we have security, but this may fail in practice. As such, even though the *goal* of SSL is to provide this guarantee, it *doesn't* provide this guarantee, and furthermore it is a pain to comply with, certificates are expensive, etc. Since we have a clear alternative that does provide this guarantee (one-time-pads coupled with dead drops and a system of code phrases), I think we should make it clear that public key encryption does not solve all problems, and other techniques should be used.
That's a flawed analogy, since:
1. I never suggested not using the PVP. 2. The *additional* tool I recommended (version freezing) is going to be cheap to use, since it's included in the tool everyone's already using (cabal). 3. Freezing has additional benefits not covered by the PVP, which we should be encouraging users to take advantage of anyway.
A better analogy would be: Given that SSL has some flaws:
* Possible buggy implementation
* Possible MITM attack
* Possible untrustworthy server
It's best not to send passwords over SSL in plaintext. Therefore, we recommend that when sending passwords, you use a client-side hashing scheme to avoid sending sensitive data to the server. This transport should still occur over SSL.
Michael

Hi Michael,
The *additional* tool I recommended (version freezing) is going to be cheap to use [and] has additional benefits not covered by the PVP, which we should be encouraging users to take advantage of anyway.
my understanding is that "version freezing" means to over-specify the restrictions on build inputs, i.e. to require that dependencies exist in a specific version instead of any version that lies in a given version range. If I misunderstood what you mean, then please correct me!

My experience with version freezing (over-specified dependency restrictions) is that it invariably leads to a situation where packages A and B mutually exclude each other because they require C==1.0.0.1 and C==1.0.0.2, respectively. This might be a lesser problem for developers hacking away in their project-local Cabal sandboxes, but for people who try to maintain a consistent package set that's used to distribute binary packages to their users, this is a nightmare, because our lives become significantly more complicated if we have to keep several versions of the same packages around -- especially if those packages are near the root of the dependency tree.

In fact, your habit of doing that has eventually led NixOS to the development of jailbreak-cabal [1], a tool that automatically removes all dependency restrictions from a Cabal file to undo the "version freeze", and I dare say that the vast majority of build problems we run into while trying to upgrade a package can be solved by running that tool.

So, if a "version freeze" really is what I think it is, then I can't say that I like the idea of encouraging other people to pick up that habit.

Just my 2 cents, Peter

On Fri, Apr 11, 2014 at 10:57 AM, Peter Simons
Hi Michael,
The *additional* tool I recommended (version freezing) is going to be cheap to use [and] has additional benefits not covered by the PVP, which we should be encouraging users to take advantage of anyway.
my understanding is that "version freezing" means to over-specify the restrictions on build inputs, i.e. to require that dependencies exist in a specific version instead of any version that lies in a given version range. If I misunderstood what you mean, then please correct me!
Version freezing is for application developers, not library distributors like yourself.

Hi Greg,
Version freezing is for application developers, not library distributors like yourself.
distributions ship both libraries and applications, and the applications are built with the set of libraries available within the distribution, of course. Because of this, version freezing in applications affects distributors very much, because we have to include the dependencies in exactly those versions that the application developer fancied -- instead of being able to choose from a range of versions that are convenient for the purposes of the distribution.

I have packaged Haskell libraries and applications in ArchLinux and NixOS for the last 5+ years, and please believe me when I say that "version freezing" has caused me a lot of trouble in that time -- so much that we've developed tools to automatically undo it.

Best regards, Peter

Distributing applications compiled from source code (maybe you are referring to XMonad?) is an interesting middle ground where there are pros and cons to freezing. By application developer I really mean someone that is only sharing an application with other team members that are developing the application.
On Fri, Apr 11, 2014 at 12:51 PM, Peter Simons
Hi Greg,
Version freezing is for application developers, not library distributors like yourself.
distributions ship both libraries and applications, and the applications are built with the set of libraries available within the distribution, of course. Because of this, version freezing in applications affects distributors very much, because we have to include the dependencies in exactly those versions that the application developer fancied -- instead of being able to choose from a range of versions that are convenient for the purposes of the distribution.
I have packaged Haskell libraries and applications in ArchLinux and NixOS for the last 5+ years, and please believe me when I say that "version freezing" has caused me a lot of trouble in that time -- so much that we've developed tools to automatically undo it.
Best regards, Peter

On Wed, Apr 9, 2014 at 5:13 PM, Michael Snoyman
On Wed, Apr 9, 2014 at 10:57 PM, MightyByte wrote:
Here's my view.
1a. I don't know how to answer this question because I have no idea what "good enough job" means.

1b. Unequivocally yes, we should do our best to support this expectation to the best of our ability. However we can't state it as strongly as "no matter what happens". I think we can say "as long as all of his dependencies are well-behaved, the package should be pretty likely to build..."
And this is where I think the PVP is doing a disservice with its current wording. Users have this expectation, but it doesn't actually hold up in reality. Reasons why it may fail include:
The PVP states: "What is missing from this picture is a policy that tells the library developer how to set their version numbers, and tells a client how to write a dependency that means their package will not try to compile against an incompatible dependency." That's pretty clear to me. The disservice is insisting that a complex gray issue is black and white by using phrases like "no matter what happens".

It's also clear that this is Haskell we're talking about. We value purity, referential transparency, and controlling side effects. When I translate those ideas to the package world I tend to think that if I've gone to the trouble to get code working today, I want to maximize the probability that it will work at any point in the future.
* Typeclass instances leaking from transitive dependencies.
* Module reexports leaking from transitive dependencies.
* Someone made a mistake in an upload to Hackage (yes, that really does happen, and it's not that uncommon).
* The package you depend on doesn't itself follow the PVP, and so on down the stack.
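The first failure mode in that list (instance leakage) can be sketched in a single file. The class and "packages" here are hypothetical, purely for illustration; in reality the two instances would come from two different (transitive) dependencies, so no version bound on your direct dependencies can rule the clash out:

```haskell
-- Sketch of the "typeclass instances leaking" failure mode.
-- Imagine Pretty lives in package A, and a transitive dependency B
-- provides this instance. All names here are hypothetical.

class Pretty a where
  pretty :: a -> String

-- "Package B" exports:
instance Pretty Int where
  pretty n = "Int: " ++ show n

-- If a new minor release of another transitive dependency C also
-- exported:
--
--   instance Pretty Int where
--     pretty = show
--
-- the build would fail with "Duplicate instance declarations",
-- even though no type signature anywhere changed -- which is why
-- PVP-compliant bounds cannot guarantee a successful build.

main :: IO ()
main = putStrLn (pretty (42 :: Int))  -- prints "Int: 42"
```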
So my point is: even though the *goal* of the PVP is to provide this guarantee, it *doesn't* provide this guarantee. Since we have a clear alternative that does provide this guarantee (version freezing), I think we should make it clear that the PVP does not solve all problems, and version freezing should be used.
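For concreteness, the version freezing being advocated here can be expressed as a `cabal.config` file of exact constraints, which cabal-install (from version 1.20, released around the time of this thread) picks up automatically. The package versions below are purely illustrative, not a real freeze of any project:

```cabal
-- cabal.config (illustrative versions only)
constraints: text == 1.1.0.0,
             bytestring == 0.10.4.0,
             attoparsec == 0.11.2.1
```

With such a file in place, every rebuild uses exactly these versions, which is the reproducibility guarantee that range-based bounds alone cannot give.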
Nowhere does the PVP state a goal of guaranteeing anything, so this is a straw man and a completely invalid point in this discussion. In fact, the wording makes it pretty clear that there are no guarantees.

In a perfect world the PVP would ensure that code that builds today will build for all time while allowing for some variation in build plans to allow small patches and bug fixes to be included in your already working code. Alas, we don't live in that world. But that doesn't mean that we shouldn't try to get as close to that state of affairs as we can.

As others have mentioned, version freezing is a completely orthogonal issue, otherwise we wouldn't even have the notion of bounds to begin with. We would simply have cabal files that specify a single version and be done. The whole point of the PVP is to NOT lock down to a single version. You want to have simple backwards compatible dependency bug fixes automatically work with your code.
1c. If "always run in the same way" means that it will always be built with the same set of transitive dependencies, then no.

2a is a bit stickier. I want to say yes, but right now I'll leave myself open to convincing. My biggest concern in this whole debate is that users of the foo package should specify some upper bound, because as Greg has said, if you don't, the probability that the package builds goes to ZERO (not epsilon) as t goes to infinity. Personally I like the safe and conservative upper bound of < 1.3 because I think practically it's difficult to make more granular contracts work in practice and it follows the PVP's clear meaning. If you're committing to support the same API up to 2.0, why can't you just commit to supporting that API up to 1.3? The author of foo still has the flexibility to jump to 2.0 to signal something to the users, and when that happens, the users can change their bounds appropriately.
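In `.cabal` terms, the conservative style described above would look something like the following. The package name `foo` and its versions come from the discussion; the `base` bound is the common `< 5` convention mentioned earlier in the thread, and the rest is illustrative:

```cabal
-- Sketch of a build-depends section (illustrative)
library
  build-depends: base >= 4.6 && < 5
               , foo  >= 1.2 && < 1.3  -- PVP-conservative: next major is 1.3
```

Under the PVP, any release of foo in the 1.2.x series may add things but not break the API, so this range admits bug fixes while excluding the first version that is allowed to break.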
This sounds more like a personal opinion response than an interpretation of the current PVP. I find it troubling that we're holding up the PVP as the standard that all packages should adhere to, and yet it's hard to get an answer on something like this.
Here's what the PVP says: "When publishing a Cabal package, you should ensure that your dependencies in the build-depends field are accurate. This means specifying not only lower bounds, but also upper bounds on every dependency. At some point in the future, Hackage may refuse to accept packages that do not follow this convention." I'm pretty sure this is why Johan answered "no" to your question of whether it violates the PVP.

So ok, by the letter of the PVP I guess this isn't a violation. But it depends on what point in time we're talking about. If 1.2 is the most recent version, then the PVP points out that having an upper bound of < 1.3 carries a little risk with it. Using an upper bound of 2.0 at that point in time carries a lot more risk because the PVP (the closest thing we have to a standard) allows for breakages within that bound, and the fact is that there's no guarantee the package will ever get to 2.0. If however the currently released version is 2.0 and you have discovered that it breaks your package, then that version bound is fine. So at one point in time `foo >= 1.2 && < 2` is quite risky, but at another it's the right thing.

For a while now I've been advocating a new feature that allows us to specify a bound of
The point of giving a guarantee to 2.0 is that it involves less package churn, which is a maintenance burden for developers, and removes extra delays waiting for maintainers to bump version bounds, which can lead to Hackage bifurcation.
Less package churn? Package churn is solely related to the number of backwards-incompatible changes per unit time and nothing else. I don't care what version numbers you're using or whether you're following the PVP or not. If you're breaking backwards compatibility, then you're churning, pure and simple.

The PVP already gives a guarantee that those functions should stay the same up to 1.3. So a contract with your users that those functions won't change until 2.0 is effectively no different from having a contract that they won't change until 1.3. It's just an arbitrary number. If you haven't changed the API, then don't bump to 1.3.

The only difference comes if you want to change some functions, but not others. And there's simply no good way to draw that distinction barring MUCH better static analysis tools that do the full API check for you.
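The A.B.C reading being argued over above can be pinned down in a few lines. This is a hypothetical helper, not part of any real library, sketching how the PVP classifies a version change:

```haskell
-- Hypothetical helper illustrating the PVP's A.B.C semantics:
-- a change in A or B signals a (potentially) breaking change,
-- a change in C signals a backwards-compatible addition,
-- anything deeper is a patch-level change.

classify :: [Int] -> [Int] -> String
classify old new
  | take 2 old /= take 2 new = "breaking (A or B bumped)"
  | take 3 old /= take 3 new = "non-breaking addition (C bumped)"
  | otherwise                = "patch-level or identical"

main :: IO ()
main = mapM_ putStrLn
  [ classify [1,2,0]   [2,0,0]    -- breaking (A or B bumped)
  , classify [1,2,0]   [1,2,1]    -- non-breaking addition (C bumped)
  , classify [1,2,0,1] [1,2,0,2]  -- patch-level or identical
  ]
```

Point (4) of the proposal is exactly about letting authors promise more than this baseline, e.g. that A alone carries the "breaking" meaning, with B reserved for something weaker.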

On 09/04/2014 09:47, Michael Snoyman wrote:
I would like to propose the following changes to the PVP. These are the same changes that I recently published on the Yesod blog[1]. For more information on the motivations behind these changes, please see that blog post.
1. The goal of the PVP needs to be clarified. Its purpose is not to ensure reproducible builds of non-published software, but rather to provide for more reliable builds of libraries on Hackage. Reproducible builds should be handled exclusively through version freezing, the only known technique to actually give the necessary guarantees.
-1: as discussed in this thread, this seems to be based on a strawman. I am also against any mention of version-freezing in the PVP as I think it is an orthogonal concept.
3. We should be distinguishing between mostly-stable packages and unstable packages. For a package like text, if you simply import Data.Text (Text, pack, reverse), or some other sane subset, there's no need for upper bounds. (Note that this doesn't provide a hard-and-fast rule like the current PVP, but is rather a matter of discretion. Communication between library authors and users (via documentation or other means) would be vital to making this work well.)
-1: I don't want to have to read 20 sets of documentation to set the 20 dependencies in my package.
4. For a package version A.B.C, a bump in A or B indicates some level of breaking change. As an opt-in approach, package authors are free to associate meaning to A and B beyond what the PVP requires. Libraries which use these packages are free to rely on the guarantees provided by package authors when placing upper bounds. (Note that this is very related to point (3).)
-1: this doesn't add anything to the PVP

Ganesh
participants (18)
- Adam Bergmark
- Carter Schonwald
- Daniel Trstenjak
- David Thomas
- Erik Hesselink
- Felipe Lessa
- Ganesh Sittampalam
- Gershom Bazerman
- Greg Weber
- Gregory Collins
- Henning Thielemann
- Herbert Valerio Riedel
- Ivan Lazar Miljenovic
- Johan Tibell
- Michael Snoyman
- MightyByte
- Peter Simons
- Simon Marechal