
Why would a package developer want to upper bound the version number for packages like base? For example, the clash package requires

base >= 4.2 && base <= 4.3

Consequently, it refuses to install with the latest ghc provided with the Haskell Platform (8.0.1). Does this mean that assuming that future versions of the platform will remain backwards compatible with prior versions is unsafe?

Thanks,
Dominick

On Mon, Jun 6, 2016 at 5:02 PM, Dominick Samperi wrote:
Consequently, it refuses to install with the latest ghc provided with the Haskell Platform (8.0.1).

base is not defined by the Platform; it is defined by (and ships with, and must completely match) ghc. And no, backward compatibility is not guaranteed: for a recent example, ghc 7.10 broke many programs by making Applicative a "superclass" of Monad and by generalizing many Prelude functions to Foldable and/or Traversable.

-- brandon s allbery kf8nh (allbery.b@gmail.com), sine nomine associates
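For concreteness, here is a minimal sketch (not from the original mail) of the kind of code the AMP change affected: before GHC 7.10 a Monad instance on its own was accepted, while from 7.10 onwards the Functor and Applicative instances below are mandatory and omitting them is a compile error. The Box type is made up for this illustration.

    module AmpExample where

    newtype Box a = Box a

    instance Functor Box where
      fmap f (Box x) = Box (f x)

    -- Required since GHC 7.10, when Applicative became a superclass of Monad.
    instance Applicative Box where
      pure = Box
      Box f <*> Box x = Box (f x)

    instance Monad Box where
      return = pure
      Box x >>= f = f x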

The odd thing about this is that to upper bound a package that you did
not write (like base) you would have to know that incompatible changes
were coming in subsequent revisions, or that features of the API that
you rely on will be changed. The upper bound makes perfect sense for
packages that you are maintaining. Perhaps the answer to my original
question is that the maintainer of clash is also maintaining (part of)
base?

On Mon, Jun 6, 2016 at 8:19 PM, Dominick Samperi wrote:
The odd thing about this is that to upper bound a package that you did not write (like base) you would have to know that incompatible changes were coming in subsequent revisions, or that features of the API that you rely on will be changed.

There is a versioning policy covering this. It has been found to be necessary because otherwise people who try to build packages find themselves with broken messes, because of the assumption that any future version of a package is guaranteed to be compatible.

I guess what you are saying is that this policy will prevent packages
from installing with new versions of ghc until the maintainer has had
a chance to test the package with the new version, and has updated the
upper version limit. Thus, inserting those upper version limits is a
kind of flag that indicates that the package has been "certified" for
use with versions of base less than or equal to the upper limit.

Aforementioned versioning policy:
https://wiki.haskell.org/Package_versioning_policy

On 2016-06-07 at 02:58:34 +0200, Dominick Samperi wrote:
I guess what you are saying is that this policy will prevent packages from installing with new versions of ghc until the maintainer has had a chance to test the package with the new version, and has updated the upper version limit. Thus, inserting those upper version limits is a kind of flag that indicates that the package has been "certified" for use with versions of base less than or equal to the upper limit.
That's one important aspect. I'm very distrustful of packages whose maintainers declare that their packages have eternal future compatibility (unless they have made this decision *very* carefully based on which parts of the API they use). In general, this runs into the fallacy that successful compilation is equivalent to (semantic) API compatibility, which is only half the story. In some cases one may be lucky to get compile-time warnings (which are often ignored anyway), or explicit run-time errors (which are still undesirable), or even worse silent failures where the code behaves subtly wrong or different than expected. Testsuites mitigate this to some degree, but they too are an imperfect solution to this hard problem.

So another aspect is that the PVP[1] provides an API contract which makes upper bounds possible at all (for packages you don't control). While the PVP doesn't give you a way to know for sure when compatibility breaks, the policy gives you a least upper bound up to which your package is guaranteed (under certain conditions) to remain compatible. Without this contract, you'd have no choice but to constrain your package to versions of dependencies (not under your control) which you were able to empirically certify to be compatible semantically, i.e. versions that were already published.

Unfortunately, GHC's `base` package has a *huge* API surface. So with each GHC release we're usually forced to perform a major version bump to satisfy the PVP, even if only a tiny part of `base`'s API, used by very few packages, became backward-incompatible. This may be addressed by reducing the API surface of `base` by moving infrequently used GHC-internal-ish parts of the API out of base. But there are also other changes which affect many more packages; as already mentioned, the big AMP change was a major breaking point.

Some packages like http://matrix.hackage.haskell.org/package/unordered-containers tend to break every time a new GHC version comes out, partly because they happen to use the low-level parts of the `base` API which tend to change. Ironically, in the case of `unordered-containers`, the maintainer decided to start leaving off (or rather make ineffective) the upper bound on `base` starting with 0.2.2.0, and this turned out to be an error in judgment: each time a new GHC version came out, the very bound that was left out turned out to be necessary. So leaving off upper bounds actually created more work and brought no benefit in the case of `unordered-containers`.

[1]: https://wiki.haskell.org/Package_versioning_policy
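To make the "least upper bound" idea concrete, here is a small sketch (not from the original mail) of a PVP-style dependency declaration; `foo` is a hypothetical package, and the assumption is that the code uses an API first provided in foo-1.2.3:

    -- PVP-style bound: 1.2.3 is the first version providing the needed API,
    -- and 1.3 is the next major version, which is allowed to break it.
    build-depends: foo >= 1.2.3 && < 1.3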

On Monday, June 6, 2016 23:15:00 -0700, Herbert Valerio Riedel wrote:
Unfortunately, GHC's `base` package has a *huge* API surface. So with each GHC release we're usually forced to perform a major version bump to satisfy the PVP, even if only a tiny part of `base`'s API, used by very few packages, became backward-incompatible. This may be addressed by reducing the API surface of `base` by moving infrequently used GHC-internal-ish parts of the API out of base.
1. I hope the compartmenting of `base` will be done soon. The PVP's purpose is subverted by the frequent churning of `base` versions.

2. On /r/haskell I suggested that a package's dependencies be treated as metadata that can be maintained independently of the package itself. For example, if `foo-x.y.z.w` works with dependency `bar-a.b.c.d`, this would be included in `foo`'s .cabal file (the current practice). However, if a new version of `bar` is later found to be compatible, this fact would be stored in the external metadata (without having to update `foo-x.y.z.w`'s .cabal file), and cabal would look at this external metadata when calculating acceptable versions of `bar`. This would avoid *unnecessary* domino updates to packages in cases when their dependencies are updated in a compatible fashion. (A sketch of the idea follows this list.)

3. In general, the granularity of package versions tracked by the PVP should be reconsidered. Ideally, each externally visible function should be tracked separately, which would allow using upward-compatible versions of packages in many more cases than the PVP allows at present. Whether this finer granularity would be worth the added complexity should be given a fair trial. This tracking could be partially automated with a tool to compare the current package version with its update to identify formal changes. Semantic changes would still have to be noted manually.

Howard
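A purely hypothetical sketch (not from the original mail, and not an existing file format) of what the external dependency metadata in point 2 could look like; `foo`, `bar`, and the field names are invented for illustration:

    -- hypothetical metadata record, maintained outside foo-1.0.0.0's .cabal file
    package:                foo-1.0.0.0
    original-bounds:        bar >= 1.2 && < 1.3
    also-known-compatible:  bar-1.3.0, bar-1.3.1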

On Mon, Jun 6, 2016 at 11:15 PM, Herbert Valerio Riedel wrote:
or even worse silent failures where the code behaves subtly wrong or different than expected. Testsuites mitigate this to some degree, but they too are an imperfect solution to this hard problem.
It seems to me that if you have any thought at all for your library's clients the chances of this happening are pretty insignificant.

On June 9, 2016 10:43:00 -0700, David Fox wrote:
It seems to me that if you have any thought at all for your library's clients the chances of this happening are pretty insignificant.
Sadly (IMO), this happens all too frequently. Upward compatibility suffers because most package authors are (naturally) interested in solving their own problems, and they don't get paid to think about others using their package who might be affected by an upward-incompatible change. This is a hard problem to solve unless we can find a way to pay package authors to take the extra time and effort to satisfy their users. Most open source communities are stricter about requiring upward compatibility than the Haskell community is.
Howard

On 2016-06-09 at 19:43:42 +0200, David Fox wrote:
or even worse silent failures where the code behaves subtly wrong or different than expected. Testsuites mitigate this to some degree, but they too are an imperfect solution to this hard problem.
It seems to me that if you have any thought at all for your library's clients the chances of this happening are pretty insignificant.
This is a common argument, and it requires APIs to avoid changing the semantics of existing operations in a non-backward-compatible way, and instead, if an existing operation can't be modified compatibly, to introduce new operations (foo, fooV2, fooV3, ...), effectively versioning at the function level. If we did this consistently, we wouldn't need the PVP to provide us a semantic contract, as upper bounds would only ever be needed/added if somebody broke that eternal compatibility contract.

A variation would be to only allow changing the semantics of existing symbols if the type signature changes in a significant way, thereby indexing/versioning the semantics by type signatures rather than by a numeric API version. In both cases, we could dispose of the PVP, as then we could use the API signature as the contract predicting API compatibility (c.f. Backpack). In the former case, we could get away with lower bounds only, and since *the raison d'être of the PVP is predicting upper version bounds*, again, there would be no reason to follow the PVP anymore.

The PVP is there so I have the means to communicate semantic changes to my libraries' clients in the first place. So while I don't usually deliberately break the API for fun, when I do, I perform a major version increment to communicate this to my clients. In other words, I promise to do my best not to break my library's API until the next major version bump, and to signal API additions via minor version increments. That's the gist of the PVP contract. In addition to version numbers for the cabal meta-data & solver, I typically also provide a Changelog for humans to read, which (at the very least) describes the reasons for minor & major version increments.

If clients of my library choose to deliberately ignore the contract I'm promising to uphold to the best of my abilities (e.g. by leaving off upper bounds), then it's flat-out the client library author's fault that code breaks, for disrespecting the PVP. Certainly not mine, as *I* did follow the rules.
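A tiny Haskell sketch (not from the original mail) of the function-level versioning idea described above; the module, the `frob`/`frobV2` names, and their semantics are all made up for illustration:

    -- Instead of silently changing the behaviour of an existing export, the
    -- new behaviour gets a new name and the old one is kept (and possibly
    -- deprecated), so existing clients keep compiling with unchanged semantics.
    module Data.Frob (frob, frobV2) where

    -- original behaviour, kept for compatibility
    frob :: Int -> Int
    frob n = n + 1

    {-# DEPRECATED frob "Use frobV2, which saturates at 100" #-}

    -- changed behaviour under a new name
    frobV2 :: Int -> Int
    frobV2 n = min 100 (n + 1)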

Given that I'm the maintainer of the 'clash' package, I wanted to say that the 'clash' package has been deprecated in favour of the 'clash-ghc' package (for some time now, and this is stated on hackage). Sadly, 'clash-ghc' will not compile on ghc 8.0.1 right now either; it only compiles against ghc 7.10. I will update the installation instructions on the website and in the haddock documentation to mention this fact. A version of 'clash-ghc' that compiles against 8.0.1 is not due for another month.

If you have any more questions about installing clash, I strongly encourage you to either email me or the clash mailing list (http://groups.google.com/group/clash-language), and not use this mailing list (ghc-devs@haskell.org) for questions about clash.

Regards,
Christiaan

Others have already commented on many aspects of this discussion, but
I just wanted to mention that cabal has an '--allow-newer' flag to
disregard these constraints, so '--allow-newer=base' would allow you
to try and compile this package with GHC 8. Since GHC 8 is very recent
though and base 4.3 is very old, I imagine it won't work. In general I
think many packages haven't been updated for GHC 8 yet.
Erik
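For concreteness, a minimal usage sketch (not from the original mail) of the flag mentioned above, assuming a cabal-install recent enough to support a per-package --allow-newer:

    # relax only the upper bound on base when solving for the clash package
    cabal install clash --allow-newer=base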

On 7 Jun 2016, at 7:02 am, Dominick Samperi wrote:
Why would a package developer want to upper bound the version number for packages like base? For example, the clash package requires
base >= 4.2 && base <= 4.3
I put an upper bound on all my libraries as a proxy for the GHC version. Each time a new GHC version is released sometimes my libraries work with it and sometimes not. I remember a “burning bridges” event in recent history, when the definition of the Monad class changed and broke a lot of things. Suppose you maintain a library that is used by a lot of first year uni students (like gloss). Suppose the next GHC version comes around and your library hasn’t been updated yet because you’re waiting on some dependencies to get fixed before you can release your own. Do you want your students to get a “cannot install on this version” error, or some confusing build error which they don’t understand? The upgrade process could be automated with a buildbot: it would try new versions and automatically bump the upper bound if the regression tests worked, but someone would need to implement it.
Does this mean that assuming that future versions of the platform will remain backwards compatible with prior versions is unsafe?
My experience so far is that new GHC versions are “mostly” backwards compatible, but there are often small details that break library builds anyway. It only takes a day or so per year to fix, so I don’t mind too much. Ben.
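As a concrete illustration (not from the original mail) of using the base bound as a proxy for the GHC version: base 4.8 shipped with GHC 7.10 and base 4.9 with GHC 8.0, so a bound like the following effectively says "tested with GHC 7.10 and 8.0 only":

    build-depends: base >= 4.8 && < 4.10  -- roughly GHC 7.10.x and 8.0.x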

On Tue, Jun 7, 2016 at 9:31 AM, Ben Lippmeier wrote:
Suppose you maintain a library that is used by a lot of first year uni students (like gloss). Suppose the next GHC version comes around and your library hasn’t been updated yet because you’re waiting on some dependencies to get fixed before you can release your own. Do you want your students to get a “cannot install on this version” error, or some confusing build error which they don’t understand?
This is a popular but ultimately silly argument. First, cabal dependency solver error messages are terrible; there's no way a new user would figure out from a bunch of solver output about things like "base-4.7.0.2" and "Dependency tree exhaustively searched" that the solution is to build with an older version of GHC. A configuration error and a build error will both send the same message: "something is broken".

Second, this argument ignores the much more likely case that the package would have just worked with the new GHC, but the upper bound results in an unnecessary (and again, terrible) error message and a bad user experience. The best case is that the user somehow learns about --allow-newer=base, but cabal's error message doesn't even suggest trying this and it's still an unnecessary hoop to jump through.

Experienced users are also only harmed by these upper bounds, since it's generally obvious when a program fails to build due to a change in base, and the normal reaction to a version error with base is just to retry with --allow-newer=base anyway. Of course the best thing is to stick to the part of the language that is unlikely to be broken by future versions of base; sadly this seems to be impossible in the current climate...

Regards,
Reid Barton

On 8 Jun 2016, at 6:19 pm, Reid Barton wrote:
This is a popular but ultimately silly argument. First, cabal dependency solver error messages are terrible; there's no way a new user would figure out from a bunch of solver output about things like "base-4.7.0.2" and "Dependency tree exhaustively searched" that the solution is to build with an older version of GHC.
:-) At least “Dependency tree exhaustively searched” sounds like it’s not the maintainer’s problem. I prefer the complaints to say “can you please bump the bounds on this package” rather than “your package is broken”. Ben.

Right, part of the issue with having dependency solving at the core of your workflow is that you never really know who's to blame. When running into this circumstance, either:

1) Some maintainer made a mistake.
2) Some maintainer did not have perfect knowledge of the future and has not yet updated some upper bounds. Or, upper bounds didn't get retroactively bumped (usual).
3) You're asking cabal to do something that can't be done.
4) There's a bug in the solver.

So the only thing to do is to say "something went wrong". In a way it is similar to type inference: it is difficult to give specific, concrete error messages without making some arbitrary choices about which constraints have gotten pushed around.

I think upper bounds could potentially be made viable by having both hard and soft constraints. Until then, people are putting two meanings into one thing. By having the distinction, I think cabal-install could provide much better errors than it does currently. This has come up before; I'm not sure what came of those discussions. My thoughts on how this would work (see the sketch after this message):

* The dependency solver would prioritize hard constraints, and tell you which soft constraints need to be lifted. I believe the solver even already has this. Stack's integration with the solver will actually first try to get a plan that doesn't override any snapshot versions, by specifying them as hard constraints. If that doesn't work, it tries again with soft constraints.

* "--allow-soft" or something would ignore soft constraints. Ideally this would be selective per package and per upper vs. lower bound.

* It may be worth having the default be "--allow-soft" plus being noisy about which constraints got ignored. Then, you could have a "--pedantic-bounds" flag that forces following soft bounds.

I could get behind upper bounds if they allowed maintainers to actually communicate their intention, and if we had good automation for their maintenance. As is, putting upper bounds on everything seems to cause more problems than it solves.

-Michael
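A purely hypothetical sketch of how the hard/soft distinction could be written down in a .cabal file; the "<?" marker is invented for this illustration and is not accepted by any cabal version:

    build-depends:
        base       >= 4.8 && < 5      -- hard: known to be incompatible beyond this
      , containers >= 0.5 && <? 0.6   -- soft: merely untested beyond this;
                                      --       a solver could choose to relax it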

What do you expect will be the distribution of 'soft' and 'hard' upper bounds? In my experience, all upper bounds currently are 'soft' upper bounds. They might become 'hard' upper bounds for a short while after e.g. a GHC release, but in general, if a package maintainer knows that a package fails to work with a certain version of a dependency, they fix it.

So it seems to me that this is not so much a choice between 'soft' and 'hard' upper bounds, but a choice on what to do when you can't resolve dependencies in the presence of the current (upper) bounds. Currently, as you say, we give pretty bad error messages. The alternative you propose (just try) currently often gives the same result in my experience: bad error messages, in this case not from the solver, but unintelligible compiler errors in an unknown package. So it seems the solution might just be one of messaging: make the initial resolver error much friendlier, and give a suggestion to use e.g. --allow-newer=foo. The opposite might also be interesting to explore: if installing a dependency (so not something you're developing or explicitly asking for) fails to install and doesn't have an upper bound, suggest something like --constraint=foo.

Do you have different experiences regarding the number of 'hard' upper bounds that exist?

Regards,
Erik
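For concreteness, a minimal sketch (not from the original mail) of the kind of suggestion meant above; the package name and version are placeholders:

    # pin the troublesome dependency to a version known to build
    cabal install --constraint='foo < 1.2' your-package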

I think "hard" upper bounds would come about in situations where a new
version of a dependency is released that breaks things in a package, so
until the breakage is fixed a hard upper bound is required. Likewise for
hard lower bounds.
And arguments about "it shouldn't happen with the PVP" don't hold, because it does happen; the PVP is a human judgement thing.
Alan

Sure, I'm just wondering about how this plays out in reality: of
people getting unsolvable plans, how many are due to hard upper bounds
and how many due to soft upper bounds? We can't reliably tell, of
course, since we don't have this distinction currently, but I was
trying to get some anecdotal data to add to my own.
Erik
participants (12)

- Alan & Kim Zimmerman
- Andrew Farmer
- Ben Lippmeier
- Brandon Allbery
- Christiaan Baaij
- David Fox
- Dominick Samperi
- Erik Hesselink
- Herbert Valerio Riedel
- Howard B. Golden
- Michael Sloan
- Reid Barton