
Sure, I'm just wondering about how this plays out in reality: of
people getting unsolvable plans, how many are due to hard upper bounds
and how many due to soft upper bounds? We can't reliably tell, of
course, since we don't have this distinction currently, but I was
trying to get some anecdotal data to add to my own.
Erik
On 9 June 2016 at 10:07, Alan & Kim Zimmerman wrote:
I think "hard" upper bounds would come about in situations where a new version of a dependency is released that breaks things in a package, so until the breakage is fixed a hard upper bound is required. Likewise for hard lower bounds.
And arguments that "it shouldn't happen with the PVP" don't hold, because it does happen: the PVP is a matter of human judgement.
Alan
On Thu, Jun 9, 2016 at 10:01 AM, Erik Hesselink wrote:
What do you expect will be the distribution of 'soft' and 'hard' upper bounds? In my experience, all upper bounds currently are 'soft' upper bounds. They might become 'hard' upper bounds for a short while after e.g. a GHC release, but in general, if a package maintainer knows that a package fails to work with a certain version of a dependency, they fix it.
So it seems to me that this is not so much a choice between 'soft' and 'hard' upper bounds, but a choice about what to do when you can't resolve dependencies in the presence of the current (upper) bounds. Currently, as you say, we give pretty bad error messages. The alternative you propose (just try) often gives the same result in my experience: bad error messages, in this case not from the solver, but unintelligible compiler errors in an unknown package.
So perhaps the solution is just one of messaging: make the initial resolver error much friendlier, and suggest using e.g. --allow-newer=foo. The opposite might also be interesting to explore: if installing a dependency (so not something you're developing or explicitly asking for) fails and it doesn't have an upper bound, suggest something like --constraint=foo.
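For concreteness (foo and the version number are placeholders), the two suggested hints would point at existing cabal-install flags along these lines:

    cabal install --allow-newer=foo         # relax upper bounds on the dependency foo
    cabal install --constraint='foo < 1.2'  # pin foo below a suspect version

The first lifts upper bounds that are blocking a plan; the second manually adds the bound that a too-permissive package left out.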
Do you have different experiences regarding the number of 'hard' upper bounds that exist?
Regards,
Erik
On 8 June 2016 at 22:01, Michael Sloan wrote:
Right, part of the issue with having dependency solving at the core of your workflow is that you never really know who's to blame. When running into this circumstance, either:
1) Some maintainer made a mistake.
2) Some maintainer did not have perfect knowledge of the future and has not yet updated some upper bounds. Or, upper bounds didn't get retroactively bumped (usual).
3) You're asking cabal to do something that can't be done.
4) There's a bug in the solver.
So the only thing to do is to say "something went wrong". In a way it is similar to type inference: it is difficult to give specific, concrete error messages without making some arbitrary choices about which constraints got pushed around.
I think upper bounds could potentially be made viable by having both hard and soft constraints. Until then, people are packing two meanings into one mechanism. With that distinction, I think cabal-install could provide much better errors than it does currently. This has come up before; I'm not sure what came of those discussions. My thoughts on how this would work (a rough sketch follows the list):
* The dependency solver would prioritize hard constraints, and tell you which soft constraints need to be lifted. I believe the solver already supports this. Stack's integration with the solver actually first tries to get a plan that doesn't override any snapshot versions, by specifying them as hard constraints; if that doesn't work, it tries again with soft constraints.
* "--allow-soft" or something would ignore soft constraints. Ideally this would be selective on a per package / upper vs lower.
* It may be worth having the default behave like "--allow-soft", while being noisy about which constraints got ignored. Then you could have a "--pedantic-bounds" flag that forces soft bounds to be followed.
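To make the distinction concrete, here is one way it could look. Everything below is invented for illustration; the "<?" soft-bound operator and both flags exist nowhere today:

    build-depends:
      base >= 4.8  && < 4.9,    -- hard: known not to build against base-4.9
      lens >= 4.13 && <? 4.15   -- soft: untested above 4.14, but may well work

    cabal install --allow-soft        # ignore soft upper bounds, warn about each one lifted
    cabal install --pedantic-bounds   # treat soft bounds as hard

The solver could then report "plan found by lifting soft bound 'lens <? 4.15' of foo-1.2" instead of a generic failure.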
I could get behind upper bounds if they allowed maintainers to actually communicate their intention, and if we had good automation for their maintenance. As is, putting upper bounds on everything seems to cause more problems than it solves.
-Michael
On Wed, Jun 8, 2016 at 1:31 AM, Ben Lippmeier wrote:
On 8 Jun 2016, at 6:19 pm, Reid Barton wrote:
Suppose you maintain a library that is used by a lot of first year uni students (like gloss). Suppose the next GHC version comes around and your library hasn’t been updated yet because you’re waiting on some dependencies to get fixed before you can release your own. Do you want your students to get a “cannot install on this version” error, or some confusing build error which they don’t understand?
This is a popular but ultimately silly argument. First, cabal dependency solver error messages are terrible; there's no way a new user would figure out, from a bunch of solver output about things like "base-4.7.0.2" and "Dependency tree exhaustively searched", that the solution is to build with an older version of GHC.
:-) At least “Dependency tree exhaustively searched” sounds like it’s not the maintainer’s problem. I prefer the complaints to say “can you please bump the bounds on this package” rather than “your package is broken”.
Ben.
_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs