On Tue, Feb 25, 2014 at 12:38 PM, Michael Snoyman <michael@snoyman.com> wrote:
On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <greg@gregorycollins.net> wrote:
Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, and you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the consensus needed to change it. As such, I really wish that you would reconsider your stance here.

I really don't like this appeal to authority. I don't know who the "royal we" is that you are referring to here, and I don't accept the premise that the rest of us must simply adhere to a policy because "it was decided." "My side", as you refer to it, is pointing out concrete negative consequences of the PVP. I'd expect "your side" to respond in kind, not simply assert that we're "breaking Hackage" and other such hyperbole.

This is not an appeal to authority; it's an appeal to consensus. The community comes together to work on lots of different projects like Hackage and the Platform, and we have established procedures and policies (like the PVP and the Haskell Platform process) to manage this. I think the following facts are uncontroversial:
  • a Hackage package versioning policy exists and has been published in a known location
  • we don't have another one
  • you're violating it
Now you're right to argue that the PVP as currently constituted causes problems, e.g. "I can't upgrade to new-shiny-2.0 quickly enough" and "I manage 200 packages and you're driving me insane". And new major base versions cause a month of churn before everything goes green again. Everyone understands this. But the solution is either to vote to change the policy or to write tooling to make your life less insane, not just to ignore it, because the situation that creates (programs bitrot and become unbuildable over time with 100% probability) is really disappointing.

Now, I think I understand what you're alluding to. Assuming I understand you correctly, I think you're advocating irresponsible development. I have codebases which I maintain and which use older versions of packages. I know others who do the same. The rule for this is simple: if your development process only works when third parties adhere to some rules you've established, you're in for a world of hurt. You're correct: if everyone rigidly followed the PVP, *and* no one ever made any mistakes, *and* the PVP solved all concerns, then you could get away with the development practices you're talking about.

There's a strawman in there -- in an ideal world PVP violations would be rare and would be considered bugs. Also, if it were up to me we'd be machine-checking PVP compliance. I don't know what you're talking about re: "irresponsible development". In the scenario I'm talking about, my program depends on "foo-1.2", "foo-1.2" depends on any version of "bar", and then when "bar-2.0" is released "foo-1.2" stops building and there's no way to fix this besides trial and error because the solver doesn't have enough information to do its work (and it's been lied to!!!). The only practical solutions right now are to:
  1. commit to maintaining every program you've ever written on the Hackage upgrade treadmill forever, or
  2. write down the exact versions of all of the libraries in the transitive closure of your dependency graph.
#2 is best practice for repeatable builds anyway, and you're right that cabal freeze will help here, but it doesn't help the programs written before "cabal freeze" comes out.
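
Concretely, #2 is just a hand-maintained constraints file. As a sketch (all version numbers invented for illustration), a cabal.config like:

    constraints: foo ==1.2,
                 bar ==1.0.3,
                 base ==4.6.0.1

is exactly the artifact "cabal freeze" should generate for you from a known-good install plan.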

But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a transitive dependency introduces new exports or provides new typeclass instances, a fully PVP-compliant stack can be broken. (If anyone doubts this claim, let me know and I can spell out the details. This has come up in practice.)
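
Here's a sketch of one such failure; both packages and all version numbers are hypothetical:

    -- Built with fully PVP-compliant bounds:
    --   build-depends: textutils >= 1.0 && < 1.1
    --                , listutils >= 1.0 && < 1.1
    module Main where

    import TextUtils  -- textutils-1.0 exports: trim
    import ListUtils  -- listutils-1.0 exports: chunksOf

    main :: IO ()
    main = putStrLn (trim "  hello  ")

Now listutils-1.0.1 adds its own trim. Under the PVP, adding an export only requires a minor version bump, so 1.0.1 still satisfies "listutils >= 1.0 && < 1.1" -- and trim becomes ambiguous, so this module stops compiling even though everyone followed the policy.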

Of course. But compute the probability of this occurring (rare) vs. the probability of breakage given no upper bounds (100% as t -> ∞). Think about what you're saying semantically when you say you depend only on "foo > 3": "this code builds against any foo newer than 3, including every future major version." You can't own up to that contract.
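
Spelled out as bounds (illustrative), the two contracts look like this:

    build-depends: foo > 3              -- "I build against every foo that
                                        --  will ever be released"
    build-depends: foo >= 3.1 && < 3.2  -- PVP: "I build against the foo-3.1
                                        --  API I actually tested"

The first is a promise about the infinite future; the second is a claim you can actually verify.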

* Just because your code *builds* doesn't mean your code *works*. Semantics can change: bugs can be introduced, bugs that you depended upon can be resolved, performance characteristics can change in breaking ways, etc.

I think you're making my point for me -- given that this paragraph you wrote is 100% correct, it makes sense for cabal not to try to build against the new version of a dependency until the package maintainer has checked that things still work and given the solver the go-ahead by bumping the package upper bound.
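
In other words, the upper bound is the go-ahead signal, and relaxing it is a deliberate, tested act. A sketch with hypothetical versions:

    -- foo-1.2 ships with a bound that keeps the solver away from bar-2.0:
    build-depends: bar >= 1.0 && < 2.0
    -- once foo has been tested against bar-2.0, release foo-1.2.0.1 with:
    build-depends: bar >= 1.0 && < 2.1

Nothing downstream picks up bar-2.0 until someone (or some build bot) has said it's safe.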

This is where we apparently fundamentally disagree. cabal freeze, IMO, is not at all a kludge. It's the only sane approach to reliable builds. If I ran my test suite against foo version 1.0.1, performed manual testing against 1.0.1, and did my load testing against 1.0.1, I don't want some hotfix build to be automatically upgraded to version 1.0.2 on the assumption that foo's author didn't break anything.

This wouldn't be an assumption, Michael -- the tool should run the build and the test suites. We'd bump the version on green tests.

G
--
Gregory Collins <greg@gregorycollins.net>