On Fri, Dec 7, 2012 at 10:20 AM, Edward Z. Yang <ezyang@mit.edu> wrote:
>> Now this problem may be completely resolved in newer versions of
>> cabal-install, but I think having a large number of packages in the global
>> package database pegged at specific versions is a very strong recipe for
>> reintroducing version hell. Coming back to yaml: it depends on conduit.
>> Suppose after the HP is released, I release a new version of conduit. And
>> suppose some other package (say, xml-conduit) depends on this newer
>> version. What happens when the user tries to install a package that depends
>> on the newer xml-conduit and yaml at the same time? Ideally I'd want them
>> to get yaml recompiled against a newer version of conduit, but conflicting
>> user and global databases can be a very sore point.
>
> But this is exactly the point of the "Haskell Platform": given some set of
> specific package versions, install them at the very beginning, and have all
> other packages be built with those packages.  If you want to update one of
> those packages, you are almost definitionally going to have to
> rebuild--and this won't go away until Haskell libraries with no API
> changes get binary compatibility.
>
> Edward

I agree that you almost certainly will have to rebuild; my point is that the current setup strongly discourages this rebuilding.
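
To spell out the kind of clash I described above, with version numbers invented purely for illustration, the dependency bounds end up looking something like this:

    -- yaml, built against the conduit pinned in the platform's global db
    -- (version numbers are made up)
    build-depends: conduit >= 0.5 && < 0.6

    -- xml-conduit, released against my hypothetical newer conduit
    build-depends: conduit >= 0.6 && < 0.7

The only way a solver can satisfy both at once is to rebuild yaml against the newer conduit, and that rebuilt copy lands in the user database while the platform's copy stays in the global one.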

Let me give a more extreme example: suppose that the bug in text hadn't simply been slower compile times, but instead a segfault every time you appended two empty strings. What would be our response to users? Should we tell them to stick with the current HP until the next one is released, and hope they don't trigger the bug? Or should they try to get their whole system rebuilt against a fixed version?
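
The best advice we could give today would look something like this (a sketch only: the exact flags vary between cabal-install versions, and the final package list is a placeholder):

    cabal update
    cabal install text    # the fixed release goes into the user db;
                          # the platform's copy stays in the global db
    cabal install --reinstall <every package built against the old text>

That is a lot to ask of a user, and the broken copy is still sitting in the global database when they're done.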

I think that as the HP gets larger, we need to come up with an answer to the question of how to upgrade the packages it includes.

Michael