A new cabal odyssey: cabal-1.8 breaking its own neck by updating its dependencies

Hi, after Andrew Coppin's odyssey [1], I also had a few problems with cabal, which stopped installing new packages. Details will follow; meanwhile, here are some questions prompted by what happened:

- When recompiling a package with ABI changes, does cabal always update dependent packages? It seems "not always": it didn't update itself, nor refuse the breaking upgrade, and the ABI breakage caused linker errors on Setup.hs. Luckily, cabal was already linked, so I could still use it for queries.
- Can cabal extract and check dependencies as specified in the .hi files? I had a broken dependency which I could see with "grep" but not otherwise.
- Is there a "specification" of which are the "core" packages?

I'm not sure whether this is more useful as a bug report or as discussion, or whether I simply misused cabal and it's the only perfect piece of software in the world; still, I wanted to share my experience. I'm also not sure this is the best place, but I'm not subscribed to other lists, and the previous "cabal odyssey" was posted here, so I hope it's fine.

A related idea: Gentoo had a tool which not only extracted such dependencies, but also recompiled all affected packages. While package removal is not supported through cabal, it is sometimes needed (and it should be supported at some point). See for instance this FAQ: http://www.haskell.org/cabal/FAQ.html#dependencies-conflict

My problem seems a nastier variation of the one described there :-(. Details follow.

Best regards

[1] http://groups.google.com/group/haskell-cafe/browse_thread/thread/787c67b31fa...

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

- when recompiling a package with ABI changes, does cabal always update dependent packages?
If Foo depends on Bar and there is a new version of Foo that specifies a newer version of Bar, then yes, the newer library being depended on will be built too (out of necessity). OTOH, if you are building a new version of a package on which others depend, it won't build all the others. Ex: building a new "containers" package doesn't cause any of the ~1400 packages depending on it to be rebuilt.
It seems "not always" - it didn't update itself, nor refuse the breaking upgrade,
I don't really know what "it" is. Something to do with recompiling Cabal and cabal-install I take it, but I'll refrain from comment seeing as I don't understand what you're doing.
- is there a "specification" of which are the "core" packages?
Are there packages on which the community standardizes? That's the goal of the Haskell Platform [1], but I don't place any special value in a package being in HP yet - I just work with whatever package on Hackage fills my need, and I'm under the impression this is most people's mode of operation.
While package removal is not supported through cabal, it is sometimes needed
Why? What I see is a need for users to understand ghc-pkg (or whatever package management tool exists for their Haskell compiler). Should "cabal uninstall" provide a unified interface to some number of Haskell compiler packaging systems? It could, but that doesn't seem like a priority.

Cheers,
Thomas

[1] http://hackage.haskell.org/platform/
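For reference, the kind of ghc-pkg usage Thomas is referring to might look like this (the package name foo is a placeholder):

  $ ghc-pkg list foo                    # which versions of foo are registered, and in which db
  $ ghc-pkg check                       # report packages with broken dependencies
  $ ghc-pkg describe foo-1.0            # show the recorded metadata, including the 'depends:' field
  $ ghc-pkg unregister --user foo-1.0   # drop the registration from the user db only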

On Sat, Sep 11, 2010 at 11:56 AM, Thomas DuBuisson
- is there a "specification" of which are the "core" packages?
Are there packages on which the community standardizes? That's the goal of Haskell-Platform [1], but I don't place any special value in a package being in HP yet - I just work with whatever package on Hackage fills my need and am under the impression this is most peoples mode of operation.
From the FAQ linked by Paolo:
http://www.haskell.org/cabal/FAQ.html#dependencies-conflict

"To avoid this problem in the future, avoid upgrading core packages. The latest version of cabal-install has disabled the upgrade command to make it a bit harder for people to break their systems in this way."

I think that's what Paolo meant by "core" package. Sadly, the FAQ doesn't say what "core" means, nor is that page user-editable. I think "core" here must refer to packages that ghc is linked to - for example, the process package in the example on the FAQ.

I actually had this problem last weekend, and I make a habit of never running 'cabal upgrade' and never installing things globally. Yet somehow, on my system, a package that ghc was built with did get upgraded and installed in my user package db. It was causing various things to fail to configure. If I recall correctly it was the directory package, but I'll use FOO as a placeholder. I used the suggested command line:

  ghc-pkg unregister --user FOO-X

ghc-pkg said it was ignoring the command because it would break packages. So then I tried adding --force. At that point, ghc-pkg still said it was ignoring me and that I should use --force. This was on ghc-6.12.1. I tried it one more time with the --force option, then ran ghc-pkg list FOO, and all instances of the FOO package were gone. At that point I could no longer configure any packages needing FOO. In the end I had to reinstall ghc, so I took it as a chance to upgrade to 6.12.3.

Jason

On Sat, Sep 11, 2010 at 21:17, Jason Dagit
On Sat, Sep 11, 2010 at 11:56 AM, Thomas DuBuisson
wrote: - is there a "specification" of which are the "core" packages?
From the FAQ linked by Paolo:
http://www.haskell.org/cabal/FAQ.html#dependencies-conflict
"To avoid this problem in the future, avoid upgrading core packages. The latest version of cabal-install has disabled the upgrade command to make it a bit harder for people to break their systems in this way."
I think that's what Paolo meant by "core" package. Sadly the FAQ doesn't say what core means. Nor is that page user editable. I think "core" here must refer to packages that ghc is linked to. For example, the process package in the example on the FAQ.
You are mostly correct, thanks for understanding. As I said, in my case "core package" extends to the packages cabal is linked to. Still, cabal should know about and protect them. "Core package" is also used in the output of "cabal upgrade" (which in my release, 0.8.2 with Cabal 1.8.0.6, is disabled):

  Below is the list of packages that it would have tried to upgrade. You can use the 'install' command to install the ones you want. Note that it is generally not recommended to upgrade core packages.

It is noteworthy that "cabal install --help" has no such warning.

Anyway, the package managers of Gentoo and the *BSDs can upgrade everything while the system is running. It would be nice to do the same here, including "cabal install ghc", but "if you want, you can fix it" would be an appropriate response - I know I likely won't get to do that, at least not while starting my PhD.
I actually had this problem last weekend and I make a habit of never running 'cabal upgrade' and never installing things globally. Yet some how on my system a package that ghc was built with did get upgraded and installed in my user package db. It was causing various things to fail to configure. If I recall correctly it was the directory package, but I'll use FOO as a place holder. I used the suggested command line: ghc-pkg unregister --user FOO-X
ghc-pkg said it was ignoring the command because it would break packages. So then I tried adding --force. At that point, ghc-pkg still said it was ignoring me and that I should use --force. This was on ghc-6.12.1. I tried it one more time with the --force option then ran ghc-pkg list FOO, and all instances of the FOO package were gone. At that point I could no longer configure any packages needing FOO. In the end I had to reinstall ghc so I took it as a chance to upgrade to 6.12.3.
Hmm... how come your global DB was user-writable? Luckily, it's owned by root here. Anyway, I consider most of the debugging I did quite challenging for non-Unix folks, and reinstalling from scratch would surely have been faster.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

A further pitfall I just discovered:

$ cabal install --dry cabal-install leksah-0.8.0.6
Resolving dependencies...
cabal: cannot configure cabal-install-0.8.2. It requires Cabal ==1.8.*
For the dependency on Cabal ==1.8.* there are these packages: Cabal-1.8.0.2,
Cabal-1.8.0.4 and Cabal-1.8.0.6. However none of them are available.
Cabal-1.8.0.2 was excluded because Cabal-1.6.0.3 was selected instead
Cabal-1.8.0.2 was excluded because ghc-6.10.4 requires Cabal ==1.6.0.3
Cabal-1.8.0.4 was excluded because Cabal-1.6.0.3 was selected instead
Cabal-1.8.0.4 was excluded because ghc-6.10.4 requires Cabal ==1.6.0.3
Cabal-1.8.0.6 was excluded because Cabal-1.6.0.3 was selected instead
Cabal-1.8.0.6 was excluded because ghc-6.10.4 requires Cabal ==1.6.0.3

That's on ghc-6.10.4 with Cabal-1.8.0.6. However, trying to install cabal-install and leksah separately works quite well. Indeed, they are already installed, but since they are not tracked by ghc-pkg, cabal forgot that it installed them, and keeps forgetting it (as I just discovered).

I believe the pitfall is that since cabal is trying to install both packages at once, it tries to figure out dependencies for them together. The behavior might even be perfectly valid if there were interdependencies between the two packages, but that's not the case here; maybe cabal should try to detect that the packages are not related and can be installed separately. Or maybe it should just allow using two different versions of the same package, for packages which are not linked together.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

Hi Paolo, Paolo Giarrusso wrote:
$ cabal install --dry cabal-install leksah-0.8.0.6 [... does not work ...]
However, trying to install cabal-install and leksah separately works quite well.
So do install them separately. cabal install p1 p2 is supposed to find a single consistent install plan for p1 and p2 and the transitive dependencies of either of them. This is useful if you plan to use p1 and p2 in a single project. Tillmann

On Sun, Sep 12, 2010 at 15:30, Tillmann Rendel
Paolo Giarrusso wrote:
$ cabal install --dry cabal-install leksah-0.8.0.6 [... does not work ...] However, trying to install cabal-install and leksah separately works quite well.
So do install them separately.
Yeah, I did; I was pointing out the behavior because it _looked_ like a bug. And while it's a feature, it is there to cater for another "bug" (see below). Indeed, nothing in this thread is a request for assistance.
cabal install p1 p2 is supposed to find a single consistent install plan for p1 and p2 and the transitive dependencies of either of them. This is useful if you plan to use p1 and p2 in a single project.
Ahah! Then it's a feature. The need for consistency stems from a bug: in a tracker entry you linked to, http://hackage.haskell.org/trac/hackage/ticket/704, Duncan argues that "we also want to be able to do things like linking multiple versions of a Haskell package into a single application". If that were possible, cabal would solve my request by using Cabal 1.6 and 1.8 together - and you can make that work if type-checking uses _versioned_ types (that's not discussed in bug #704, though).

I believe, though, that cabal should still try to avoid that unless it is needed or explicitly requested. Among other reasons, even after typechecking, Cabal 1.6 and 1.8 might interact differently with the RealWorld, say through incompatible file formats. In that case, I would refrain from installing both, or Cabal 1.8 would need some imaginary "Conflicts: Cabal-1.6" property (which exists for Debian packages).

But I see that here, the only correct install plan implies a GHC upgrade via Cabal and Hackage, which should not happen without a warning, and should never be attempted until all the fundamental problems we are discussing are solved.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

Hi Paolo, Paolo Giarrusso wrote:
cabal install p1 p2 is supposed to find a single consistent install plan for p1 and p2 and the transitive dependencies of either of them. This is useful if you plan to use p1 and p2 in a single project.
Ahah! Then it's a feature. The need for consistency stems from a bug: in a tracker entry you linked to, http://hackage.haskell.org/trac/hackage/ticket/704, duncan argues that "we also want to be able to do things like linking multiple versions of a Haskell package into a single application".
I think this is a slightly different matter. Consider a package pair, which defines an abstract datatype of pairs in version 1:

  module Pair (Pair, fst, snd, pair) where
  data Pair a b = Pair a b
  fst (Pair a b) = a
  snd (Pair a b) = b
  pair a b = Pair a b

In version 2 of pair, the internal representation of the datatype is changed:

  module Pair (Pair, fst, snd, pair) where
  data Pair a b = Pair b a
  fst (Pair b a) = a
  snd (Pair b a) = b
  pair a b = Pair b a

Now we have a package foo which depends on pair-1:

  module Foo (foo) where
  import Pair
  foo = pair 42 '?'

And a package bar which depends on pair-2:

  module Bar (bar) where
  import Pair
  bar = fst

Now, we write a program which uses both foo and bar:

  module Program where
  import Foo
  import Bar
  main = print $ bar $ foo

Even with the technical ability to link all of foo, bar, pair-1 and pair-2 together, I don't see how this program could be reasonably compiled. Therefore, I think that the notion of consistent install plans is relevant semantically, not just to work around some deficiency in the linking system.

Tillmann

On Sun, Sep 12, 2010 at 20:46, Tillmann Rendel
Paolo Giarrusso wrote:
in a tracker entry you linked to, http://hackage.haskell.org/trac/hackage/ticket/704, duncan argues that "we also want to be able to do things like linking multiple versions of a Haskell package into a single application". [snip] Even with the technical ability to link all of foo, bar, pair-1 and pair-2 together, I don't see how this program could be reasonably compiled. Therefore, I think that the notion of consistent install plans is relevant semantically, not just to work around some deficiency in the linking system.
Your case is valid, but OTOH there are other cases to support: if I link together two programs which use _internally_ different versions of regex packages, cabal should support that - and here I guess we agree. The issue is how to express or recognise the distinction. I had this kind of scenario in mind, and that's why I proposed using versioned type names for typechecking - your example program would then be caught as ill-typed. However, that's not enough, because the correct solution is to use the same pair version.

- OTOH, Program would probably have its own cabal file, which could maybe list a dependency on pair. But I don't like this solution - the developer shouldn't have to do this.
- The nicer alternative would be to extract, from the types used in the .hi files, whether they mention pair at all - as they do here, and unlike the case where the different packages are used only internally. This solution is perfect but takes extra work which I can't estimate.

Actually, some more work would maybe be needed to cope with cross-module inlining, but I believe that this can be done by cabal, just by "looking at" .hi files, without further changes to GHC - after versioned typechecking is introduced, if missing, anyway. And maybe some interface to .hi files should be exposed.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

On 13 September 2010 20:54, Paolo Giarrusso
On Sun, Sep 12, 2010 at 20:46, Tillmann Rendel
wrote: Paolo Giarrusso wrote:
in a tracker entry you linked to, http://hackage.haskell.org/trac/hackage/ticket/704, duncan argues that "we also want to be able to do things like linking multiple versions of a Haskell package into a single application". [snip] Even with the technical ability to link all of foo, bar, pair-1 and pair-2 together, I don't see how this program could be reasonably compiled. Therefore, I think that the notion of consistent install plans is relevant semantically, not just to work around some deficiency in the linking system.
Your case is valid, but OTOH there other cases to support: if I link together two programs which use _internally_ different versions of regex packages, cabal should support that - and here I guess we agree.
Paolo,

If I've understood correctly, in this series of emails you're pointing out two problems:

1. upgrading packages can break dependencies (and Cabal does not do a lot to prevent/avoid this)
2. cabal ought to allow using multiple versions of a single package in more circumstances than it does now

Both of these issues are known to the Cabal hackers (i.e. me and a few other people). I'll share my views on the problem and the solution.

1. This is certainly a problem. The current situation is not awful, but it is a bit annoying sometimes. We do now accurately track when packages get broken by upgrading dependencies, so it should not be possible to get segfaults by linking incompatible ABIs.

My preferred solution is to follow the example of Nix and use a persistent package store. Then installing new packages (which includes what people think of as upgrading) becomes a non-destructive operation: no existing packages would be broken by an upgrade. It would be necessary to allow installing multiple instances of the same version of a package. If we do not allow multiple instances of a package, then breaking things during an upgrade will always remain a possibility. We could work harder to avoid breaking things, or to try rebuilding things that would become broken, but it could never be a 100% solution.

2. This is a problem of information and optimistic or pessimistic assumptions. Technically there is no problem with typechecking or linking in the presence of multiple versions of a package. If we have a type Foo from package foo-1.0, then that is a different type to Foo from package foo-1.1. GHC knows this.

So if, for example, a package uses regex or QC privately, then other parts of the same program (e.g. different libs) can also use different versions of the same packages. There are other examples, of course, where types from some common package get used in interfaces (e.g. ByteString or Text). In these cases it is essential that the same version of the package be used on both sides of the interface, otherwise we will get a type error because text-0.7:Data.Text.Text does not unify with text-0.8:Data.Text.Text.

The problem for the package manager (i.e. cabal) is knowing which of the two above scenarios applies to each dependency, and thus whether multiple versions of that dependency should be allowed or not. Currently cabal does not have any information whatsoever to make that distinction, so we have to make the conservative assumption. If, for example, we knew that particular dependencies were "private" dependencies, then we would have enough information to do a better job in very many of the common examples.

My preference here is for adding a new field, build-depends-private (or some such similar name), and to encourage packages to distinguish between their public/visible dependencies and their private/invisible deps.

Duncan
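As a concrete illustration of the split Duncan proposes (build-depends-private is his suggested field name and does not exist in Cabal; the package names and bounds below are made up), a library that exposes Text values in its API but only uses QuickCheck internally might say:

  library
    exposed-modules: Data.MyText.Extra
    -- public deps: Text appears in this library's exported types, so
    -- every user must agree with us on the text version
    build-depends:         base >= 4, text >= 0.7
    -- private deps (hypothetical field): QuickCheck never shows up in
    -- the exported interface, so another version elsewhere in the same
    -- program would be fine
    build-depends-private: QuickCheck >= 2.1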

Hi Duncan,
first, thanks for coming yourself to answer.
On Wed, Sep 15, 2010 at 18:33, Duncan Coutts
On 13 September 2010 20:54, Paolo Giarrusso
wrote: On Sun, Sep 12, 2010 at 20:46, Tillmann Rendel
wrote:
1. upgrading packages can break dependencies (and Cabal does not do a lot to prevent/avoid this)
2. cabal ought to allow using multiple versions of a single package in more circumstances than it does now
I answer below with some issues - in particular, I discuss why IMHO your proposal for 2. does not work well with cross-module inlining.
Both of these issues are known to the Cabal hackers (i.e. me and a few other people). I'll share my views on the problem and the solution.
Ah-ah! Can I request adding _at least_ the first one to the FAQ? Something like: "A version of package A was rebuilt [for an upgrade of its dependency], and stuff depending on A started causing linking errors!" I am even ready to send patches.
1. This is certainly a problem. The current situation is not awful but it is a bit annoying sometimes. We do now accurately track when packages get broken by upgrading dependencies so it should not be possible to get segfaults by linking incompatible ABIs.
I had a slightly different counterexample, but maybe it's purely a GHC bug; I use GHC 6.10.4 and the latest Cabal/cabal-install. Are dependencies computed by Cabal or by ghc-pkg? If they are computed by Cabal, I think I have a bug report.

At some point I unregistered a package with ghc-pkg (old-locale-1.0.0.2, probably), without using --force, and I started getting linker errors mentioning it, of the form:

  <command line>: unknown package: old-locale-1.0.0.2

even though old-locale-1.0.0.2 appeared on no command line (not even internal ones - I checked everything with -v), but was merely mentioned by a package that did appear on the command line of an internal command. There is a small possibility that this was due to the older Cabal which was installed with GHC - but IIRC the new cabal was one of the first packages (or the first) I installed.
My preferred solution is to follow the example of Nix and use a persistent package store. Then installing new packages (which includes what people think of as upgrading) become non-destructive operations: no existing packages would be broken by an upgrade.
It would be necessary to allow installing multiple instances of the same version of a package.

That would solve Cabal bug 738, which I reported.
If we do not allow multiple instances of a package then breaking things during an upgrade will always remain a possibility. We could work harder to avoid breaking things, or to try rebuilding things that would become broken but it could never be a 100% solution.
It is a good idea, but how do you handle removal requests? Also, there are existing complete solutions, but they are much harder to get right. Still, allowing multiple versions of the same package is a good idea, and in particular it would make upgrading Cabal much less tricky. The problem with package removal is still present, but that is less important than safety (especially given that "cabal uninstall" is still a TODO); and in a safe persistent system, one can use ghc-pkg unregister and handle the dependencies manually. And I'd like to point out that a non-persistent package store can be made to work 100% - with your proposal it would do so by design.
2. This is a problem of information and optimisitic or pesimistic assumptions. Technically there is no problem with typechecking or linking in the presense of multiple versions of a package. If we have a type Foo from package foo-1.0 then that is a different type to Foo from package foo-1.1. GHC knows this.
So if for example a package uses regex or QC privately then other parts of the same program (e.g. different libs) can also use different versions of the same packages. There are other examples of course where types from some common package get used in interfaces (e.g. ByteString or Text). In these cases it is essential that the same version of the package be used on both sides of the interface otherwise we will get a type error because text-0.7:Data.Text.Text does not unify with text-0.8:Data.Text.Text.
The problem for the package manager (i.e. cabal) is knowing which of the two above scenarios apply for each dependency and thus whether multiple versions of that dependency should be allowed or not. Currently cabal does not have any information whatsoever to make that distinction so we have to make the conservative assumption. If for example we knew that particular dependencies were "private" dependencies then we would have enough information to do a better job in very many of the common examples.
My preference here is for adding a new field, build-depends-private (or some such similar name) and to encourage packages to distinguish between their public/visible dependencies and their private/invisible deps.
On a policy level, it's difficult for a developer to keep track of which dependencies are public and which are private: you need to manually inspect your public API. On a mechanism level, I think that adding a field actually doesn't work, because GHC's cross-module inlining can change the picture unpredictably: cabal would need to check that packages in build-depends-private are not mentioned in the .hi interface files - but GHC can store implementation details there. The result: if cabal does no checking, a packager can easily shoot his users' feet (rather than his own); if cabal does such checking, getting it right requires trial and error for the developer, and it will cause errors when the GHC version or optimization options change. We don't want either scenario.

E.g., I just made up a syntax for a regexp library and built a function which should check that no (useless) trailing spaces are present in some text:

  checkNoTrailingSpace :: String -> Bool
  checkNoTrailingSpace = not . regexpMatch "\s+$"

Allowing inlining of such a function would turn a possibly private dependency on some regexp package into a public one.

However, automatic checking as I proposed (without extra help from GHC) does not work either, and I show a counterexample, which is also about "bad library design". Suppose that V1 of package Foo has functions:

  buildFoo bar baz = (bar, baz)
  takeBar (bar, baz) = bar

and that V2 of Foo swaps the order of bar and baz in the underlying pair. The pair representation should be either encapsulated by a data constructor (but it is not), or part of the API and ABI and thus not changeable. Today doing this causes no harm, but if linking multiple versions of Foo were allowed, this would create a nightmare. If a data constructor were used, versioned typechecking would catch the problem. Since these functions could be fully inlined in module Foo2, it becomes impossible to infer from the .hi files of Foo2 which dependencies are public, unless GHC stores which modules the bodies of functions exposed in .hi files come from.

So, I propose to:
- depending on the solution, possibly educate library developers about the resulting pitfalls, if they are not supposed to write code like the above;
- extend GHC to produce the needed information (if it doesn't already);
- use that information to automatically check which dependencies are public and which are private (at package installation time).

Best regards

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/
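To make the contrast concrete, here is a minimal sketch (module and function names are purely illustrative, not from any real package) of the two designs discussed above - the one that leaks the pair representation versus the one that hides it behind a constructor, so that mixing two package versions becomes a (versioned) type error instead of silent miscompilation:

  -- Foo.hs, the "leaky" design: the tuple layout itself is the interface,
  -- and inlined clients bake in the field order.
  module Foo (buildFoo, takeBar) where

  buildFoo :: bar -> baz -> (bar, baz)
  buildFoo bar baz = (bar, baz)

  takeBar :: (bar, baz) -> bar
  takeBar (bar, _) = bar

  -- FooAbstract.hs (a separate module), the encapsulated design: swapping
  -- the internal order in a later version only changes the hidden
  -- representation, and clients built against different versions disagree
  -- on the abstract Foo type itself, which versioned typechecking can detect.
  module FooAbstract (Foo, buildFoo, takeBar) where

  newtype Foo bar baz = MkFoo (bar, baz)

  buildFoo :: bar -> baz -> Foo bar baz
  buildFoo bar baz = MkFoo (bar, baz)

  takeBar :: Foo bar baz -> bar
  takeBar (MkFoo (bar, _)) = bar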

On the topic of cabal odysseys: I think it would help to document (prominently) what Cabal fundamentally doesn't (try to) do, to avoid optimistic expectations (and hence the surprises when Cabal doesn't meet those expectations), and to point out the design choices behind many bug tickets (even if the choices are temporary and driven by limited manpower). Such as:

- cabal doesn't keep track of what it installs, hence
  - no uninstall
  - no application tracking
  - library tracking and information about installed library configurations only via ghc-pkg
  ..
- cabal's view of package interfaces is limited to explicitly provided package names and version numbers, hence
  - no guarantee that interface changes are reflected in version number differences
  - no detailed view of interfaces (types, functions, ..)
  - no reflection of build/configure options in versions
  - no reflection of dependency versions/configurations in dependent versions
  - no knowledge of whether dependencies get exposed in dependent package interfaces
  ..

This is just to demonstrate the kind of information I'd like to see - Duncan & co know the real list, though they don't seem to have it spelled out anywhere? So users see that it works to say "cabal install" and build up their own, often optimistic, picture of what (current) Cabal is supposed to be able to do.

It might even be useful to have a category of "core tickets" or such on the trac, to identify these as work-package opportunities for hackathons, GSoC and the like, positively affecting many tickets at once.

On to the specific issue at hand:
2. cabal ought to allow using multiple versions of a single package in more circumstances than it does now .. 2. This is a problem of information and optimisitic or pesimistic assumptions. Technically there is no problem with typechecking or linking in the presense of multiple versions of a package. If we have a type Foo from package foo-1.0 then that is a different type to Foo from package foo-1.1. GHC knows this.
So if for example a package uses regex or QC privately then other parts of the same program (e.g. different libs) can also use different versions of the same packages. There are other examples of course where types from some common package get used in interfaces (e.g. ByteString or Text). In these cases it is essential that the same version of the package be used on both sides of the interface otherwise we will get a type error because text-0.7:Data.Text.Text does not unify with text-0.8:Data.Text.Text.
The problem for the package manager (i.e. cabal) is knowing which of the two above scenarios apply for each dependency and thus whether multiple versions of that dependency should be allowed or not. Currently cabal does not have any information whatsoever to make that distinction so we have to make the conservative assumption. If for example we knew that particular dependencies were "private" dependencies then we would have enough information to do a better job in very many of the common examples.
My preference here is for adding a new field, build-depends-private (or some such similar name) and to encourage packages to distinguish between their public/visible dependencies and their private/invisible deps.
This private/public distinction should be inferred, or at least checked, not just stated.

I don't know how GHC computes its ABI hashes - are they monolithic, or modular (so that the influence of dependencies, current package, current compiler, and current option settings could be extracted)? Even for monolithic ABI hashes, it might be possible to compute the package ABI hash a second time, setting the versions of all dependencies declared to be private to 0.0.0 and seeing if that makes a difference (if it does, the supposedly private dependency leaks into the package ABI, right?).

And secondly, how about making it possible for cabal files to express sharing constraints for versions of dependencies?

To begin with, there is currently no way to distinguish packages with different flag settings or dependency versions. If I write a package extended-base, adding the all-important functions car and cdr to an otherwise unchanged Data.List, that package will work with just about any version of base (until car and cdr appear in base:-), but the resulting packages may not be compatible, in spite of identical version numbers.

If it was possible to refer to those package parameters (build options and dependency versions), one could then add constraints specifying which package parameters ought to match for a build configuration to be acceptable. Let us annotate package identifiers with their dependencies, where the current lack of dependency and sharing annotations means "I don't care how this was built". Then

  build-depends: a, regex

means "I need a and regex, but I don't care whether a also uses some version of regex", while

  build-depends: a, regex
  sharing: a(regex) == regex

would mean "I need any version of a and regex, as long as a depends on the same version of regex I depend on" (choose any syntax that works). And

  build-depends: a, b
  sharing: a(text) == b(text)

would mean "I need any version of a and b, as long as they both depend on the same version of text".

I can already think of situations where one might want more complex constraints (e.g. "I want any version of base and mtl, but either both have to be before the Either-move, or both have to be after the move" - this part could be expressed already, but there is currently no way to enforce this transitively, for dependencies, so if I depend on c, which depends on a new base, and on d, which depends on an old mtl, I'm out of luck, I think). Or: "never mix mtl and its alternatives".

Of course, this scheme depends on whether it is possible to reverse engineer the information about dependencies of installed packages from ghc-pkg info or ABI hashes.. but if the general idea is sound, perhaps that could be made possible?

Just a suggestion,
Claus

On Sat, Sep 11, 2010 at 12:17:27PM -0700, Jason Dagit wrote:
From the FAQ linked by Paolo:
http://www.haskell.org/cabal/FAQ.html#dependencies-conflict
"To avoid this problem in the future, avoid upgrading core packages. The latest version of cabal-install has disabled the upgrade command to make it a bit harder for people to break their systems in this way."
It's not always possible. In particular, random-1.0.0.2 (shipped with GHC 6.12.*) depends on the time package, of which more recent versions have been released. That can trigger rebuilding of random-1.0.0.2, and thus haskell98-1.0.1.1. It might help if the release of random with GHC 7.0 had a tight dependency on the version of the time package shipped with it. Maybe all the core packages need tight dependencies.
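For illustration, a "tight" dependency of the kind Ross suggests would just pin the exact version in random's .cabal file (the version numbers below are made up for the example):

  -- loose: any new enough time is accepted, so cabal may pick a newer
  -- release and rebuild random (and thus haskell98) against it
  build-depends: time >= 1.1

  -- tight: only the time version shipped with that GHC release is accepted
  build-depends: time == 1.1.4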

On Sep 13, 1:52 pm, Ross Paterson
On Sat, Sep 11, 2010 at 12:17:27PM -0700, Jason Dagit wrote:
"To avoid this problem in the future, avoid upgrading core packages. The latest version of cabal-install has disabled the upgrade command to make it a bit harder for people to break their systems in this way."
It's not always possible. In particular, random-1.0.0.2 (shipped with GHC 6.12.*) depends on the time package, of which more recent versions have been released. That can trigger rebuilding of random-1.0.0.2, and thus haskell98-1.0.1.1.
My case was similar: old-locale had been upgraded. And currently, haskell98 would not necessarily be rebuilt automatically - it would just break. The point is that time should _not_ be upgradeable with the current system, for the same reason that "cabal upgrade" is disabled. The alternative is to allow Cabal to rebuild itself _and_ GHC (and everything else) safely.

BTW, I just realized that thanks to static linking, the ghc and cabal binaries should never stop working - only using those libraries for building packages could break. If you always upgrade locally, you can always remove packages from the user DB, as I did.
It might help if the release of random with GHC 7.0 had a tight dependency on the version of the time package shipped with it. Maybe all the core packages need tight dependencies.
If you mean it as a hack for cabal's brokenness, it could be OK, but other than that, I believe it's a bad idea. GHC itself does not depend on any specific package; it's the compiled GHC package which has a tight dependency. So I don't like the concept. Furthermore, if a GHC release (whichever it is) had a tight dependency, that would be annoying when you want to compile that release against different libraries - if that is supposed to work.

But actually, on my system I can't see ghc in cabal's DB; I can see it in the ghc-pkg database for binaries, but there all dependencies are tight, probably because they are dependencies among installed packages. Have a look:

$ cabal info QuickCheck
[...]
Dependencies: mtl -any, base >=4 && <5, random -any, base >=3 && <4,
              random -any, base <3, ghc -any, extensible-exceptions -any

$ ghc-pkg describe QuickCheck | grep depends
depends: base-3.0.3.1 random-1.0.0.1

Best regards

-- Paolo Giarrusso, PhD student
http://www.informatik.uni-marburg.de/~pgiarrusso/

Hi all,
sadly, I can't post all details.
I just got a kernel panic and only parts of the detailed log were
saved - my bad (but this is something that kind-of never happens,
except, by Murphy's law, now). I could try reproducing it another
time, but it took some time, so for now I'll rely on my memory.
On Sat, Sep 11, 2010 at 20:56, Thomas DuBuisson
- when recompiling a package with ABI changes, does cabal always update dependent packages?
If Foo depends on Bar and there is a new version of Foo that specifies a newer version of Bar then yes, the newer library being depended on will be build too (out of necessity).
OTOH, if you are building a new version of a package on which others depend it won't build all the others. Ex: build a new "containers" package doesn't cause any of the ~1400 packages depending on it to be rebuilt.
However, the old containers package stays there. In my case, the same version of a package (old-time) was recompiled against a different version of one of its dependencies (old-locale), and that broke the ABI. That's tricky. And cabal recompiled most of the packages which needed it, except Cabal itself.
It seems "not always" - it didn't update itself, nor refuse the breaking upgrade,
I don't really know what "it" is. Something to do with recompiling Cabal and cabal-install I take it, but I'll refrain from comment seeing as I don't understand what you're doing.
My bad - I wrote "it didn't update itself", but it would have been more precise to write "Cabal did not recompile itself". And probably cabal-install should also have been recompiled.

In my case, cabal proposed the following change while I was trying to install pandoc-1.5:

$ cabal install -v pandoc-1.5
[snip, see attached inst-pandoc-1.5-without-pref-old-loc-_1.txt for full output.]
old-time-1.0.0.2 (reinstall) changes: old-locale-1.0.0.1 -> 1.0.0.2
[snip]

When I accepted that, the result was that a symbol from old-time, referenced from the Cabal package, disappeared, preventing further package builds. That was possible because the symbol was oldzmtimezm1zi0zi0zi2_SystemziTime_a149_{closure,info}, i.e. an anonymous function. I wouldn't be surprised if the "a149" part were randomly generated, causing ABI breakage at every rebuild; but recompiling against another version of a dependency is surely enough (I guess that if inlining is disabled these things can't happen). The net result, when Cabal was compiling the main Setup.hs of another package (texmath, IIRC), was a linker error on that symbol.

Luckily, the old version of old-time was still in the system DB, so I could just uninstall the new version from the user DB (after I decoded the linker error, which wasn't trivial even knowing more about linking than one would want to know). An interesting fact is that cabal had recompiled many other packages depending on old-time (HTTP, directory, process, zip-archive), just not Cabal. Maybe recompiling Cabal could have fixed it, but I avoided trying, and reverted everything. The interesting part is that one would need to recompile Cabal after building the new version of its dependency ("old-time" here) but before installing it in place of the old binary, since after reinstalling old-time-1.0.0.2 and before installing the new Cabal, no package can be installed, including Cabal itself. That's rather tricky to do, and it can't be done by hand. And cabal is probably unique in having this need - I've used CPAN, Gentoo's portage, and so on, and for various reasons (stable ABIs, no compilation going on, etc.) this scenario probably can't happen there.

After adding "preference: old-locale == 1.0.0.1", the output of

$ cabal install -v pandoc-1.5

became the one in inst-pandoc-1.5-with-pref-old-loc-_1.txt, where no upgrade of old-time/old-locale is needed. Interestingly, HTTP, directory, process and zip-archive were not reinstalled, which confirms that cabal had reinstalled them before only because of an upgrade to their dependencies.

I managed to successfully remove old-locale-1.0.0.2 from my system and recompile all broken dependencies, even if it was rather tricky - even when ghc-pkg reported no errors, several dependencies were still broken, and the error message was crazy (sadly, I lost the log).
While package removal is not supported through cabal, it is sometimes needed
Why? What I see is a need for users to understand ghc-pkg (or whatever package management tool exists for their Haskell compiler). Should "cabal uninstall" provide a unified interface to some number of Haskell compiler packaging systems? It could but doesn't seem like a priority.
The frontend stuff is not a priority, but my point was not who should support uninstallation, but that it is necessary to support it, and that furthermore:
A related idea is that Gentoo had a tool which not only extracted such dependencies, but recompiled all affected packages.
If uninstalling packages replaced by other versions is supported at all, as on Gentoo, such a tool is important for Haskell, too. And uninstalling packages is sometimes needed. It is conceivable that after upgrading a package I want to get rid of the old version. In my case, I wanted to uninstall a package to "downgrade" it, because it caused breakage, and I wanted the packages depending on it to be recompiled. This is especially important since Haskell libraries don't have a stable ABI.

Additionally, I could have a security vulnerability in a Haskell library and want to guarantee an upgrade - not all vulnerabilities are C-specific (take cross-site scripting, SQL injection, bad handling of temporary files for setuid programs, bad validation of user input for a mail server...). Like this one:
http://www.amateurtopologist.com/2010/04/23/security-vulnerability-in-haskel...

ghc-pkg doesn't uninstall packages; it just deregisters them and doesn't remove the files. Indeed, there's already a ticket open for having 'cabal uninstall':
http://hackage.haskell.org/trac/hackage/ticket/234

I should just try adding a ticket for the above idea.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

On Saturday 11 September 2010 20:38:21, Paolo Giarrusso wrote:
Hi, after Andrew Coppin's odissey [1], I also had a few problem with cabal, which stopped installing new packages. Details will follow; meanwhile, here are some questions prompted by what happened:
- when recompiling a package with ABI changes, does cabal always update dependent packages? It seems "not always"
cabal will not automatically update packages which depend on an updated package. Since it doesn't keep a record of packages depending on foo, it would have to check all installed packages on every update to do that.
- it didn't update itself, nor refuse the breaking upgrade, and the ABI breakage caused linker errors on Setup.hs. Luckily, cabal was already linked so I could always use it for queries. - can cabal extract and check dependencies as specified in the .hi files?
No.
I had a broken dependency which I could see with "grep" but not otherwise.
ghc-pkg check?
- is there a "specification" of which are the "core" packages?
"core" as in *do not update*? Basically, what comes with GHC shouldn't be updated. Though I heard updating Cabal was okay.
Not sure if this is more useful as a bug report or as discussion, or if I just misused cabal and it's the only perfect software in the world; still I wanted to share my experience.
It's not perfect, and its guards against users breaking their package dbs aren't strong enough yet. But I wouldn't want to go back to a Haskell without it.
Also I am not sure if this is the best place - but I'm not subscribed to other lists, and the previous "cabal odissey" was posted here, so I hope it's fine.
Sure, it's about Haskell, so it's on topic here.

On Sat, Sep 11, 2010 at 21:43, Daniel Fischer
On Saturday 11 September 2010 20:38:21, Paolo Giarrusso wrote:
Hi, after Andrew Coppin's odissey [1], I also had a few problem with cabal, which stopped installing new packages. Details will follow; meanwhile, here are some questions prompted by what happened:
- when recompiling a package with ABI changes, does cabal always update dependent packages? It seems "not always"
cabal will not automatically update packages which depend on an updated package. Since it doesn't keep a record of packages depending on foo, it would have to check all installed packages on every update to do that.
ghc-pkg has such a dependency database, and cabal used it to correctly recompile other packages broken by an upgrade - but not the Cabal library, which resulted in linking errors for Setup.hs and made further upgrades impossible.
- it didn't update itself, nor refuse the breaking upgrade, and the ABI breakage caused linker errors on Setup.hs. Luckily, cabal was already linked so I could always use it for queries. - can cabal extract and check dependencies as specified in the .hi files?
No.
I had a broken dependency which I could see with "grep" but not otherwise.
ghc-pkg check?
I tried that, but it didn't notice any breakage. As far as I understand, a package won't depend on packages which were hidden during its build, and all available packages will be recorded as dependencies in the ghc-pkg DB. Yet, for some reason this did not work.
- is there a "specification" of which are the "core" packages?
"core" as in *do not update*?
Yep, as in the output of "cabal upgrade" ("Note that it is generally not recommended to upgrade core packages."), and in the Cabal FAQ I linked. Note that cabal install --help gives no warning, which makes a huge difference - I need to try "cabal upgrade" to learn what I shouldn't try to install.
Basically, what comes with GHC shouldn't be updated. Though I heard updating Cabal was okay.
I did update Cabal here to the latest version, while still using GHC 6.10.4 (I'll upgrade that later). My experience suggests that the answer is probably "ghc's and cabal's dependencies", and that cabal should have special handling for this case, as suggested in the other mail: recompile the core dependencies and any broken package, and only _then_ reinstall everything together. In my case, cabal's choices broke package upgrades, and I was installing just pandoc. I may have done something not so good before, but the final damage was cabal's choice.
Not sure if this is more useful as a bug report or as discussion, or if I just misused cabal and it's the only perfect software in the world; still I wanted to share my experience.
It's not perfect, and its guards against users breaking their package dbs aren't strong enough yet. But I wouldn't want to go back to a Haskell without it.
Agreed on both - I want my criticism to be constructive, even if I might not have phrased it perfectly. And when I admitted that maybe cabal was perfect and maybe I had used it badly, it might have sounded ironic, but I did mean it, even if I had tried my best to check that that wasn't the case.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

On Sat, Sep 11, 2010 at 22:29, Paolo Giarrusso
On Sat, Sep 11, 2010 at 21:43, Daniel Fischer
wrote:
I had a broken dependency which I could see with "grep" but not otherwise.
ghc-pkg check?
I tried that, but it didn't notice any breakage.
As far as I understand, a package won't depend on packages which were hidden during its build, and all available packages will be recorded as dependencies in the ghc-pkg DB. Yet, for some reason this did not work. Sorry for my phrasing - I was explaining my understanding of how dependencies are discovered, but it was unclear.
What I wanted to add is that I discovered this through an error similar to:

  <command line>: unknown package old-locale-1.0.0.2

where the mentioned package had just been unregistered. BTW, there's no obvious way to re-register a package, because the file to pass to ghc-pkg register is not saved on disk.

To debug it, I used cabal install -v, then I had to manually invoke ghc -v (because cabal install didn't pass -v to ghc), and then finally used grep, because the command line did not mention the offending package.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/
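Incidentally, a slightly less blunt alternative to grepping the binary .hi files is GHC's interface dump (the module path below is only an example):

  $ ghc --show-iface dist/build/Main.hi | grep old-locale

ghc --show-iface prints the interface in text form, including the package and module dependencies recorded in it, so a stale reference to an unregistered package such as old-locale-1.0.0.2 should show up in readable form.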

On 9/11/10 3:43 PM, Daniel Fischer wrote:
- is there a "specification" of which are the "core" packages?
"core" as in *do not update*? Basically, what comes with GHC shouldn't be updated. Though I heard updating Cabal was okay.
I tried updating Cabal once (recently) and it broke things in the same way. FWIW. -- Live well, ~wren

On Sep 12, 2:51 am, wren ng thornton
On 9/11/10 3:43 PM, Daniel Fischer wrote:
- is there a "specification" of which are the "core" packages?
"core" as in *do not update*? Basically, what comes with GHC shouldn't be updated. Though I heard updating Cabal was okay.
I tried updating Cabal once (recently) and it broke things in the same way. FWIW.
My Cabal was upgraded and is alive, but the bugs we see don't make the upgrade reliable. Most variations of the trap I hit would have killed cabal, period. Some could have killed GHC. And even in my case, resurrecting my old friend Cabal after his attempted suicide was not for the faint-hearted, and took much more time than letting him die and replacing him with some other friend, i.e. reinstalling the user packages from scratch - I just never allow my programmed friends to die, if I can avoid that. I value their liveness :-D.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

On 9/11/10 15:43, Daniel Fischer wrote:
On Saturday 11 September 2010 20:38:21, Paolo Giarrusso wrote:
- is there a "specification" of which are the "core" packages?
"core" as in *do not update*? Basically, what comes with GHC shouldn't be updated. Though I heard updating Cabal was okay.
Some consistency would be nice; IIRC GHC refers to them as "boot libraries".

-- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH

Hi all,
I would like to share a new very cool result given by the current
Cabal: if packages FOO, BAR and FOOBAR exist, such that FOO depends on
BAR which depends on FOOBAR, and all three are installed, you just
can't safely upgrade FOOBAR. When the same version of BAR is ever
recompiled against the new FOOBAR, its ABI might change and FOO _will_
thus break if you're not very lucky.
Call it the No Upgrade Theorem, if you want.
Note that upgrades of FOO and BAR can still be safe: the key
hypothesis is a dependency chain of 3 packages, and the thesis is that
the deepest of the three ones in the chain (FOOBAR in the above
example) can't be upgraded safely.
Not that it's necessarily Cabal's "fault", IMHO. Given this kind of
cross-module inlining, proper dependency handling (which includes what
we discussed) seems insanely complicated to get right. Of course,
Cabal can be fixed, but I would call that a major achievement.
On Sep 13, 12:53 am, Brandon S Allbery KF8NH
On 9/11/10 15:43 , Daniel Fischer wrote:
On Saturday 11 September 2010 20:38:21, Paolo Giarrusso wrote:
- is there a "specification" of which are the "core" packages?
"core" as in *do not update*? Basically, what comes with GHC shouldn't be updated. Though I heard updating Cabal was okay.
Some consistency would be nice; IIRC GHC refers to them as "boot libraries".
"Boot libraries" clearly refers to GHC bootstrapping. But with a statically linked GHC (like on my system at least, and maybe everywhere), GHC dependencies are irrelevant, only "transitive dependency closure of (those used by Cabal + those used by any programs (base, ghc-prim, rts, integer...))" are relevant. Which includes an additional package, "pretty", and excludes many other. To verify this theory, I upgraded "bytestring" (on which GHC but not Cabal depend), and I can still configure+build+copy packages. But of course, bytestring fulfills the hypothesis of the above theorem, so installing a package with enough dependencies might trigger actual breakage. That's however not because ghc depends on bytestring. Only Cabal is special, because if you break that you can't recompile anything any more, and because recompiling Cabal dependencies and Cabal takes special effort (as said elsewhere). That's why "core packages" should include just cabal (recursive) deps. Fixing the "No Upgrade Theorem" is easier, just recompile more stuff and/or give more warnings. -- Paolo Giarrusso - Ph.D. Student http://www.informatik.uni-marburg.de/~pgiarrusso/

Hi Paolo, Paolo Giarrusso wrote:
- when recompiling a package with ABI changes, does cabal always update dependent packages?
It never recompiles them. Recompilation should not be needed, because different versions of packages export different symbols, so a package can never be linked against the wrong version of its dependency. However, see the following tickets: http://hackage.haskell.org/trac/hackage/ticket/701 http://hackage.haskell.org/trac/hackage/ticket/704
Interestingly, HTTP, directory, process, zip-archive were not reinstalled, which confirms that Cabal had reinstalled them before just because of an upgrade to the dependencies.
I think you are misinterpreting this.

When you asked cabal-install to install pandoc, it tried to make a consistent install plan for all its transitive dependencies. cabal-install will not touch a package which is not a transitive dependency of the package you request to be installed. Therefore, cabal-install will not touch Cabal if you ask it to install pandoc.

To make a consistent install plan, cabal-install has to pick exactly one version number for each of the transitive dependencies, so that all version constraints of all transitive dependencies are fulfilled. For some reason, cabal-install picked old-locale-1.0.0.2 instead of the already installed old-locale-1.0.0.1, and newer versions of HTTP, directory etc. too.

I think this is the bug: cabal-install should not be allowed to install old-locale, because doing so apparently causes havoc. Looking at the inter-dependencies of pandoc's transitive dependencies, I do not see a reason to install a new version of a package instead of keeping the old. Maybe it's somehow related to the transition from base-3 to base-4? But I don't know how cabal-install decides which install plan to follow anyway.

Tillmann
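As a toy illustration of this "exactly one version per package name" condition (this is not cabal-install's actual solver, just a sketch of the consistency check it has to satisfy):

  import qualified Data.Map as Map
  import Data.List (nub)

  type PkgName = String
  type Version = [Int]

  -- All (name, version) requirements contributed by every package in the plan.
  type Requirements = [(PkgName, Version)]

  -- The plan is consistent iff, for each package name, all requirements
  -- agree on a single version.
  consistent :: Requirements -> Bool
  consistent reqs = all ((== 1) . length . nub) (Map.elems grouped)
    where
      grouped = Map.fromListWith (++) [ (n, [v]) | (n, v) <- reqs ]

  -- Mirroring the thread: two requirements on old-locale that disagree
  -- make the plan inconsistent.
  example :: Bool
  example = consistent [ ("old-locale", [1,0,0,1])   -- linked by old-time as installed
                       , ("old-locale", [1,0,0,2])   -- picked by the solver
                       ]                             -- ==> False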

Hi!
First, sorry for some confusion - I wanted to post further details
needed for the analysis in the mail I lost, and I had excluded them
from my summary to keep it short.
On Sun, Sep 12, 2010 at 15:26, Tillmann Rendel
Hi Paolo,
Paolo Giarrusso wrote:
- when recompiling a package with ABI changes, does cabal always update dependent packages?
It never recompiles them. Recompilation should not be needed, because different versions of packages exports different symbols, so a package can never be linked against the wrong version of its dependency.
= the one they needed. A "Hello World" program, using no recent functions, could thus be backportable even to an earlier version of
My problem was with reinstalling the same version of a package. Again,
sorry for the confusion (see above). cabal should then recompile (not
"update", strictly speaking) dependent packages:
Recompiling the same version of a package does not always yield the
same result, ABI-wise.
The problem was an ABI difference between two compilation results for
old-time-1.0.0.2, one linked against old-locale-1.0.0.1, the other
against version 1.0.0.2 of the same package. The ABI difference was
_probably_ due to different cross-module inlining decisions: the
disappeared symbols were:
oldzmtimezm1zi0zi0zi2_SystemziTime_a149_{closure,info},
i.e. (I assume) old-time-1.0.0.2-System.Time.a149. That name is
generated by GHC during optimization, and my educated guess is that it
is exported to allow cross-module inlining. In particular, a149 is
mentioned in the .hi interface of old-time's System.Time - <libs
dir>/old-time-1.0.0.2/System/Time.hi
It thus _seems_ that the ABI of a module is a (pure) function of the
versions of all its (transitive) dependencies, and their compilation
options - in particular, the optimization options and the used
compiler version.
More formally, my conjecture is:

- let us represent "package a depends on b" as "a =>> b", where a, b are package names tagged with a version;
- let =>>* be the transitive-reflexive closure of =>>.

We need to compute:

  DEPS(p)        = { q | p =>>* q }
  TAGGED_DEPS(p) = { (q, compilation_opts(q)) | q ∈ DEPS(p) }

where compilation_opts(q) are the compilation options used to compile package q. Then the ABI of a module is a pure function of TAGGED_DEPS(p), not just of p. My experience proves, at least, that the ABI does not depend just on p.
And since, after the discussion up-to-now, it turns out that this is
unexpected, I reported this as a bug:
http://hackage.haskell.org/trac/hackage/ticket/738
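A minimal Haskell rendering of the conjecture above (directDeps, compilationOpts and abi are hypothetical placeholders, introduced only to state the claim precisely):

  import qualified Data.Set as Set

  type PkgId = (String, [Int])   -- a package name tagged with its version
  type Opts  = [String]          -- compilation options, e.g. ["-O2"]

  -- Assumed inputs: the direct "=>>" relation and the per-package options.
  directDeps      :: PkgId -> Set.Set PkgId
  directDeps      = undefined
  compilationOpts :: PkgId -> Opts
  compilationOpts = undefined

  -- DEPS(p): reflexive-transitive closure of =>> starting from p.
  deps :: PkgId -> Set.Set PkgId
  deps p = go (Set.singleton p)
    where
      go seen
        | Set.null new = seen
        | otherwise    = go (Set.union seen new)
        where
          new = Set.unions (map directDeps (Set.toList seen))
                  `Set.difference` seen

  -- TAGGED_DEPS(p): each transitive dependency paired with its options.
  taggedDeps :: PkgId -> Set.Set (PkgId, Opts)
  taggedDeps p = Set.map (\q -> (q, compilationOpts q)) (deps p)

  -- The conjecture then reads: the ABI of p is a pure function of
  -- taggedDeps p, i.e. taggedDeps p1 == taggedDeps p2 implies abi p1 == abi p2.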
Another result of the above is that a mere "cabal install --reinstall
-O2 FooPackage", recompiling the same version of FooPackage with -O2
instead of the default (-O1), requires recompiling all packages which
depend on (i.e. use symbols from) FooPackage; recompiling them
implies recompiling packages depending on them, and so on, in a
"transitively closed" way.
However, see the following tickets:
http://hackage.haskell.org/trac/hackage/ticket/701 http://hackage.haskell.org/trac/hackage/ticket/704
Had a look, thanks - but they do not apply here, the problem is with Haskell symbols.
Interestingly, HTTP, directory, process, zip-archive were not reinstalled, which confirms that Cabal had reinstalled them before just because of an upgrade to the dependencies.
I think you are misinterpreting this.
When you asked cabal-install to install pandoc, it tried to make a consistent install plan for all its transitive dependencies. cabal-install will not touch a package which is not a transitive dependency of the package you request to be installed. Therefore, cabal-install will not touch Cabal if you ask it to install pandoc.
Thank you very much for the explanation. Note that the policy you describe is not necessarily "the right one"; it is just a choice which can work, with the fixes you propose (read on for alternatives).
To make a consistent install plan, cabal-install has to pick exactly one version number for each of the transitive dependencies, so that all version constraints of all transitive dependencies are fullfilled. For some reason, cabal-install picked old-locale-1.0.0.2 instead of the already installed old-locale-1.0.0.1, and newer versions of HTTP, directory etc. too.
My case was slightly different: old-locale-1.0.0.2 (user level) and 1.0.0.1 (system level) were already present. old-time-1.0.0.2 was installed at the system level, and linked against the older, system-level old-locale. So, while no dependency was broken, old-locale had been upgraded, by me or by cabal on its own. Even if it was my fault, only "cabal upgrade" gives a warning, while "cabal install --help" doesn't mention the issue, which is "suboptimal" :-D. And of course, cabal should prevent a user from shooting himself in the foot, but that's a more advanced feature.

A consistent plan for installing pandoc would have been either to compile it using the older old-locale, or to recompile old-time against the new version. Sadly, cabal chose the latter alternative, since it is not obviously wrong, and this broke everything except the packages on which pandoc depended: they were recompiled because of the changed dep. I later forced the other install plan, with some pain, but it works.

So cabal's behavior is inconsistent: it knows that recompiling a package means recompiling the packages depending on it, but it should either do so also for packages unrelated to the one being installed, or it should not recompile any package (as you suggest).
I think this is the bug: cabal-install should not be allowed to install old-locale, because doing so apparantly causes havoc.
Lacking proper dependency handling (which _is_ not trivial), yes. And it's the best current choice. But one can recompile and upgrade, on a running system and without downtime or glitches, both the C standard library and libraries without stable ABIs (see Gentoo Linux). Thus, upgrading old-locale and automatically rebuilding the packages depending on it would be a sensible feature. It just should not happen unless the user requests a full upgrade.

Finally, while old-locale is a core package and one can restrict its upgrades, if I recompile non-core package A and non-core package B breaks, that's still "havoc" - the only difference is that B is not cabal, so the bug is less severe and you can recompile B manually. OTOH, fixing this less severe and more complicated bug, via transitively closed dependency tracking as described above, helps support complete upgrades.
Looking at the inter-dependencies of pandoc's transitive dependencies, I do not see a reason to install a new version of a package instead of keeping the old.
Maybe it's somehow related to the transition from base-3 to base-4?

The newer old-locale was already installed - see my description above of the two possible install plans.
For the transition, I had added:

  preference: base >= 4

based on a suggestion from somebody on haskell-cafe and the resulting discussion.

-- Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/
participants (11)
- Brandon S Allbery KF8NH
- Claus Reinke
- Daniel Fischer
- Duncan Coutts
- Jason Dagit
- Paolo G. Giarrusso
- Paolo Giarrusso
- Ross Paterson
- Thomas DuBuisson
- Tillmann Rendel
- wren ng thornton