
You need a way to specify "foo > 1.2 && foo < 2", which is a suggestion that was tossed around here recently.
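For reference, that constraint can be written directly in a .cabal file's dependency field; if I remember Cabal's constraint syntax correctly, with `foo` standing in for the package in question, it looks like:

```
build-depends: foo > 1.2 && < 2
```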
but what does such a version range say? that i haven't tested any versions outside the range (because they didn't exist when i wrote my package)? or that i have tested them, and know that later versions won't work?
Also, you'd need foo 1.x for x>=2 to be available after foo-2.0 arrives.
indeed. available, and selectable. so the package manager needs to be able to tell which package versions can be used to fulfill which dependencies. if that decision is based on version numbers alone, we need to be specific about the meaning of version numbers in dependencies. and if the major/minor scheme is to be interpreted as Simon summarised, the only acceptable form of a dependency is an explicit version range (the range of versions known to work). which means that package descriptions have to be revisited (manually) and updated (after inspection) as time goes on.

so we seem to be stuck with a choice between breaking packages randomly (because version numbers were too imprecise to prevent breakage across dependency updates) or having packages unable to compile (because version numbers were needlessly conservative, and newer dependencies that may be acceptable in practice are not listed). neither option sounds promising to me (instead of the package manager managing, it only keeps a record while i have to do the work), so i wonder why everyone else claims to be happy with the status quo?
'base' aside, I don't think we want a system that requires us to rename a library any time incompatible changes are introduced.
i was talking only about the base split, as far as renaming is concerned. but i still don't think the interpretations and conventions of general haskell package versioning have been pinned down sufficiently. and i still see lots of issues in current practice, even after assuming some common standards.
The major/minor scheme has worked nicely for .so for ages.
i'm not so sure about that. it may be better than alternatives, but it includes standards of common practice, interpretation, and workarounds (keep several versions of a package, have several possible locations for packages, renumber packages to bridge gaps or to fake unavailable versions, re-export functionality from specific package versions as generic ones, ...). and i don't think cabal packages offer all the necessary workarounds, even though they face all the same issues.
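to make the .so workarounds concrete, the usual linker-name/soname/real-name chain of symlinks looks roughly like this (a conventional example, not taken from any particular system):

```
libfoo.so -> libfoo.so.1          # linker name, used at link time
libfoo.so.1 -> libfoo.so.1.2.3    # soname, bumped only on incompatible (major) changes
libfoo.so.1.2.3                   # real file; minor/patch updates replace it in place
```

several such chains for different major versions can coexist, which is part of what makes the scheme workable in practice.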
how about using a provides/expects system instead of betting on version numbers? if a package X expects the functionality of base-1.0, cabal would go looking not for packages that happen to share the name, but for packages that provide the functionality. base-not-1.0 would know that it doesn't do that. and if there is no single package that reexports the functionality of base-1.0, cabal could even try to consult multiple packages to make ends meet.
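a sketch of what such package descriptions might look like. note that `expects` and `provides` are hypothetical field names for the sake of illustration, not actual cabal fields:

```
-- in X.cabal (hypothetical syntax):
expects: base-1.0

-- in the description of some later base, or of a compatibility
-- package that reexports the old api (hypothetical syntax):
provides: base-1.0, base-2.0
```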
Scrap cabal in favor of 'ghc --make'? :-)
ultimately, perhaps that is something to aim for. i was thinking of a simpler form, though, just liberating the provider side a bit:

- currently, every package provides its own version only; it is the dependent's duty to figure out which providers may or may not be suitable; this introduces a lot of extra work, and means that no package is ever stable - even if nothing in the package changes, you'll have to keep checking and updating the dependencies!

- instead, i suggest that every package can stand for a range of versions, listing all those versions it is compatible with; that way, the dependent only needs to specify one version, and it becomes the provider's duty to check and specify which api uses it is compatible with (for instance, if a package goes through several versions because of added features, it will still be usable with its initial, limited api).

of course, if you refine that simple idea, you get to typed interfaces as formally checkable specifications of apis (as in the MLs, for instance). and then you'd need something like 'ghc --make' or 'ghc -M' to figure out the precise interface a package depends on, and to provide a static guarantee that some collection of packages will provide those dependencies.
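the provider-side version of the idea might look like this. the `compatible-with` field is hypothetical; the `build-depends` line below it is ordinary cabal syntax:

```
-- foo-1.3's description declares, as foo's author checks each release,
-- which older apis it can still stand in for (hypothetical field):
compatible-with: foo-1.0, foo-1.1, foo-1.2

-- a dependent then only ever names the one version it was built against:
build-depends: foo == 1.0
```

the point being that the compatibility record then lives with the package that changes, not with every package that depends on it.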
Seriously though, how hard would it be to automatically generate a (suggested) build-depends from ghc --make?
i'd like to see that, probably from within ghci. currently, you'd have to load your project's main module, then capture the 'Loading package ...' lines. there is a patch pending for ghci head which would give us a ':show packages' command.

claus
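a rough sketch of the capture step. it runs over a canned 'Loading package' line, since the exact wording of ghc's verbose output varies between versions; the sed pattern and the `==` constraint it emits are my guesses at a reasonable rewrite, not an established tool:

```shell
# in practice you would pipe ghci's verbose output through the sed step, e.g.
#   echo ':quit' | ghci -v1 Main.hs 2>&1 | sed -n '...'
# here a canned line stands in for that output, to show the rewrite itself:
printf 'Loading package base-2.0 ... linking ... done.\n' \
  | sed -n 's/^Loading package \([A-Za-z0-9-]*\)-\([0-9.]*\) .*/build-depends: \1 == \2/p'
# prints: build-depends: base == 2.0
```

collecting such lines for every package mentioned would give the "suggested build-depends" asked about above.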