Re: [Haskell] Re: Trying to install binary-0.4

Claus Reinke wrote:
but calling "split-base" "base" goes directly against all basic assumptions of all packages depending on "base".
The new base will have a new version number. There is no expectation of compatibility when the major version is bumped; but we do have an informal convention that minor version bumps only add functionality, and sub-minor version bumps don't change the API at all.

So a package that depends on 'base' (with no upper version bound) *might* be broken in GHC 6.8.1, depending on which modules from base it actually uses. Let's look at the other options:

- if we rename base, the package will *definitely* be broken
- if the package specified an upper bound on its base dependency, it will *definitely* be broken

In the design we've chosen, some packages continue to work without change. Specifying a dependency on a package without giving an explicit version range is a bet: sometimes it wins, sometimes it doesn't. The nice thing is that we have most of our packages in one place, so we can easily test which ones are broken and notify the maintainers and/or fix them.

Another reason not to change the name of 'base' is that there would be a significant cost to doing so: the name is everywhere, not just in the source code of GHC and its tools, but wiki pages, documentation, and so on. Yes I know we've changed other names - very little in packaging is clear-cut.

Cheers,
Simon

but calling "split-base" "base" goes directly against all basic assumptions of all packages depending on "base".
The new base will have a new version number. There is no expectation of compatibility when the major version is bumped; but we do have an informal convention that minor version bumps only add functionality, and sub-minor version bumps don't change the API at all.
if this is the "official" interpretation of cabal package version numbers, could it please be made explicit in a prominent position in the cabal docs? of course, i have absolutely no idea how to write stable packages under this interpretation. and the examples in the cabal docs do not explain this, either (neither "bar" nor "foo > 1.2" are any good under this interpretation).
So a package that depends on 'base' (with no upper version bound) *might* be broken in GHC 6.8.1, depending on which modules from base it actually uses. Let's look at the other options:
- if we rename base, the package will *definitely* be broken
- if the package specified an upper bound on its base dependency, it will *definitely* be broken
why do you omit the most popular (because most obvious to users) option?

- if base remains what it is and a new package is created providing the rest of base after the split, then every user is happy

(that it is currently hard to implement this by reexporting the split packages as base is no excuse)
In the design we've chosen, some packages continue to work without change.
Specifying a dependency on a package without giving an explicit version range is a bet: sometimes it wins, sometimes it doesn't. The nice thing is that we have most of our packages in one place, so we can easily test which ones are broken and notify the maintainers and/or fix them.
sorry, i don't want to turn package management into a betting system. and i don't see how knowing how much is broken (so cabal can now only work with central hackage?) is any better than avoiding such breakage in the first place. cabal is fairly new and still under development, so there is no need to build in assumptions that are sure to cause grief later (and indeed are doing so already).
Another reason not to change the name of 'base' is that there would be a significant cost to doing so: the name is everywhere, not just in the source code of GHC and its tools, but wiki pages, documentation, and so on.
but the name that is everywhere does not stand for what the new version provides! any place that is currently referring to 'base' will have to be inspected to check whether it will or will not work with the reduced base package. and any place that is known to work with the new base package might as well make that clear, by using a different name.
Yes I know we've changed other names - very little in packaging is clear-cut.
how about using a provides/expects system instead of betting on version numbers? if a package X expects the functionality of base-1.0, cabal would go looking not for packages that happen to share the name, but for packages that provide the functionality. base-not-1.0 would know that it doesn't do that. and if there is no single package that reexports the functionality of base-1.0, cabal could even try to consult multiple packages to make ends meet (provided that someone told it that 'expects: base' can be met by 'provides: rest-base containers ..'). claus
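the provides/expects idea above could be modelled in a few lines of Haskell. this is a toy illustration only - the `resolve` function, the package names, and the `compat-base-1.0` shim are all hypothetical, and nothing like this exists in cabal:

```haskell
-- A toy model of provides/expects resolution: a package declares which
-- named interfaces it can stand in for, and resolution matches an
-- expected interface against providers, not against package names.
import Data.List (find)

data Pkg = Pkg
  { pkgName  :: String
  , provides :: [String]  -- interfaces this package can stand in for
  } deriving Show

-- Find some package that provides the expected interface, regardless
-- of what the package itself happens to be called.
resolve :: [Pkg] -> String -> Maybe Pkg
resolve pool expected = find (\p -> expected `elem` provides p) pool

pool :: [Pkg]
pool =
  [ Pkg "base-3.0"        ["base-3.0"]             -- the shrunken base
  , Pkg "containers-0.1"  ["containers-0.1"]
  , Pkg "compat-base-1.0" ["base-1.0", "base-2.0"] -- a re-export shim
  ]

main :: IO ()
main = do
  print (fmap pkgName (resolve pool "base-1.0"))  -- found via the shim
  print (fmap pkgName (resolve pool "base-9.9"))  -- nothing provides it
```

with this reading, 'expects: base-1.0' is met by whatever package happens to re-export that api, which is exactly the "rest-base" scenario described above.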

"Claus Reinke"
if this is the "official" interpretation of cabal package version numbers, could it please be made explicit in a prominent position in the cabal docs?
Me too. This is not a criticism nor endorsement of any particular scheme, just a vote in favor of having a - one, single, universal - scheme.
of course, i have absolutely no idea how to write stable packages under this interpretation. and the examples in the cabal docs do not explain this, either (neither "bar" nor "foo > 1.2" are any good under this interpretation).
You need a way to specify "foo > 1.2 && foo < 2", which is a suggestion that was tossed around here recently. Also, you'd need foo 1.x for x>=2 to be available after foo-2.0 arrives. 'base' aside, I don't think we want a system that requires us to rename a library any time incompatible changes are introduced. The major/minor scheme has worked nicely for .so for ages. I'd like to make the additional suggestion that a major version number of 0 means no compatibility guarantees.
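The range described above can be written down directly with Data.Version from base. This is a sketch of the semantics only - `inRange` and `wanted` are made-up names, and real Cabal constraint syntax differs:

```haskell
-- Model "foo > 1.2 && foo < 2": strictly newer than 1.2, but still in
-- the 1.x series. Data.Version's Ord instance compares component-wise.
import Data.Version (Version, makeVersion)

inRange :: Version -> Version -> Version -> Bool
inRange lo hi v = v > lo && v < hi

-- The constraint from the text: foo > 1.2 && foo < 2
wanted :: Version -> Bool
wanted = inRange (makeVersion [1,2]) (makeVersion [2])

main :: IO ()
main = mapM_ (print . wanted . makeVersion) [[1,2], [1,3], [2,0]]
-- prints False, True, False
```

Note that under the major/minor convention this accepts any 1.x release after 1.2 (which should only add functionality) and rejects 2.0 (which may break the interface).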
Another reason not to change the name of 'base' is that there would be a significant cost to doing so: the name is everywhere, not just in the source code of GHC and its tools, but wiki pages, documentation, and so on.
Much like 'fps', now known as 'bytestring', no? I had some problems finding it, true, but the upside is that old stuff is free to reference fps until I can get around to test and update things.
how about using a provides/expects system instead of betting on version numbers? if a package X expects the functionality of base-1.0, cabal would go looking not for packages that happen to share the name, but for packages that provide the functionality. base-not-1.0 would know that it doesn't do that. and if there is no single package that reexports the functionality of base-1.0, cabal could even try to consult multiple packages to make ends meet
Scrap cabal in favor of 'ghc --make'? :-) Seriously though, how hard would it be to automatically generate a (suggested) build-depends from ghc --make? -k -- If I haven't seen further, it is by standing in the footprints of giants

You need a way to specify "foo > 1.2 && foo < 2", which is a suggestion that was tossed around here recently.
but what does such a version range say? that i haven't tested any versions outside the range (because they didn't exist when i wrote my package)? or that i have, and know that later versions won't do?
Also, you'd need foo 1.x for x>=2 to be available after foo-2.0 arrives.
indeed. available, and selectable. so the package manager needs to be able to tell which package versions can be used to fulfill which dependencies. if that decision is based on version numbers alone, we need to be specific about the meaning of version numbers in dependencies. and if the major/minor scheme is to be interpreted as Simon summarised, the only acceptable form of a dependency is an explicit version range (the range of versions known to work). which means that package descriptions have to be revisited (manually) and updated (after inspection) as time goes on.

so we seem to be stuck with a choice between breaking packages randomly (because version numbers were too imprecise to prevent breakage across dependency updates) or having packages unable to compile (because version numbers were needlessly conservative, and newer dependencies that may be acceptable in practice are not listed). neither option sounds promising to me (instead of the package manager managing, it only keeps a record while i have to do the work), so i wonder why everyone else claims to be happy with the status quo?
'base' aside, I don't think we want a system that requires us to rename a library any time incompatible changes are introduced.
i was talking only about the base split, as far as renaming is concerned. but i still don't think the interpretations and conventions of general haskell package versioning have been pinned down sufficiently. and i still see lots of issues in current practice, even after assuming some common standards.
The major/minor scheme has worked nicely for .so for ages.
i'm not so sure about that. it may be better than alternatives, but it includes standards of common practice, interpretation, and workarounds (keep several versions of a package, have several possible locations for packages, renumber packages to bridge gaps or to fake unavailable versions, re-export functionality from specific package versions as generic ones, ...). and i don't think cabal packages offer all the necessary workarounds, even though they face all the same issues.
how about using a provides/expects system instead of betting on version numbers? if a package X expects the functionality of base-1.0, cabal would go looking not for packages that happen to share the name, but for packages that provide the functionality. base-not-1.0 would know that it doesn't do that. and if there is no single package that reexports the functionality of base-1.0, cabal could even try to consult multiple packages to make ends meet
Scrap cabal in favor of 'ghc --make'? :-)
ultimately, perhaps that is something to aim for. i was thinking of a simpler form, though, just liberating the provider side a bit:

- currently, every package provides its own version only; it is the dependent's duty to figure out which providers may or may not be suitable; this introduces a lot of extra work, and means that no package is ever stable - even if nothing in the package changes, you'll have to keep checking and updating the dependencies!

- instead, i suggest that every package can stand for a range of versions, listing all those versions it is compatible with; that way, the dependent only needs to specify one version, and it becomes the provider's duty to check and specify which api uses it is compatible with (for instance, if a package goes through several versions because of added features, it will still be usable with its initial, limited api).

of course, if you refine that simple idea, you get to typed interfaces as formally checkable specifications of apis (as in the ML's, for instance). and then you'd need something like 'ghc --make' or 'ghc -M' to figure out the precise interface a package depends on, and to provide a static guarantee that some collection of packages will provide those dependencies.
Seriously though, how hard would it be to automatically generate a (suggested) build-depends from ghc --make?
i'd like to see that, probably from within ghci. currently, you'd have to load your project's main module, then capture the 'loading package ...' lines. there is a patch pending for ghci head which would give us a ':show packages' command. claus

On Mon, Oct 15, 2007 at 10:57:48PM +0100, Claus Reinke wrote:
so i wonder why everyone else claims to be happy with the status quo?
We aren't happy with the status quo. Rather, we know that no matter how much we do, the situation will never improve, so most of us have stopped wasting our time. Furthermore, we know that people who DO offer alternatives instantly lose all public credibility - look at what happened to Alex Jacobson.

Stefan (who will readily admit his bleak outlook)

stefanor:
On Mon, Oct 15, 2007 at 10:57:48PM +0100, Claus Reinke wrote:
so i wonder why everyone else claims to be happy with the status quo?
We aren't happy with the status quo. Rather, we know that no matter how much we do, the situation will never improve, so most of us have stopped wasting our time. Furthermore, we know that people who DO offer alternatives instantly lose all public credibility - look at what happened to Alex Jacobson.
Stefan (who will readily admit his bleak outlook)
Be happy: we're about 15 years ahead of the lisp guys. 'cabal install xmonad' works, for example. -- Don

Don Stewart wrote:
stefanor:
On Mon, Oct 15, 2007 at 10:57:48PM +0100, Claus Reinke wrote:
so i wonder why everyone else claims to be happy with the status quo?

We aren't happy with the status quo. Rather, we know that no matter how much we do, the situation will never improve, so most of us have stopped wasting our time. Furthermore, we know that people who DO offer alternatives instantly lose all public credibility - look at what happened to Alex Jacobson.
Stefan (who will readily admit his bleak outlook)
Be happy: we're about 15 years ahead of the lisp guys. 'cabal install xmonad' works, for example.
-- Don
And that, I think, will be the key to the solution. Keeping the repository of interdependent libraries consistent is hard, but it is only a means to a goal. That goal is applications, not libraries. My definition of the right version of libFoo to use is whatever is needed to make an application, such as xmonad, work. -- Chris

Be happy: we're about 15 years ahead of the lisp guys. 'cabal install xmonad' works, for example.
- not on windows (and since it is popular, it will seduce more good haskellers not to bother with windows compatibility.. :-(

- from xmonad.cabal (version 0.3, from hackage):

    build-depends: base>=2.0, X11>=1.2.1, X11-extras>=0.3, mtl>=1.0, unix>=1.0

so, you guarantee that it will work with base-3.0, X11-2.0, X11-extras-1.0, mtl-2.0, unix-2.0. even though all of those will -if i now understand the versioning intentions correctly- lack features of the current versions?

claus

"Claus Reinke"
You need a way to specify "foo > 1.2 && foo < 2", which is a suggestion that was tossed around here recently.
but what does such a version range say? that i haven't tested any versions outside the range (because they didn't exist when i wrote my package)? or that i have, and know that later versions won't do?
IMO, it says that it works with interface version 1, and needs some stuff from sublevel 2, and as long as the foo developers keep their end of the bargain, it will continue to work with new releases in the 1-series. For foo-2, the interface may change, and all bets are off. The dependency could be expressed in a more succinct (albeit less flexible) manner with a different syntax (e.g. "foo-1.2").
if that decision is based on version numbers alone, we need to be specific about the meaning of version numbers in dependencies.
Yes.
and if the major/minor scheme is to be interpreted as Simon summarised, the only acceptable form of a dependency is an explicit version range (the range of versions known to work).
I'm happy with "expected to work".
The major/minor scheme has worked nicely for .so for ages.
i'm not so sure about that. it may be better than alternatives, but [..]
Also, it sees a lot of testing, at least in current Linux distributions. The point is that the end-user experience is pretty good. -k -- If I haven't seen further, it is by standing in the footprints of giants

On Oct 16, 2007, at 4:21 , Ketil Malde wrote:
The major/minor scheme has worked nicely for .so for ages.
i'm not so sure about that. it may be better than alternatives, but [..]
Also, it sees a lot of testing, at least in current Linux distributions. The point is that the end-user experience is pretty good.
Except it doesn't, quite; note how many packages have started embedding the version in the soname (e.g. foo-1.2.so.*). -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

Claus Reinke wrote:
Simon Marlow wrote:
Another reason not to change the name of 'base' is that there would be a significant cost to doing so: the name is everywhere, not just in the source code of GHC and its tools, but wiki pages, documentation, and so on.
but the name that is everywhere does not stand for what the new version provides! any place that is currently referring to 'base' will have to be inspected to check whether it will or will not work with the reduced base package. and any place that is known to work with the new base package might as well make that clear, by using a different name.
base changed its API between 2.0 and 3.0, that's all. The only difference between what happened to the base package between 2.0 and 3.0 and other packages is the size of the changes. In fact, base 3.0 provides about 80% of the same API as version 2.0. Exactly what percentage change should in your opinion require changing the name of the package rather than just changing its version number? Neither 0% nor 100% are good choices... packaging is rarely clear-cut!

Cheers,
Simon

but the name that is everywhere does not stand for what the new version provides! any place that is currently referring to 'base' will have to be inspected to check whether it will or will not work with the reduced base package. and any place that is known to work with the new base package might as well make that clear, by using a different name.
base changed its API between 2.0 and 3.0, that's all. The only difference between what happened to the base package between 2.0 and 3.0 and other packages is the size of the changes. In fact, base 3.0 provides about 80% of the same API as version 2.0.
so it is not just an api extension, nor an api modification with auxiliary definitions to preserve backwards api compatibility, nor a deprecation warning for api features that may disappear in the distant future; it is an api shrinkage - features that used to be available from dependency 'base' no longer are. and it isn't just any package, it is 'base'!

the decision to make the difference visible in package names was made when subpackages were created from the old base. if cabal can handle multiple versions of base coexisting, and can guess which version was meant when someone wrote 'base', then no renaming is necessary. but if cabal can't handle that (yet), then renaming might be a workaround, to avoid more trouble.

if ghc told me that "expected type 'base' doesn't match inferred type 'base'", i'd file a bug report. why do we throw out such standards when grouping modules into packages?
Exactly what percentage change should in your opinion require changing the name of the package rather than just changing its version number? Neither 0% nor 100% are good choices... packaging is rarely clear-cut!
then we should ask: why not? it seems to be a standard type system problem: either we have no subtyping, then the types/versions/apis must match precisely, however inconvenient that might be, or we have subtyping, then we need to define what we want it to mean that one package version may be used instead of another. just having names and numbers and schemes that give no guarantees that matches imply compatibility is no solution.

i don't want a package manager that tells me: "congratulations! your package is 88.745% likely to be buildable, it provides between 45% and 95% of the features your package spec promises (since all promises are informal, no precise validation is possible, but most users should be happy), provided that our dependencies really do provide all the features we depend on (i have no idea what those features might be). go ahead and publish it. let others clean up the mess. oh, and remember to come back every couple of months or so to clean up the mess made by those providing your package's dependencies.".

of course, cabal doesn't even tell me that. it lets me publish anything (shouldn't there be a './Setup check' to validate? or is there?) and only gets involved when people try to build what i published, usually months later, when anything might happen (depending on how good my package spec was, and on what happened to the dependencies in the meantime), followed by someone chasing me, then me chasing someone else, or someone giving up. is this too bleak a view?-)

claus

Claus Reinke wrote:
if this is the "official" interpretation of cabal package version numbers, could it please be made explicit in a prominent position in the cabal docs?
Yes - I think it would be a good idea to make that convention explicit somewhere (I'm sure we've talked about it in the past, but I can't remember what happened if anything). However, I'd like to separate it from Cabal. Cabal provides mechanism not policy, regarding version numbers.
of course, i have absolutely no idea how to write stable packages under this interpretation. and the examples in the cabal docs do not explain this, either (neither "bar" nor "foo > 1.2" are any good under this interpretation).
base >= 2.0 && < 3.0

I believe Cabal is getting (or has got?) some new syntax to make this simpler.
why do you omit the most popular (because most obvious to users) option?
- if base remains what it is and a new package is created providing the rest of base after the split, then every user is happy (that it is currently hard to implement this by reexporting the split packages as base is no excuse)
Omitted only because it isn't implemented. Well, it is implemented, on my laptop, but I'm not happy with the design yet.
In the design we've chosen, some packages continue to work without change.
Specifying a dependency on a package without giving an explicit version range is a bet: sometimes it wins, sometimes it doesn't. The nice thing is that we have most of our packages in one place, so we can easily test which ones are broken and notify the maintainers and/or fix them.
sorry, i don't want to turn package management into a betting system. and i don't see how knowing how much is broken (so cabal can now only work with central hackage?) is any better than avoiding such breakage in the first place.
cabal is fairly new and still under development, so there is no need to build in assumptions that are sure to cause grief later (and indeed are doing so already).
what assumptions does Cabal build in?
Yes I know we've changed other names - very little in packaging is clear-cut.
how about using a provides/expects system instead of betting on version numbers? if a package X expects the functionality of base-1.0, cabal would go looking not for packages that happen to share the name, but for packages that provide the functionality.
Using the version number convention mentioned earlier, "base-1.0" functionality is provided by base-1.0.* only. A package can already specify that explicitly.

I think what you're asking for is more than that: you want us to provide base-1.0, base-2.0 and base-3.0 at the same time, so that old packages continue to work without needing to be updated. That is possible, but much more work for the maintainer. Ultimately when things settle down it might make sense to do this kind of thing, but right now I think an easier approach is to just fix packages when dependencies change, and to identify sets of mutually-compatible packages (we've talked about doing this on Hackage before).

Cheers,
Simon
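Reading "base-1.0.*" as a wildcard on the first two version components, the matching rule can be stated in one line. This is illustrative only - `sameSeries` is a made-up name, and Cabal's actual wildcard handling may differ:

```haskell
import Data.List (isPrefixOf)

-- A version belongs to the a.b.* series iff its components start with [a,b].
sameSeries :: [Int] -> [Int] -> Bool
sameSeries series v = series `isPrefixOf` v

main :: IO ()
main = do
  print (sameSeries [1,0] [1,0,2])  -- True: base-1.0.2 provides base-1.0
  print (sameSeries [1,0] [2,0])    -- False: base-2.0 does not
```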

However, I'd like to separate it from Cabal. Cabal provides mechanism not policy, regarding version numbers.
but the examples in the cabal docs should reflect the standard interpretation of version numbers.
of course, i have absolutely no idea how to write stable packages under this interpretation. and the examples in the cabal docs do not explain this, either (neither "bar" nor "foo > 1.2" are any good under this interpretation).
base >= 2.0 && < 3.0
that only works if older versions of base are kept side by side with base >= 3.0. otherwise, any package with that range will refuse to build (which may be better than failing to build), even though nothing in that package has changed, and all the features it depends on are still available.
Omitted only because it isn't implemented. Well, it is implemented, on my laptop, but I'm not happy with the design yet.
i look forward to hearing more. here, you say you are working on an implementation; earlier, you said that re-exporting modules via several packages was not the way forward.
cabal is fairly new and still under development, so there is no need to build in assumptions that are sure to cause grief later (and indeed are doing so already).
what assumptions does Cabal build in?
its documentation is not very precise about what version numbers mean. going by the examples, i thought that 'base' was an acceptable dependency, but it isn't. i also assumed that lower bounds (foo > 1.2) could be relied on, but they can't. perhaps i'm the only one reading the cabal docs this way, but i feel misled!-) and even if i translate your versioning scheme into cabal dependencies, i end up with explicit version ranges as the only valid option, so the assumption becomes that every package *will* break as its dependencies move on.
how about using a provides/expects system instead of betting on version numbers? if a package X expects the functionality of base-1.0, cabal would go looking not for packages that happen to share the name, but for packages that provide the functionality.
Using the version number convention mentioned earlier, "base-1.0" functionality is provided by base-1.0.* only. A package can already specify that explicitly.
not entirely correct. you said that major versions implied api changes. that does not imply that the api is no longer backwards compatible, only that there are sufficiently substantial new features that a version naming them seems called for. while base breaks backwards compatibility, other packages might not do so. and cabal does not allow me to specify anything but a name and a range of numbers as dependencies (there is exposed-modules:, but no imported-modules:), so i can't say which parts of base-1.0 my package depends on, and cabal can't decide which versions of base might be compatible with those more specific dependencies.
I think what you're asking for is more than that: you want us to provide base-1.0, base-2.0 and base-3.0 at the same time, so that old packages continue to work without needing to be updated. That is possible, but much more work for the maintainer. Ultimately when things settle down it might make sense to do this kind of thing, but right now I think an easier approach is to just fix packages when dependencies change, and to identify sets of mutually-compatible packages (we've talked about doing this on Hackage before).
yes. it's called automatic memory management!-) as long as there's a package X depending on package Y-a.b, package Y-a.b should not disappear. not having to waste time on such issues is one reason why programmers are supposed to prefer haskell over non-functional languages, right?-) claus

On Tuesday 16 October 2007 11:45, Claus Reinke wrote:
how about using a provides/expects system instead of betting on version numbers? if a package X expects the functionality of base-1.0, cabal would go looking not for packages that happen to share the name, but for packages that provide the functionality.
Using the version number convention mentioned earlier, "base-1.0" functionality is provided by base-1.0.* only. A package can already specify that explicitly.
not entirely correct. you said that major versions implied api changes. that does not imply that the api is no longer backwards compatible, only that there are sufficiently substantial new features that a version naming them seems called for. while base breaks backwards compatibility, other packages might not do so.
and cabal does not allow me to specify anything but a name and a range of numbers as dependencies (there is exposed-modules:, but no imported-modules:), so i can't say which parts of base-1.0 my package depends on, and cabal can't decide which versions of base might be compatible with those more specific dependencies.
I've been giving only cursory attention to this thread so I might have the wrong end of the stick, or indeed the entirely wrong shrub.

If the convention for modifying package versions of form x.y.z is:
- increment z for bugfixes/changes that don't alter the interface
- increment y for changes that consist solely of additions to the interface, parts of the interface may be marked as deprecated
- increment x for changes that include removal of deprecated parts of the interface
- (optionally) x == 0 => no guarantee

and package maintainers are rigorous in following these rules then specifying dependencies as foo-x, foo-x.y, foo-x.y.z should be sufficient. This rigour could largely be enforced by hackage or an automated build system.

foo-x is a shortcut for foo-x.0.0
foo-x.y is a shortcut for foo-x.y.0
foo-x.y.z is satisfied by any foo-i.j.k where i=x, j.k>=y.z

The 'foo' package name is just an indicator of lineage. foo-2.xxx is not the same package as foo-1.xxx, its interface is missing something that foo-1.xxx's interface provided.

Dependencies of "foo" shouldn't appear in published cabal files. There is a case for their use in development where you are specifying that you want to depend on the very latest version of foo available, perhaps from darcs. When you publish that latest version number gets burned in, eg "foo-2.1.20071016".

As for provides/expects and imported-modules instead, isn't that just an arbitrary line drawn in the granularity sand? Perhaps package versions could be expanded to include the type of every function they expose, plus more information to indicate which bugfix version of those functions is present. That's maybe the Right Way... and probably a lot of work. A more convenient place to draw the line seems to be at the package level.
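The satisfaction rule above (foo-x.y.z is satisfied by any foo-i.j.k where i=x, j.k>=y.z) is precise enough to write down directly. This is a sketch of the proposed convention, not anything Cabal implements; the function name `satisfies` is made up:

```haskell
-- "foo-x.y.z is satisfied by any foo-i.j.k where i = x, j.k >= y.z":
-- same major version (same interface lineage), and at least the
-- requested minor/bugfix level. Tuple Ord gives the j.k >= y.z part.
satisfies :: (Int, Int, Int)  -- wanted x.y.z
          -> (Int, Int, Int)  -- available i.j.k
          -> Bool
satisfies (x, y, z) (i, j, k) = i == x && (j, k) >= (y, z)

main :: IO ()
main = do
  print (satisfies (1,2,0) (1,3,0))  -- True: additions only
  print (satisfies (1,2,0) (2,0,0))  -- False: different major series
  print (satisfies (1,2,5) (1,2,4))  -- False: missing a bugfix level
```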
I think what you're asking for is more than that: you want us to provide base-1.0, base-2.0 and base-3.0 at the same time, so that old packages continue to work without needing to be updated. That is possible, but much more work for the maintainer. Ultimately when things settle down it might make sense to do this kind of thing, but right now I think an easier approach is to just fix packages when dependencies change, and to identify sets of mutually-compatible packages (we've talked about doing this on Hackage before).
yes. it's called automatic memory management!-) as long as there's a package X depending on package Y-a.b, package Y-a.b should not disappear. not having to waste time on such issues is one reason why programmers are supposed to prefer haskell over non-functional languages, right?-)
I think it's a no-brainer that old versions of packages should remain available for people to use for 'a long time'. If their dependencies are specified properly they should continue building successfully as time passes.

Isn't the main problem the use of "foo" dependencies and the resulting version guessing/ambiguity?

Presumably it's not usually a problem if indirect package dependencies require incompatible versions of a package. Is this a problem with base because it implicitly has a dependency on a particular version of the GHC internals?

Dan

Daniel McAllansmith
I think what you're asking for is more than that: you want us to provide base-1.0, base-2.0 and base-3.0 at the same time, so that old packages continue to work without needing to be updated.
Yes.
That is possible, but much more work for the maintainer.
How much more work, really? If the dependencies of your library have similar backwards compatible support, you only have to keep track of backwards-incompatible changes to the compiler, and I think those are relatively few and far apart.
Ultimately when things settle down it might make sense to do this kind of thing, but right now I think an easier approach is to just fix packages when dependencies change, and to identify sets of mutually-compatible packages (we've talked about doing this on Hackage before).
I'm surprised you think this is easier - there are an awful lot of possible version combinations, and for every library that breaks there are, at least potentially, a lot of applications that need updating. Many of those will be web orphans that some curious newbie will download and fail to get to work. (SOE, anybody? FiniteMap to Data.Map?) I think a library is more likely to be supported than an application, and likely to be supported by more, and more competent, developers.
I think it's a no-brainer that old versions of packages should remain available for people to use for 'a long time'. If their dependencies are specified properly they should continue building successfully as time passes.
Amen.
Presumably it's not usually a problem if indirect package dependencies require incompatible versions of a package.
If it is, I think this is a strong argument in favor of "package bundles" that are released and upgraded together as something resembling a standard library.

-k
-- If I haven't seen further, it is by standing in the footprints of giants

If the convention for modifying package versions of form x.y.z is:
- increment z for bugfixes/changes that don't alter the interface
- increment y for changes that consist solely of additions to the interface; parts of the interface may be marked as deprecated
- increment x for changes that include removal of deprecated parts of the interface
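Under a convention like this, a depending package can express "I can tolerate additions but not removals" as a range over the major component. A hedged sketch of what the Build-Depends lines might look like (package names and versions here are purely illustrative):

```cabal
-- hypothetical .cabal fragment; names and versions are illustrative.
-- foo >= 1.2 && < 2    accepts y (addition) and z (bugfix) bumps of foo,
--                      but not an x bump, which may remove deprecated
--                      parts of the interface.
-- bar >= 1.0 && < 1.1  accepts only z (bugfix) releases of bar-1.0.
Name:          my-package
Version:       0.1
Build-Depends: foo >= 1.2 && < 2, bar >= 1.0 && < 1.1
```

This is exactly the intent that a bare "foo" or an open-ended "foo > 1.2" dependency fails to express.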
i like this, but i doubt it will catch on (see my reply to Simon's summary).
The 'foo' package name is just an indicator of lineage. foo-2.xxx is not the same package as foo-1.xxx: its interface is missing something that foo-1.xxx's interface provided.
yes, that is the troublesome part.
Dependencies of "foo" shouldn't appear in published cabal files. There is a case for their use in development, where you are specifying that you want to depend on the very latest version of foo available, perhaps from darcs. When you publish, that latest version number gets burned in, e.g. "foo-2.1.20071016".
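To make the intended workflow concrete (a sketch; apart from the foo-2.1.20071016 version quoted above, the field values are invented):

```cabal
-- during development only, tracking the latest darcs foo (not for publication):
Build-Depends: foo

-- at publication time, the version actually tested against gets burned in:
Build-Depends: foo == 2.1.20071016
```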
agreed, because of your point above. though i think we'll need to find a similarly convenient replacement.. or we'll be changing old cabal files forever.
As for provides/expects and imported-modules instead, isn't that just an arbitrary line drawn in the granularity sand? Perhaps package versions could be expanded to include the type of every function they expose, plus more information to indicate which bugfix version of those functions is present. That's maybe the Right Way... and probably a lot of work.
as with all type systems, there is a balance between precision, decidability, and usability. just adding an imported-modules: field would do no harm (like the exposed-modules: field, it should be inferred), but it would allow cabal to make better choices.

in the context of the base split, or similar api refactorings, package names don't tell us much, and package versions at best tell us that there is a problem (and may not even tell us that). if existing packages had an additional imported-modules: field, cabal could try to suggest alternative providers - in the current case, the new base and its spin-off packages. then the user could just accept those alternatives, and be happy.

claus
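To make the suggestion concrete: the imported-modules: field below does NOT exist in cabal; it is hypothetical syntax for the proposed, inferable counterpart of exposed-modules:, shown only so the intended use is clearer:

```cabal
-- hypothetical .cabal fragment: imported-modules: is NOT a real cabal field,
-- but the proposed, inferable counterpart of exposed-modules:.
Name:             some-client
Build-Depends:    base
Imported-Modules: Data.Map, Control.Monad.State
-- given this, cabal could notice that after the base split these modules
-- now live in the containers and mtl packages, and suggest those as
-- alternative providers to the user
```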

Simon Marlow wrote:
Ultimately when things settle down it might make sense to do this kind of thing, but right now I think an easier approach is to just fix packages when dependencies change, and to identify sets of mutually-compatible packages (we've talked about doing this on Hackage before).
Cheers, Simon
When coordinating the distribution of separately maintained libraries and projects, the linux distributions do indeed "identify sets of mutually-compatible packages", quite often including small patch files to ensure compilation. On linux, then, cabal is a layer below apt and rpm repositories, and blessing sets of packages would be done at that higher level. Once cabal is being used to automatically retrieve sets of working packages, it is easiest to write cabal to assume that hackage is fixed when dependencies change. As a practical matter, it is easy to see how to identify such sets: since each set must have been installed by at least one person, that person's ghc-pkg listing is already a precise definition of the working set. All that might need to be done is to publish such a working set on hackage where cabal (or another tool) can see it.

Cheers, Chris
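As a sketch of how such a working set could be captured with the existing tooling (the listing below is illustrative, not from a real session):

```
$ ghc-pkg list
/usr/local/lib/ghc-6.8.1/package.conf:
    Cabal-1.2.2.0, base-3.0.0.0, bytestring-0.9.0.1,
    containers-0.1.0.0, ...
$ ghc-pkg list > working-set.txt
```

The saved listing could then be published on hackage as a blessed, known-to-build-together set.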
participants (8)
- Brandon S. Allbery KF8NH
- ChrisK
- Claus Reinke
- Daniel McAllansmith
- Don Stewart
- Ketil Malde
- Simon Marlow
- Stefan O'Rear