
Thomas Schilling wrote:
apfelmus wrote:
In both cases, the basic idea is that the library user should *not* think about library versions; he just uses the one that is in scope on his system.
I think we mean the same thing.
Yes, albeit with the small difference that in my case, the library user never specifies version ranges; he only specifies the particular libraries he is using:

  build-depends: foo-0.42.1, bar-2.3.3

Whether other versions of foo and bar can be substituted for these will be determined later, based on information from the library author (f.i. the PVP). The simplest model would be that the library author specifies a transitive relation

  foo-y > foo-x  <=>  foo-y can be used in place of foo-x   ("subsumes")

Most often, this will just follow the PVP, but he could also specify things like foo-0.42 < foo-0.43 and the like. At build time, the system checks whether the user has libraries installed that subsume the build-dependencies.

A more sophisticated model would be to associate > with some kind of wrapper function, i.e. something that converts foo-0.42.2 into foo-0.42.1. Most of the time, this is simply the identity or a function that removes some exports (i.e. a projection), but even more sophisticated legacy support is possible.

But of course, if I want to compile something that depends on a very old package, I just download that old package and compile with it, right?
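To make the "subsumes" idea a bit more concrete, here is a small sketch; the names Version, subsumes and satisfies are invented for this mail, and the relation shown is only a rough approximation of what the PVP would give, which the library author could of course override:

  import Data.List (find)

  -- a package version, e.g. foo-0.42.1  ~  Version "foo" [0,42,1]
  data Version = Version { pkgName :: String, branch :: [Int] }
      deriving (Eq, Show)

  -- the default "subsumes" relation, roughly following the PVP:
  -- same package, same major version, and at least as new otherwise
  subsumes :: Version -> Version -> Bool
  subsumes (Version n (a:b:rest)) (Version n' (a':b':rest')) =
      n == n' && a == a' && b == b' && rest >= rest'
  subsumes _ _ = False

  -- at build time: for each build-dependency, look for an
  -- installed version that subsumes it
  satisfies :: [Version] -> Version -> Maybe Version
  satisfies installed dep = find (`subsumes` dep) installed

So with foo-0.42.1 in the build-depends and foo-0.42.3 installed, satisfies finds a match, while foo-0.43.0 is rejected unless the author explicitly extends the relation. The wrapper-function variant would merely replace the yes/no answer by an actual conversion function.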
If I write a program and test it against a specific version of a library, then my program's source code, together with the knowledge of which specific versions of libraries I used, most of the time contains *all* the information necessary to determine which other library versions it can be built with.
Yes.
From the source code we need information about what is imported; from the library author we need a *formal* changelog. This changelog describes, for each released version, which parts of the interface and semantics have changed.
I wouldn't necessarily choose the traditional changelog (i.e. diffs) as the concrete data format to specify compatibility information. I mean, that's an algorithmic choice already.
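Just to illustrate the distinction (the type and field names below are invented for this mail, not a proposal for a concrete format): instead of textual diffs, the compatibility information could be a small declarative value per release, from which a tool derives whether a given set of imported names still builds.

  -- one entry per released version: which exported names were added,
  -- which were removed, and which kept their type but changed meaning
  data Change = Change
    { version          :: [Int]
    , addedNames       :: [String]
    , removedNames     :: [String]
    , changedSemantics :: [String]
    }

  -- a client that only uses `imports` can move across these releases
  -- if none of the names it uses were removed or changed
  compatible :: [String] -> [Change] -> Bool
  compatible imports changes =
    null [ n | c <- changes
             , n <- removedNames c ++ changedSemantics c
             , n `elem` imports ]

The point is only that the information is declarative per release; how it is obtained, by hand or by a tool, is a separate question.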
With more information (obtained mostly by tools) we can automate this process, and, in fact, both approaches can co-exist.
I am a bit reluctant concerning tools, in the following sense: either the information is "new" and cannot be deduced automatically, or it can always be deduced automatically. In the latter case, the human has already specified the information (though perhaps implicitly and scattered over different source files), hence the tool should be mandatory, i.e. not a tool at all.
This is kind of the same as using a "virtual package" that is simply a re-export of other packages. This would help a lot with our current problems with the base split (which will continue, as base will be split up even further).
Yes. I wouldn't make "virtual packages" a special case, though. In my eyes, packages/modules are just functions:

  data a := b = a := b
  type Name  = String
  data Value = ..   -- represents compiled Haskell code
  type CompiledModule = Set (Name := Value)

  module foobar :: CompiledModule -> CompiledModule -> CompiledModule

and both normal and virtual packages can be represented with the same model here.

Regards,
apfelmus
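PS: a tiny, self-contained instance of that model, representing CompiledModule as a Map instead of a Set of pairs, and with made-up names like virtualBase and restrictTo: a virtual package is just a function that merges the compiled modules of the packages it re-exports, and the "projection" wrapper from my previous mail is a function of the same shape.

  import qualified Data.Map as Map

  type Name  = String
  type Value = String                 -- stand-in for compiled Haskell code
  type CompiledModule = Map.Map Name Value

  -- a "virtual" package that merely re-exports two real packages
  -- is a function combining their compiled modules
  virtualBase :: CompiledModule -> CompiledModule -> CompiledModule
  virtualBase dataList controlMonad = Map.union dataList controlMonad

  -- a projection that removes some exports is the same kind of function
  restrictTo :: [Name] -> CompiledModule -> CompiledModule
  restrictTo names = Map.filterWithKey (\n _ -> n `elem` names)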