Tool to brute-force test against hackage libraries to determine lower bounds?

I don't know about you, but I personally haven't found the time to cast back in time for each of my package's dependencies to find a true lower bound version. Do we have any tools that would do the following?

- ask Hackage for the available versions of package foo
- use cabal-dev to build your package against foo-X.Y.Z forall {X,Y,Z} (but leaving other packages unconstrained)
- report successes and failures, including the last failure before the present version (and therefore the lower bound, exclusive)

Johan, would it make any sense to extend your Jenkins setup to do this?

-Ryan
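[Editorial note: a minimal sketch of the per-dependency loop described above, assuming a placeholder dependency name "foo", that `cabal list --simple-output` lists one "name version" pair per line, and that `cabal-dev install` accepts a cabal-style `--constraint` flag; none of this is an existing tool, just an illustration of the idea.]

```haskell
-- Sketch only: walk the available versions of a single dependency "foo",
-- try to build the current package against each one with cabal-dev
-- (leaving other dependencies unconstrained), and report which versions
-- build.  The earliest success is a candidate inclusive lower bound.
import System.Process (readProcess, readProcessWithExitCode)
import System.Exit    (ExitCode (..))

-- List available versions of a package.  Assumes `cabal list
-- --simple-output` prints lines like "foo 0.1.0.0", oldest version first;
-- if that ordering does not hold, sort the versions before using them.
fetchVersions :: String -> IO [String]
fetchVersions pkg = do
  out <- readProcess "cabal" ["list", "--simple-output", pkg] ""
  return [ v | [n, v] <- map words (lines out), n == pkg ]

-- Build the package in the current directory with the dependency pinned
-- to one version.  Assumes cabal-dev passes --constraint through to cabal.
buildsWith :: String -> String -> IO Bool
buildsWith pkg ver = do
  (code, _, _) <- readProcessWithExitCode
                    "cabal-dev"
                    ["install", "--constraint=" ++ pkg ++ "==" ++ ver]
                    ""
  return (code == ExitSuccess)

main :: IO ()
main = do
  let pkg = "foo"                      -- placeholder dependency name
  vers    <- fetchVersions pkg
  results <- mapM (\v -> do ok <- buildsWith pkg v
                            return (v, ok)) vers
  mapM_ (\(v, ok) -> putStrLn (v ++ if ok then ": OK" else ": FAIL")) results
  case [ v | (v, True) <- results ] of
    []      -> putStrLn "no available version builds"
    (low:_) -> putStrLn ("candidate lower bound (inclusive): " ++ low)
```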

On Wed, Nov 9, 2011 at 3:58 PM, Ryan Newton wrote:
> I don't know about you, but I personally haven't found the time to cast back in time for each of my package's dependencies to find a true lower bound version.
> Do we have any tools that would do the following?
> - ask Hackage for the available versions of package foo
> - use cabal-dev to build your package against foo-X.Y.Z forall {X,Y,Z} (but leaving other packages unconstrained)
> - report successes and failures, including the last failure before the present version (and therefore the lower bound, exclusive)
What about dependency interactions? If you depend on both foo and bar, there might be versions of foo and bar that don't build together, which you might not discover by varying their versions independently.
> Johan, would it make any sense to extend your Jenkins setup to do this?
If someone came up with a recipe, sure. It might be a bit CPU-intensive for my little VPS though. -- Johan

> What about dependency interactions? If you depend on both foo and bar, there might be versions of foo and bar that don't build together, which you might not discover by varying their versions independently.
Indeed. But assuming for a moment that foo & bar have correctly specified their own dependency bounds, won't the constraint solver make up for some of this deficiency? I.e., you specify too low a version for foo, but the range gets further restricted by cabal's constraint solver and you end up OK?

I proposed the greedy approach just because I think, given current compile times, it wouldn't be possible to try all combinations ;-). ** Though I suppose a decent heuristic would compute the total # of combinations and -- if it is manageable -- do them all (see the sketch below). If not, either resort to greedy/independent testing or bring out the more complex strategies for sampling the version space... But enough idle speculation! I know people have studied this problem in earnest and I haven't read any of that.

-Ryan

** P.S. If one could carefully control how the compiler output is managed, I guess you could cut way down on the number of actual module compilations needed to explore a given set of combinations. (A particular module should only need to be compiled once for each unique combination of its own dependencies present in the set of combinations being examined, right?)
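[Editorial note: a back-of-the-envelope sketch of the heuristic mentioned above: count the full cross product of candidate versions and only test every combination if the count is small. The version lists and the threshold of 200 builds are made-up placeholders, not measurements.]

```haskell
-- Decide between exhaustive and greedy testing based on the size of the
-- cross product of candidate versions for each dependency.
import Data.List (foldl')

-- Total number of (dependency, version) assignments across all deps.
totalCombinations :: [(String, [String])] -> Integer
totalCombinations = foldl' (\acc (_, vs) -> acc * fromIntegral (length vs)) 1

-- Every assignment of one version to each dependency (Cartesian product).
allCombinations :: [(String, [String])] -> [[(String, String)]]
allCombinations = mapM (\(p, vs) -> [ (p, v) | v <- vs ])

main :: IO ()
main = do
  let deps = [ ("foo", ["0.1", "0.2", "1.0"])   -- placeholder version lists
             , ("bar", ["2.0", "2.1"]) ]
      n    = totalCombinations deps
  putStrLn ("total combinations: " ++ show n)
  if n <= 200                                   -- arbitrary budget of builds
    then putStrLn ("exhaustive: " ++ show (length (allCombinations deps)) ++ " builds")
    else putStrLn "too many combinations; fall back to greedy per-dependency testing"
```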
participants (2)
- Johan Tibell
- Ryan Newton