How to determine correct dependency versions for a library?

Recently I started developing a Haskell library and I have a question about package dependencies. Right now, when I need my project to depend on some other package, I only specify the package name in the cabal file and don't bother with providing the package version. This works because I am the only user of my library, but I am aware that if the library were to be released on Hackage I would have to supply version numbers in the dependencies. The question is: how do I determine proper version numbers?

I can be conservative and assume that the versions of the libraries on my system are the minimum required ones. This is of course not a good solution, because my library might work with earlier versions, but I don't know a way to check that. What is the best way to determine the minimal version of a package required by my library?

I also don't see any sane way of determining the maximum allowed versions for the dependencies, but looking at other packages I see that this is mostly ignored and package maintainers only supply lower bounds. Is this the correct approach?

Janek

What I usually do is start out with dependencies listed like:
aeson ==0.6.*
and then, as your dependencies evolve, you either bump the version number:
aeson ==0.7.*
or, if you're willing to support multiple versions, switch to a range:
aeson >=0.6 && <0.8
If someone uses a previous version of a library, and wants your library to support it too (and, preferably, it works out of the box), they'll send a pull request.

That's what works for me. Maybe you could use it as a starting point to find what works for you!
- Clark
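
For illustration, a dependency stanza following this style might look like the following in a .cabal file (a minimal sketch; the package and module names are invented):

  name:          my-library
  version:       0.1.0.0
  build-type:    Simple
  cabal-version: >=1.8

  library
    exposed-modules: Data.MyLibrary
    build-depends:
      base  ==4.*,
      aeson >=0.6 && <0.8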

Thanks Clark! Your method seems good at first, but I think I see a problem. Let's say you started with aeson 0.6. As new versions of aeson are released you introduce version ranges, but do you really have a method to determine that your package does indeed work with earlier versions? If you're upgrading aeson and don't have the older versions anymore, you can only hope that the code changes you introduce don't break compatibility with those earlier versions. Unless I am missing something?

Janek

Just like if your C application depends on either SQLite 2 or SQLite 3, you're going to need to test it with both before a release. Hoping that your library works against a previous major revision is just asking for trouble!
I usually just take the easy way out and switch to ==0.7.

> I usually just take the easy way out and switch to ==0.7.

I see. I guess I don't yet have enough experience in Haskell to anticipate how restrictive such a choice is.
Janek

It's not restrictive. Anything that I put on Hackage is open source. If someone finds that it works fine on a previous (or later) version, I accept their patch with a constraint change, and re-release immediately. I just don't like to claim that my package works with major versions of packages that I haven't tested.

Hi Clark,

> It's not restrictive.

How can you say that by adding a version restriction you don't restrict anything?

> I just don't like to claim that my package works with major versions of packages that I haven't tested.

Why does it not bother you to claim that your package can *not* be built with all those versions that you excluded, without testing whether those restrictions actually exist?

Take care, Peter

I think we just use dependencies to specify different things. This is a problem inherent in cabal.

When I (and others) specify a dependency, I'm saying "My package will work with these packages. I promise."

When you (and others) specify a dependency, you're saying "If you use a version outside of these bounds, my package will break. I promise."

They're similar, but subtly different. There are merits to both of these strategies, and it's unfortunate that this isn't specified in the PVP [1].

Janek: I've already given my method, and Peter has told you his. Pick either, or make your own! Who knows, maybe someone else (or you!) will have an even better way to deal with this. :)

- Clark
[1] http://www.haskell.org/haskellwiki/Package_versioning_policy

Hi Clark,

> I think we just use dependencies to specify different things.

If dependency version constraints are specified as a white-list -- i.e. we include only those few versions that have actually been verified and exclude everything else -- then we take the risk of excluding *too much*. There will be versions of the dependencies that would work just fine with our package, but the Cabal file prevents them from being used in the build.

The opposite approach is to specify constraints as a black-list. This means that we don't constrain our build inputs at all, unless we know for a fact that some specific versions cannot be used to build our package. In that case, we exclude exactly those versions, but nothing else. In this approach, we risk excluding *too little*. There will probably be versions of our dependencies that cannot be used to build our package, but the Cabal file doesn't exclude them from being used.

Now, the black-list approach has a significant advantage. In current versions of "cabal-install", it is possible for users to extend an incomplete black-list by adding appropriate "--constraint" flags on the command line of the build. It is impossible, however, to extend an incomplete white-list that way.

In other words: build failures can be easily avoided if some package specifies constraints that are too loose. Build failures caused by version constraints that are too strict, however, can be fixed only by editing the Cabal file.

For this reason, dependency constraints in Cabal should rather be underspecified than overspecified.

Take care, Peter
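
As a concrete illustration of extending such a black-list at build time (the package names here are invented; the --constraint flag itself is real cabal-install functionality):

  # my-library's Cabal file fails to exclude a broken version of a
  # dependency; a user can tighten the constraints on the command line:
  cabal install my-library --constraint='some-dep < 1.2'

A white-list that is too tight, by contrast, cannot be loosened this way; only editing the Cabal file helps.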

Peter Simons wrote:

> Now, the black-list approach has a significant advantage. In current versions of "cabal-install", it is possible for users to extend an incomplete black-list by adding appropriate "--constraint" flags on the command line of the build. It is impossible, however, to extend an incomplete white-list that way.
>
> In other words: build failures can be easily avoided if some package specifies constraints that are too loose. Build failures caused by version constraints that are too strict, however, can be fixed only by editing the Cabal file.

...but on the down side w.r.t. white-listing: doesn't the black-list approach allow a clean-environment 'cabal install package-with-many-transitive-deps' to suddenly break because a new major version of a dependee package was uploaded to Hackage, causing disturbances in the "equilibrium" due to API incompatibilities (even though the PVP was followed)?

On the other hand, the white-list approach ensures reproducible builds (again, assuming the PVP is followed), in the sense that if there was a valid install-plan yesterday with a given GHC version, cabal-install will be able to find one today as well with the same GHC, even though Hackage may have received new major versions.

In other words, the white-list approach strives to conserve the invariant of sound package version interdependency constraints. That invariant is usually not the weakest possible one, but it converges towards the "ideal invariant". The black-list approach doesn't provide any such invariant; it's always lagging behind, trying to catch up with the "ideal invariant" as well, but from the other direction.

So IMHO, from a formal standpoint, the white-list approach seems more "correct", as it doesn't ever lead to an incompatible set of packages being compiled/linked against each other.

cheers, hvr

Peter Simons wrote:

> For this reason, dependency constraints in Cabal should rather be underspecified than overspecified.
The blacklisting approach has one major disadvantage that no one has mentioned yet: adding more restrictive constraints does not work -- the broken package will be on Hackage forever -- while adding a new version with relaxed constraints works well.

Consider the following example:

  A 1.1.4.0  build-depends: B ==2.5.*, C ==3.7.*  (overspecified)
  B 2.5.3.0  build-depends: C ==3.*               (underspecified)
  C 3.7.1.0

Everything works nicely until C-3.8.0.0 appears with incompatible changes that break B, but not A. Now both A and B have to update their dependencies, and we have:

  A 1.1.5.0  build-depends: B ==2.5.*, C >=3.7 && <3.9
  B 2.5.4.0  build-depends: C >=3 && <3.8
  C 3.8.0.0

And now the following combination is still valid:

  A 1.1.5.0
  B 2.5.3.0  (old version)
  C 3.8.0.0

Bang!

Tobi

PS: This is my first post on this list. I'm not actively using Haskell, but have been following this list for quite a while just out of interest.

To prevent this, I think the PVP should specify that if dependencies get a major version bump, the package itself should bump its major version (preferably the B field).

Hopefully, in the future, cabal would make a distinction between packages *used* within another package (such as a hashmap exclusively used to de-duplicate elements in lists) and packages *needed for the public API* (such as Data.Vector needed for aeson). That way, internal packages can update dependencies with impunity, and we still get the major version number bump for packages needed for the public API.

- Clark

Clark Gaebel wrote:

> To prevent this, I think the PVP should specify that if dependencies get a major version bump, the package itself should bump its major version (preferably the B field).

No, it has nothing to do with major/minor version bumps. It's just that if you underspecify your dependencies, they may become invalid at some point and you cannot correct them. Overspecified dependencies will always remain correct. Your suggested solution entirely defeats the purpose of underspecified dependencies, namely that you _don't_ have to update your package every time a dependency is updated.

Also, note that in my toy scenario, the maintainer of package A now also has to check the dependencies of package B. He must recognize that B-2.5.3.0 has incorrect dependencies, and exclude this version in A's dependency on B. So just because the maintainer of B was too lazy, all projects that depend on B have to insert special cases for bad versions of B.

> Hopefully, in the future, cabal would make a distinction between packages *used* within another package (such as a hashmap exclusively used to de-duplicate elements in lists) and packages *needed for the public API* (such as Data.Vector needed for aeson).

If you re-export an API, you should probably set very tight dependency restrictions, if not just one single version. For packages that are developed in parallel this is even more natural. What is the advantage if cabal recognizes that? What could it do differently that you cannot do already by just setting the appropriate dependencies?

Tobi
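
As an aside, excluding a single known-bad version of B in A's Cabal file might look like this (a sketch; Cabal's version-range syntax does allow such disjunctions):

  build-depends:
    B >=2.5 && <2.6 && (<2.5.3.0 || >2.5.3.0)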

On Wed, Nov 14, 2012 at 1:01 PM, Tobias Müller wrote:

> No, it has nothing to do with major/minor version bumps. It's just that if you underspecify your dependencies, they may become invalid at some point and you cannot correct them.

This is required if you want to maintain the property that clients don't break. If A-1.0 depends on B-1.0.* and C depends on both A-1.0.* and B-1.0.*, bumping the dependency in A on B to B-2.0.* without bumping the major version number of A will cause C to fail to compile, as it now depends on both B-1.0.* (directly) and B-2.0.* (through A-1.0).

-- Johan
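
A minimal sketch of Johan's scenario in Cabal terms (all package names are placeholders):

  -- A.cabal (version 1.0)
  build-depends: B ==1.0.*

  -- C.cabal (version 1.0)
  build-depends: A ==1.0.*, B ==1.0.*

  -- If A-1.0 is re-released with 'build-depends: B ==2.0.*' instead,
  -- C's build plan becomes unsatisfiable: it needs B 1.0.* directly
  -- and B 2.0.* through A. Bumping A's major version lets C keep
  -- pinning the old A.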

Johan Tibell wrote:

> This is required if you want to maintain the property that clients don't break.
>
> If A-1.0 depends on B-1.0.* and C depends on both A-1.0.* and B-1.0.*, bumping the dependency in A on B to B-2.0.* without bumping the major version number of A will cause C to fail to compile.

I think I misunderstood Clark's suggestion. I thought he was advocating underspecified dependencies like A-1.*, but now that I am rereading it, it's actually the opposite. His proposal would explicitly disallow such dependencies. But it would probably be even too restrictive, since it generally disallows dependencies covering more than one major version, even if all those packages are already available and tested to be compatible.

Also, it only applies to the PVP. The distinction between blacklisting and whitelisting is more general and applies to all possible versioning schemes.

Tobi

Tobias Müller wrote:

> The blacklisting approach has one major disadvantage that no one has mentioned yet: adding more restrictive constraints does not work -- the broken package will be on Hackage forever -- while adding a new version with relaxed constraints works well.

That illustrates the real problem: it's not possible to specify correct version constraints at the time a package is uploaded. So one has to choose between the optimistic and the conservative approach, and both have disadvantages. In an ideal world, one would have the ability to adjust version bounds after the package has been uploaded.

Aleksey Khudyakov writes:

> It's not possible to specify correct version constraints at the time a package is uploaded. So one has to choose between the optimistic and the conservative approach, and both have disadvantages. In an ideal world, one would have the ability to adjust version bounds after the package has been uploaded.

+1. Metadata (i.e. the cabal file) should be editable. In addition to adjusting dependency bounds, Hackage (or third-party build servers) could add Tested-With fields, and a deprecated field could be added when problems are discovered.

-k

Ketil Malde wrote:

> +1. Metadata (i.e. the cabal file) should be editable.

The last version component (D in A.B.C.D) could be considered the metadata version. Then cabal could be changed to take into account only the highest D-values. Alternatively, if the semantics of D cannot be changed, make it A.B.C.D_M (M for the metadata version). That's the way MacPorts handles the problem.

Tobi

Hi Tobias,

> [Tobias's A/B/C example snipped; see above.] And now the following combination is still valid: A 1.1.5.0, B 2.5.3.0 (old version), C 3.8.0.0. Bang!

Thank you for contributing this insightful example. When such a situation has arisen in the past, it's my experience that the author of B typically releases an update to fix the issue with the latest version of C:

  B 2.5.4.0  build-depends: C >=3.8

So that particular conflict hardly ever occurs in practice. Note that package A would build just fine after that update of B -- if the author of A hadn't overspecified its dependencies. As it is, however, a new version of A has to be released that changes no code, but only the Cabal file.

Take care, Peter

Peter Simons wrote:

> When such a situation has arisen in the past, it's my experience that the author of B typically releases an update to fix the issue with the latest version of C:
>
>   B 2.5.4.0  build-depends: C >=3.8
>
> So that particular conflict hardly ever occurs in practice.

And what if the maintainer of B takes the chance to make some major updates and directly releases 2.6? Then all packages depending on 2.5.* will probably break.

> Note that package A would build just fine after that update of B -- if the author of A hadn't overspecified its dependencies. As it is, however, a new version of A has to be released that changes no code, but only the Cabal file.

But all this boils down to a system where only a combination of the latest versions will be stable. So why restrict dependencies at all?

Tobi

Hi Tobias,

> And what if the maintainer of B takes the chance to make some major updates and directly releases 2.6? Then all packages depending on 2.5.* will probably break.

Yes, that is true. In such a case, one would have to contact the maintainers of A, B, and C to discuss how to remedy the issue. Fortunately, pathological cases such as this one seem to happen rarely in practice.

> All this boils down to a system where only a combination of the latest versions will be stable. So why restrict dependencies at all?

Now, I think that is an exaggeration. Do you know a single example of a package on Hackage that actually suffers from the problem you're describing?

Take care, Peter

On 09/11/2012 18:35, Clark Gaebel wrote:

> I think we just use dependencies to specify different things. This is a problem inherent in cabal.
>
> When I (and others) specify a dependency, I'm saying "My package will work with these packages. I promise." When you (and others) specify a dependency, you're saying "If you use a version outside of these bounds, my package will break. I promise."
>
> They're similar, but subtly different. There are merits to both of these strategies, and it's unfortunate that this isn't specified in the PVP [1].

I always understood the policy to be the former, i.e. allowing a version means you positively expect it to work. Otherwise, why does the PVP insist on upper bounds? You can't in general know that the package will break with a version that doesn't exist yet.

As this thread and others show, there is of course a substantial set of people who would prefer the policy to be the latter.

Cheers, Ganesh

Replying somewhere random in the thread: Linux distributions have to solve this same problem.

We first need to decide what Hackage's function is supposed to be: (1) a dumb repository to host Haskell code, or (2) a collection of Haskell packages that work together. In reality it's (1), but the existence of cabal-install supposes (2).

The way that distributions handle this is to assign maintainers to each and every package in the distro, and require them all to be actively maintained. Packages that don't build or have unresponsive upstreams are removed. Maintainers who don't do their jobs are removed after a while, too. The end result is that there are somewhat fewer packages /visible/ to the user, but a comparable amount /available/, since what's in the distro is what actually would have worked if the user had tried to build it himself.

I think there's value to having (1), but that we shouldn't expect (2) at the same time. Tons of work goes into QA'ing packages to work together. Someone needs to be responsible for making sure that things work; right now no one is, so they don't.

Running a "Haskell distro" on a parallel Hackage would be a lot of work, but Arch, Debian, Gentoo, etc. already have to do essentially that. It may make sense to either consolidate the effort, or reuse the work that is already being done. The gentoo-haskell [1] project already keeps a list of packages that are known to build from source together. Something like prefix [2], for example, could be used to install packages that have been vetted, rather than just pulling a tarball with the right name directly from Hackage. Or, if that's too much trouble, we could write a reverse hackport [3] that creates a second Hackage full of stuff known to work in the various distributions.

No solution will be great at first, but we have the "benefit" that it doesn't work right now either, so maybe nobody will notice. Without everyone duplicating the QA effort, I think things would shape up quickly.

[1] https://github.com/gentoo-haskell/gentoo-haskell
[2] http://www.gentoo.org/proj/en/gentoo-alt/prefix/
[3] http://hackage.haskell.org/package/hackport

Janek S. wrote:

> ...but I am aware that if the library were to be released on Hackage I would have to supply version numbers in the dependencies. The question is how to determine proper version numbers?

With the current state of affairs, your best bet is not to specify any version bounds, or to specify only lower ones. Upper version bounds much more often break things than fix things. In the future, when we develop better tools, this will hopefully change.

When uploading to Hackage, you'll only be required to give bounds for base, but they may also be very lax (like base ==4.*).

Roman
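
In a .cabal file, that advice might look like the following (a sketch; the non-base dependencies are invented examples, carrying lower bounds only):

  library
    exposed-modules: Data.MyLibrary
    build-depends:
      base       ==4.*,
      containers >=0.4,
      aeson      >=0.6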

On Fri, Nov 9, 2012 at 5:52 PM, Roman Cheplyaka wrote:

> With the current state of affairs, your best bet is not to specify any version bounds, or to specify only lower ones. Upper version bounds much more often break things than fix things.

I'd like to ask people not to do this. What you're doing then is moving the burden from the maintainer (who has to test with new versions and relax dependencies) to the users of the library (who will be faced with breakage when new, incompatible dependencies are released).

Erik

Erik Hesselink wrote:

> I'd like to ask people not to do this. What you're doing then is moving the burden from the maintainer (who has to test with new versions and relax dependencies) to the users of the library (who will be faced with breakage when new, incompatible dependencies are released).

The trouble is, when things break, they break either way -- so I simply propose to reduce the probability of things breaking.

I know, I know, the theory is that Cabal magically installs the right versions for you. However, in practice it often can't do that, for various reasons (base libraries, package reinstalls etc.).

I'm not trying to shift the burden from the maintainer -- I simply know from experience that maintainers are not nearly as quick and responsible as would be required for such a scheme to work, so as a *user* I'd prefer that they omit upper bounds -- at least until the --ignore-constraints option is implemented (https://github.com/haskell/cabal/issues/949).

Roman

tl;dr: Breakages without upper bounds are annoying and hard to solve for package consumers. With upper bounds, and especially with sandboxes, breakage is almost non-existent.

I don't see how things break with upper bounds, at least in the presence of sandboxes. If all packages involved follow the PVP, a build that worked once will always work. Cabal 0.10 and older had problems here, but 0.14 and later will always find a solution to the dependencies if there is one (if you set max-backjumps high enough).

Without sandboxes, cabal might have to reinstall a package with different dependencies, breaking other packages. It will currently warn against this. Future versions will hopefully tell you about the sandboxing features that can also be used to avoid this.

In contrast, without upper bounds a package is almost sure to fail to compile at some point. A user will then get compile errors outside his own code, somewhere in the middle of his dependency chain. Depending on his expertise, this will be either hard or impossible to fix. In particular, it is not clear that too-lenient dependencies are the problem, and if it is clear, you do not know which ones.

So I still see this as a tradeoff between the interests of package developers/maintainers (upper bounds give more testing/release work, and also, they want to use the latest versions) and package users/companies (who want a build to work, but often don't care about the bleeding edge). Personally, I fall into both groups, and have experienced both problems. At Silk, we have (internal) packages with huge dependency chains (we depend on both happstack and snap). With cabal 0.10, this was a nightmare. Since 0.14, we've had no breakage, except from packages that do not specify upper bounds! We're fairly up to date with GHC versions: we're on 7.4 now, but with no immediate plans to switch to 7.6. Switching to a new GHC version is a bit of work, but we can decide when to do that work. Without upper bounds, our builds can break at any moment, and we have to fix them then and there to be able to continue working.

If you do have to use the bleeding edge (or a package uses really outdated dependencies), you can also use sandboxes to your advantage: just 'cabal unpack' the problematic package, change the dependencies, and add the resulting source package to your sandbox. This is what we do when we test out a new GHC version. We also try to contribute fixes back upstream.
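
A sketch of that workflow (the package name is invented; the add-source step is 'cabal-dev add-source' with the cabal-dev tool, or 'cabal sandbox add-source' in cabal-install versions that ship sandbox support):

  # Fetch the source of the package whose bounds are too tight.
  cabal unpack some-package-1.0
  # Relax the offending upper bound in its .cabal file.
  $EDITOR some-package-1.0/some-package.cabal
  # Make the patched source take precedence in the sandbox.
  cabal sandbox add-source some-package-1.0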
This is why I ask people to specify upper bounds: they mean that packages keep working, and they prevent users from getting incomprehensible, badly timed build failures.

Erik

On Mon, Nov 12, 2012 at 1:06 AM, Erik Hesselink wrote:

> I don't see how things break with upper bounds, at least in the presence of sandboxes. If all packages involved follow the PVP, a build that worked once will always work.

The "breakage" people are talking about with regard to upper bounds is that every time a new version of a dependency comes out, packages with upper bounds can't compile with it, even if they would without the upper bound. For example, the version number of base is bumped with almost every GHC release, yet almost no packages would actually break due to the changes that caused that major version number to go up.

On Mon, Nov 12, 2012 at 5:13 PM, Johan Tibell wrote:

> The "breakage" people are talking about with regard to upper bounds is that every time a new version of a dependency comes out, packages with upper bounds can't compile with it, even if they would without the upper bound.

Yes, this is why I talk about living on the bleeding edge, and about shifting the burden from package maintainers to package users. And I believe the last base changes included a change to 'catch' which would have broken a lot of packages. The Num changes also caused a lot of code changes, and there were probably more that I don't remember.

Erik

Especially in the case of base, I'm not sure how upper bounds help at all. If the new base is incompatible, you break with or without upper bounds -- and actually getting errors related to Num instances is more informative, IMO. If it is compatible, you just get false negatives and spurious errors. In either case cabal can't install an older base to make the build work, so what do you gain?

Erik Hesselink wrote:

> And I believe the last base changes included a change to 'catch' which would have broken a lot of packages. The Num changes also caused a lot of code changes, and there were probably more that I don't remember.

Even if so, upper bounds don't prevent these errors. Cabal can't install an older base for you. (I'm aware that GHC once shipped two versions of base, and dependency bounds were actually useful then. But that's not the case nowadays, as we see.)

For example, here's what I get when I try to install virthualenv with GHC 7.6.1:

  % cabal install virthualenv
  Resolving dependencies...
  cabal: Could not resolve dependencies:
  trying: virthualenv-0.2.1
  rejecting: base-3.0.3.2, 3.0.3.1 (global constraint requires installed instance)
  rejecting: base-4.6.0.0/installed-eac... (conflict: virthualenv => base>=4.2.0.0 && <4.6)
  rejecting: base-4.6.0.0, 4.5.1.0, 4.5.0.0, 4.4.1.0, 4.4.0.0, 4.3.1.0, 4.3.0.0, 4.2.0.2, 4.2.0.1, 4.2.0.0, 4.1.0.0, 4.0.0.0 (global constraint requires installed instance)

Roman

Roman Cheplyaka wrote:

> Even if so, upper bounds don't prevent these errors. Cabal can't install an older base for you. [resolver output snipped; see above]
Doesn't this prevent the error of "this package won't build" (even if the error message doesn't precisely say that)?
-- Ivan Lazar Miljenovic

Ivan Lazar Miljenovic wrote:

> Doesn't this prevent the error of "this package won't build" (even if the error message doesn't precisely say that)?

Yeah, it replaces one error with another. How is it supposed to help me if I really want to build this package? Instead of fixing just the code, I now have to fix the cabal file as well!

Roman

On Wed, Nov 14, 2012 at 10:57 AM, Roman Cheplyaka wrote:

> Yeah, it replaces one error with another. How is it supposed to help me if I really want to build this package? Instead of fixing just the code, I now have to fix the cabal file as well!

The error might be clearer, since it comes up right away and points you to the right package, together with the reason (it doesn't support the right base version). If it started to build instead, it might fail in the middle, with some error that you might not know is caused by changes in base.

But the question comes down to numbers: how often do packages break with new base versions, how soon do people need to be able to use a new GHC without changing other packages, etc. Some might argue that packages 'usually' work, so we should leave out upper bounds, even if that gives worse errors. Others say the errors are so bad, or so badly timed, that we should have upper bounds, and that the work for maintainers, while greater, is not too large. I know what I think, but nobody has concrete numbers about breakage with new base versions, the amount of time spent updating packages, unupdated packages, etc. Some can be found with a grep over the Hackage tarball, but most can't.

Erik

On 14 November 2012 07:51, Erik Hesselink wrote:

> But the question comes down to numbers: how often do packages break with new base versions, how soon do people need to be able to use a new GHC without changing other packages, etc.

This particular problem has a better solution: try building anyway, and if it fails, print the original cabal error after the compiler error.

Alexander

On 11/14/2012 09:53 AM, Ivan Lazar Miljenovic wrote:

> Doesn't this prevent the error of "this package won't build" (even if the error message doesn't precisely say that)?

In most cases, it replaces the uncertainty of a build error with an unconditional pre-build error. Instead of having either a built package or a meaningful build error, I now have no package, with only the information that continuing might or might not have caused an error.

-- Vincent

Hi Janek,

> How to determine proper version numbers?

If you know for a fact that your package works only with specific versions of its dependencies, then constrain the build to exactly those versions that you know to work. If you *don't* know of any such limitations, then *don't* specify any constraints.

Take care, Peter
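
As a sketch of that advice in a build-depends stanza (package names invented), a known limitation gets a tight range and everything else is left open:

  library
    build-depends:
      -- known to work only against the text-0.11 API:
      text ==0.11.*,
      -- no known limitations, so left unconstrained:
      base,
      containers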
participants (16): Aleksey Khudyakov, Alexander Kjeldaas, Clark Gaebel, Erik Hesselink, eyal.lotem@gmail.com, Ganesh Sittampalam, Herbert Valerio Riedel, Ivan Lazar Miljenovic, Janek S., Johan Tibell, Ketil Malde, Michael Orlitzky, Peter Simons, Roman Cheplyaka, Tobias Müller, Vincent Hanquez