I'd like to add to the list: "change cabal-install so that most of its code is exported to Hackage as a library". In the past I've occasionally wanted to do some of the same things the cabal binary does from code, and it would be much more convenient to link the relevant code in than to exec the binary all the time.
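
For example, the only real option today is to shell out to the binary
and scrape its output. A minimal sketch of that workaround in Haskell
(the --dry-run use case is only illustrative, not a proposed library
API):

    -- Shelling out to the cabal binary, since there is no
    -- cabal-install library to call into directly.
    import System.Process (readProcess)

    -- Ask cabal what it would install for a package and return the raw
    -- output lines; parsing plain text by hand is exactly the fragile
    -- plumbing a proper library would make unnecessary.
    installPlan :: String -> IO [String]
    installPlan pkg = do
      out <- readProcess "cabal" ["install", "--dry-run", pkg] ""
      return (lines out)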


On Thu, Apr 24, 2014 at 11:53 AM, Johan Tibell <johan.tibell@gmail.com> wrote:
Hi all,

While I'm sure we still have a bugfix release or two to make on the 1.20 branch, I thought it'd be worth looking at what we want to accomplish for 1.22. Here are my thoughts on what we should focus on:

## A dependency solver that always works

As Hackage has grown, so have the demands on the dependency solver. There are three distinct problems I'm seeing now that we should tackle:

 * Treat each section (i.e. library, test suite, benchmark, and executable) in the .cabal file separately for the purpose of dependency resolution. Today all the sections' dependencies are merged and used as the constraints of the package as a whole. This is troublesome for any package that is a dependency of QuickCheck, HUnit, test-framework, or criterion, as there's a dependency cycle if you treat e.g. the containers package and its test suite as one unit (see the sketch after this list).

   The solution here is to treat each section as a mini package for the purpose of dependency resolution. This would also allow you to have e.g. several executables with conflicting dependencies.

 * Improve performance. The solver can take over 10 seconds on some packages (e.g. yesod). This will only get worse as Hackage grows and we build bigger applications on top of it, so we need to tackle it now before it becomes a real problem.

 * Fix Hackage package blacklisting. Users can blacklist packages on Hackage e.g. if they know them to be broken. However, this doesn't really work as the Hackage blacklist translates to a soft preference in the dependency solver and is thus often ignored. See https://github.com/haskell/cabal/issues/1792 for the gory details.
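
To make the cycle in the first point concrete, here is a rough sketch
of the shape of containers' .cabal file (fields and dependencies
abbreviated and illustrative):

    name: containers

    library
      build-depends: base, array, deepseq

    test-suite map-properties
      type: exitcode-stdio-1.0
      build-depends: base, containers, QuickCheck, test-framework
      -- test-framework itself depends on containers, so merging the
      -- test suite's constraints into the package's gives the cycle
      -- containers -> test-framework -> containers.

Resolving the library and the test suite separately breaks the cycle,
because the library itself never depends on test-framework.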

## Do the right thing automatically

This is a carry-over from the 1.20 goals, as we didn't make much progress here.

The focus here should be on avoiding manual steps that cabal could
do for the user.

 * Automatically install dependencies when needed. When `cabal build`
would fail due to a missing dependency, just install that dependency
instead of bugging the user to do it. This will probably have to be
limited to sandboxes, where we can't break the user's system (see the
first sketch after this list).

 * GHCi support could be improved by rebinding :reload to rerun e.g.
preprocessors automatically. This would let users develop entirely
from within ghci (i.e. a faster edit-save-type-error cycle). We have
most of what we need here (i.e. GHC macro support), but someone needs
to make the final change: generate a .ghci file and pass it to the
ghci invocation (see the second sketch after this list).
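
For the first item, this is roughly the manual dance that automatic
dependency installation would remove (commands as they exist today;
only the ordering matters for the example):

    $ cabal sandbox init
    $ cabal install --only-dependencies --enable-tests
    $ cabal build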
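
For the GHCi item, the kind of file cabal could generate and pass via
ghci's -ghci-script flag might look something like this (paths and
flags illustrative):

    -- dist/dev.ghci, a hypothetical generated GHCi script
    -- make the package's source and generated modules visible
    :set -isrc -idist/build/autogen
    -- feed the already-generated cabal macros (MIN_VERSION_*) to CPP
    :set -optP-include -optPdist/build/autogen/cabal_macros.h
    -- language extensions declared in the .cabal file
    :set -XOverloadedStrings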

## Faster builds

I think we're almost done here. There's really only one remaining thing to do:

 * Build components and different ways (e.g. profiling) in parallel.
We could build the profiling and non-profiling versions of a library
in parallel, and we could also build e.g. all the test suites in
parallel. The key challenge is to coordinate all the parallel jobs so
we don't spawn too many.
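
For context, we already get package-level parallelism from the -j
flag; the work here is to apply the same scheduling within a single
package, across its components and build ways (flags illustrative):

    $ cabal install -j4 --enable-tests --enable-library-profiling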

## Support large projects

This is also a carry-over from the 1.20 goals.

We still don't have a good story for large projects. Sandboxes are too annoying to use if there are 100 add-source dependencies (see the sketch below). We need more automation and more opinionated defaults (e.g. scan the sub-directories of the directory in which cabal was run to find source packages).
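
To illustrate the current pain (paths illustrative), setting up a
sandbox over a tree of local packages means something like:

    $ cabal sandbox init
    $ for d in ../my-project/*/; do cabal sandbox add-source "$d"; done
    $ cabal install --only-dependencies

and repeating parts of it whenever the set of local packages changes.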

What we need most of all here is a design. Perhaps we could try to get together at some Hackathon/ICFP and discuss.

## Issue tracker spring cleaning and test suite improvements

The issue tracker has gotten out of hand. It's too unwieldy to use for planning our work and for getting an overview of the most important issues. We should close bugs that haven't had updates in years with extreme prejudice; if an issue is important it will pop up again.

We're also severely lacking in the testing department. There are two problems:

 * There aren't enough tests: cabal's user-facing surface is quite large (lots of features and flags) and many of them are not tested at all, which will lead to regressions as we keep fixing bugs and adding features.

 * The tests take too long to run: we have too many end-to-end style tests (i.e. build a whole package) and not enough unit-style tests. We need to add more of the latter kind (see the sketch below for the flavor).
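
As a rough example of the kind of fast, unit-style test we could add
more of, here is one that checks a pure function from the Cabal
library directly instead of building a whole package (assertion style
and choice of function purely illustrative):

    import Control.Monad (unless)
    import Data.Version (Version (..))
    import Distribution.Version (orLaterVersion, withinRange)

    main :: IO ()
    main = do
      -- a trivial property of version ranges; runs in microseconds
      let range = orLaterVersion (Version [1, 20] [])
      unless (withinRange (Version [1, 22] []) range) $
        error "1.22 should satisfy >= 1.20"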

Cheers,
  Johan


_______________________________________________
cabal-devel mailing list
cabal-devel@haskell.org
http://www.haskell.org/mailman/listinfo/cabal-devel




--
Gregory Collins <greg@gregorycollins.net>