
I'm still thinking about how to split a big Cabal package into an easily portable core package and sub-packages for special features, while still presenting everything as a single project. People have proposed using the CPP preprocessor, but I want to keep that out of my Haskell modules.

The radical way is to split the big Cabal package into smaller ones. But then the Haddock documentation is split as well (namely the contents and index files), and the sub-packages must be downloaded and compiled separately, in the right order (I know that cabal-get will simplify that). Furthermore, the project would be shipped by Cabal in separate archives, I would probably have to duplicate the directory structure of my project (module A.B.C goes to sub-package X and module A.B.D goes to sub-package Y), and recompilation after changes in the core package becomes complicated. Worse still, I would always have to install the recompiled core before I could access it from a sub-package.

I tried to solve the problem by composing a user-dependent Cabal file from small Cabal files in the configure phase. That is, I divide the big Cabal file into small ones, one per sub-package. 'Setup.lhs configure' is then implemented so that it works out the dependencies between the sub-packages and configures them in the right order. If one configuration fails, that sub-package and all its dependents are excluded. Finally I merge the successfully configured sub-packages into one big Cabal file (a rough sketch of this ordering and exclusion step follows at the end of this mail). This method lets me still handle the project as one unit, the Haddock documentation stays merged, and no intermediate package installations are necessary on recompilation. However, this technique is not optimal, because foreign packages may depend on special features provided by sub-packages which are not installed on a particular machine.

To sum it up, what I need is:

Things that shall remain together:
 - Haddock documentation
 - files for the distributed source archive

Things that shall be split:
 - package identifiers for special features

Don't bother:
 - compiled files

Any new ideas?
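
For concreteness, here is a minimal sketch of the configure-phase idea described above. It is not my actual Setup.lhs; the SubPackage type, tryConfigure and the example package names are invented for illustration, and in a real Setup.lhs tryConfigure would of course call into Cabal instead of just printing.

module Main where

import Data.List (partition)
import qualified Data.Set as Set

data SubPackage = SubPackage
  { spName :: String     -- e.g. "core", "gui"
  , spDeps :: [String]   -- in-project dependencies only
  } deriving Show

-- Hypothetical stand-in for configuring one sub-package.
tryConfigure :: SubPackage -> IO Bool
tryConfigure sp = do
  putStrLn ("configuring " ++ spName sp)
  return True   -- pretend it always succeeds in this sketch

-- Simple topological ordering: repeatedly take the sub-packages whose
-- in-project dependencies have already been placed.
topoSort :: [SubPackage] -> [SubPackage]
topoSort = go Set.empty
  where
    go _    []  = []
    go done sps =
      case partition ready sps of
        ([],  _)     -> error "dependency cycle among sub-packages"
        (now, later) ->
          now ++ go (foldr (Set.insert . spName) done now) later
      where
        ready sp = all (`Set.member` done) (spDeps sp)

-- Configure in dependency order; a sub-package is skipped (counted as
-- failed) if one of its dependencies failed or its own configure fails.
configureAll :: [SubPackage] -> IO [SubPackage]
configureAll sps = go Set.empty (topoSort sps)
  where
    go _      []        = return []
    go failed (sp:rest)
      | any (`Set.member` failed) (spDeps sp) = skip
      | otherwise = do
          ok <- tryConfigure sp
          if ok then (sp :) <$> go failed rest else skip
      where
        skip = go (Set.insert (spName sp) failed) rest

main :: IO ()
main = do
  let subs = [ SubPackage "core" []
             , SubPackage "gui"  ["core"]
             , SubPackage "midi" ["core"]
             ]
  ok <- configureAll subs
  putStrLn ("would merge the .cabal fragments of: " ++ unwords (map spName ok))

The survivors returned by configureAll are the sub-packages whose small Cabal files I then concatenate into the one big, user-dependent Cabal file.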