finding the dependencies of cabal packages

Hi all,

Over in Gentoo packaging land we're trying to build tools to automate the process of packaging cabal packages. ("Package" is far too much of an overloaded word!) So we have a tool "cabal2ebuild" which will produce an ebuild from a cabal file. An ebuild is a text file that contains all the information the Gentoo packaging system needs to be able to install the program/library in question.

In particular the ebuild must list all the dependencies of the package. Required Haskell libraries are listed in the "build-depends:" field of the cabal file, so that one is easy. However it is not clear how to find out which build tools the cabal package requires. For example, packages may require alex or happy or c2hs or other build-time tools. Is there any better way to find out whether these tools are needed than to look through the source tree for files with the appropriate extensions (".y", ".x", ".chs", etc.)?

Duncan
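The extension-scanning fallback Duncan describes could be sketched roughly like this; the `toolFor` table and both function names are invented for illustration and are not part of cabal2ebuild:

```haskell
-- Hypothetical sketch: infer build-tool dependencies by scanning the
-- source tree for preprocessor file extensions.
import System.Directory (doesDirectoryExist, getDirectoryContents)
import System.FilePath ((</>), takeExtension)

-- Which tool, if any, does a file with this name imply?
toolFor :: FilePath -> Maybe String
toolFor path = lookup (takeExtension path) table
  where
    table = [ (".y",   "happy")
            , (".ly",  "happy")
            , (".x",   "alex")
            , (".chs", "c2hs")
            ]

-- Walk a directory tree, collecting the tools implied by the files found.
requiredTools :: FilePath -> IO [String]
requiredTools root = do
  names <- getDirectoryContents root
  let entries = [ root </> n | n <- names, n /= ".", n /= ".." ]
  toolLists <- mapM visit entries
  return (dedup (concat toolLists))
  where
    visit path = do
      isDir <- doesDirectoryExist path
      if isDir
        then requiredTools path
        else return (maybe [] (: []) (toolFor path))
    dedup = foldr (\x xs -> if x `elem` xs then xs else x : xs) []
```

This only detects tools implied by file extensions, so (as the thread notes) it says nothing about required tool versions.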

On Wed, 2005-07-27 at 21:35 +0100, Duncan Coutts wrote:
In particular the ebuild must list all the dependencies of the package. Required Haskell libraries are listed in the "build-depends:" field in the cabal file, so that one is easy. However it is not clear how to find out the build tools that the cabal package requires.
For example packages may require alex or happy or c2hs or other build time tools. Is there any better way to find if these tools are needed than to look through the source tree for any files with the appropriate extensions ".y", ".x", ".chs" etc?
A similar follow-up question: how do cabal packages specify dependencies on particular versions of required tools? For example, they might depend on some feature in the latest version of a tool. Duncan

On 7/27/05, Duncan Coutts wrote:
In particular the ebuild must list all the dependencies of the package. Required Haskell libraries are listed in the "build-depends:" field in the cabal file, so that one is easy. However it is not clear how to find out the build tools that the cabal package requires.
First of all, you would have to parse the Setup.lhs/Setup.hs file to determine which build infrastructure is being used, whether any hooks are being used, etc. Assuming that the build infrastructure is "Simple" and there are no hooks being used, then you have two choices: (1) you could build a compiled version using GHC and the tools that come with it, or (2) you could build an interpreted version using Hugs and the tools that come with it. However, I think there are going to be a lot of cases where the application can only be built with GHC. You would have to parse each file for {-# OPTIONS #-} and {-# OPTIONS_GHC #-} pragmas. In short, the cabal package description is not the only information you need to fully describe how to successfully build an application. - Brian

On Wed, 2005-07-27 at 16:10 -0500, Brian Smith wrote:
On 7/27/05, Duncan Coutts wrote:
In particular the ebuild must list all the dependencies of the package. Required Haskell libraries are listed in the "build-depends:" field in the cabal file, so that one is easy. However it is not clear how to find out the build tools that the cabal package requires.
First of all, you would have to parse the Setup.lhs/Setup.hs file to determine which build infrastructure is being used, whether any hooks are being used, etc. Assuming that the build infrastructure is "Simple" and there are no hooks being used, then you have choices:
(1) You could build a compiled version using GHC and the tools that come with it, (2) You can build an interpreted version using Hugs and the tools that come with it.
However, I think there are going to be a lot of cases where the application can only be built with GHC. You would have to parse each file for {-# OPTIONS #-} and {-# OPTIONS_GHC #-}
In short, the cabal package description is not the only information you need to fully describe how to successfully build an application.
If we hope to have cabal packages properly supported by packaging systems (which I believe we do) then this sort of information is essential. And I think it is important that it be discoverable automatically. If most packages in the hackage collection require manual fiddling and fixing, then it will not be practical to mirror the hackage collection in normal distro packaging systems.

If cabal packages can be adapted to 'native' packages easily (be that Windows MSI installers, .deb, .rpm, etc.) then it will allow Haskell libraries/programs to be distributed much more easily and to a wider audience. I think this aspect of Cabal has not received enough attention yet. Perhaps the people who package Haskell programs/libs for the major systems (Debian, Gentoo, Fedora, MacOS X, FreeBSD, Windows) should get together and think about our requirements.

Duncan

On Wed, 2005-07-27 at 22:32 +0100, Duncan Coutts wrote:
If cabal packages can be adapted to 'native' packages easily (be that windows MSI installers, .deb .rpm etc) then it will allow Haskell libraries/programs to be distributed much more easily and to a wider audience.
I agree.
I think this aspect of Cabal has not got enough attention yet. Perhaps the people who package haskell programs/libs for the major systems (Debian, Gentoo, Fedora, MacOS X, FreeBSD, Windows) should get together and think about our requirements.
I've been using autoconf and automake for buddha, which though ugly at times, provides a nice path to making packages for various unixy systems (generally I think because this is the standard GNU way of doing things). However, I haven't been able to migrate this over to cabal. One thing that is not clear in my mind is where cabal ends and autotools begin. There seems to be some overlap. Personally, I would love to throw away all the autotools stuff, but I'm not sure if I can easily replicate everything in cabal alone. Is it a goal of cabal to be able to avoid autotools? Bernie.

On Thu, Jul 28, 2005 at 11:53:23AM +1000, Bernard Pope wrote:
I've been using autoconf and automake for buddha, which though ugly at times, provides a nice path to making packages for various unixy systems (generally I think because this is the standard GNU way of doing things).
However, I haven't been able to migrate this over to cabal. One thing that is not clear in my mind is where cabal ends and autotools begin. There seems to be some overlap. Personally, I would love to throw away all the autotools stuff, but I'm not sure if I can easily replicate everything in cabal alone.
Is it a goal of cabal to be able to avoid autotools?
Avoiding Makefiles is certainly a goal. If you're interfacing to C, autoconf is a convenient bundle of fiddly special cases and knowledge about lots of systems. It seems pointless to replicate that. But if it's just Haskell, finding all the Haskell tools and using those to build executables and libraries, Cabal should do that by itself. There's currently no way to specify constraints on versions of the tools, though.
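Ross's last point -- that there's currently no way to constrain tool versions -- could eventually look something like the following sketch. No such Cabal field exists today; the `Version` representation and all names here are hypothetical:

```haskell
-- Hypothetical sketch of checking lower bounds on required build tools.
import Data.List (intercalate)

-- Versions as lists of components; Haskell's list comparison is
-- lexicographic, so [1,14] >= [1,13] and [2,0] >= [1,13] hold.
type Version = [Int]

parseVersion :: String -> Version
parseVersion s = case break (== '.') s of
  (chunk, [])       -> [read chunk]
  (chunk, _ : rest) -> read chunk : parseVersion rest

showVersion :: Version -> String
showVersion = intercalate "." . map show

-- One complaint for each required tool that is missing or too old.
checkTools :: [(String, Version)]  -- required lower bounds
           -> [(String, Version)]  -- installed tool versions
           -> [String]
checkTools required installed =
  [ tool ++ " >= " ++ showVersion v ++ " is required"
  | (tool, v) <- required
  , maybe True (< v) (lookup tool installed)
  ]
```

A packaging tool could feed such complaints back to the user before attempting a build, rather than failing mid-way through.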

On 7/28/05, ross@soi.city.ac.uk wrote:
On Thu, Jul 28, 2005 at 11:53:23AM +1000, Bernard Pope wrote:
I've been using autoconf and automake for buddha, which though ugly at times, provides a nice path to making packages for various unixy systems (generally I think because this is the standard GNU way of doing things).
However, I haven't been able to migrate this over to cabal. One thing that is not clear in my mind is where cabal ends and autotools begin. There seems to be some overlap. Personally, I would love to throw away all the autotools stuff, but I'm not sure if I can easily replicate everything in cabal alone.
Is it a goal of cabal to be able to avoid autotools?
It seems to be. The Visual Haskell Studio.NET project uses _just_ the Cabal package description to build projects. So, if your Cabal project requires autotools then it won't work in that environment. In general, it seems like at least the GHC and Hugs teams want to support Windows users that don't have MinGW/MSYS or Cygwin (which means that they don't have autotools).
But if it's just Haskell, finding all the Haskell tools and using those to build executables and libraries, Cabal should do that by itself. There's currently no way to specify constraints on versions of the tools, though.
I agree that not all of the information belongs in the package description file. For example, if I only have one module that requires undecidable instances, I don't want to have "Extensions: UndecidableInstances" in my project file, because then everything gets built with undecidable instances turned on. Instead, I would put {-# LANGUAGE UndecidableInstances #-} in that one source file (assuming that it is fixed to work with GHC).

Potentially, we could build a tool that can generate a list of dependencies that a project has, like this:

    Required-Extensions: OverlappingInstances, MultiParamTypeClasses, CPP
    Required-Tools: Happy, Alex, HC, GCC

But this wouldn't tell you which versions of the tools you need (or, in the case of HC, which compiler/interpreter and which version). But if we had a mapping like this:

    Hugs-1.2.3: OverlappingInstances, MultiParamTypeClasses, ....
    GHC-6.2.2: CPP, OverlappingInstances, MultiParamTypeClasses, ....
    GHC-6.4.1: GADT, CPP, OverlappingInstances, MultiParamTypeClasses, ....

then a new automatic tool could say:

    You cannot build this package because it requires language
    extension(s) that are unsupported by your installed Haskell
    implementation(s):
      Hugs-1.2.3 does not support CPP, GADT
      GHC-6.2.2 does not support GADT

and

    You cannot build this package because it requires the following tool(s):
      Happy: (needed for src/Some/Module/Grammar.ly, ...)
      Alex: (needed for src/Some/Module/Lexer.x, ...)
      HSC2C: (needed for ...)
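The unsupported-extension check described here could be sketched as a pure function over such a mapping; the `supported` table, its version numbers, and its extension lists are invented for illustration:

```haskell
-- Hypothetical sketch of checking required extensions against a
-- per-implementation support table.
type Impl = String
type Extension = String

supported :: [(Impl, [Extension])]
supported =
  [ ("Hugs-1.2.3", ["OverlappingInstances", "MultiParamTypeClasses"])
  , ("GHC-6.2.2",  ["CPP", "OverlappingInstances", "MultiParamTypeClasses"])
  , ("GHC-6.4.1",  ["GADT", "CPP", "OverlappingInstances", "MultiParamTypeClasses"])
  ]

-- Extensions required by the package but unsupported by the given
-- implementation. An unknown implementation supports nothing.
missing :: Impl -> [Extension] -> [Extension]
missing impl required =
  case lookup impl supported of
    Nothing   -> required
    Just exts -> filter (`notElem` exts) required

-- Implementations able to build a package needing the given extensions.
canBuildWith :: [Extension] -> [Impl]
canBuildWith required = [ i | (i, _) <- supported, null (missing i required) ]
```

`missing` gives exactly the per-implementation complaints in the error message above; `canBuildWith` gives the short list of implementations the tool could fall back to.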
For determining which version of a tool is required, a package author could also annotate their package description like this:

    Supported-Configuration: GHC >= 6.4, Happy >= 1.13, Alex >= 2.0
    Supported-Configuration: Hugs >= 1.2.3, Happy >= 1.14, Alex >= 2.0.1
    Tested-Configuration: GHC-6.4.1, Happy-1.13, Alex-2.0, base-1.0, HUnit-1.1, OpenGL-2.0
    Tested-Configuration: GHC-6.4, Happy-1.14, Alex-2.0.1, base-1.0, HUnit-1.1, OpenGL-2.0
    Tested-Configuration: Hugs-1.2.3, Happy-1.14, Alex-2.0.1, base-1.0, HUnit-1.1, OpenGL-2.0

where "Supported-Configuration" means "it is supposed to work with JUST these tools, plus the packages in Build-Depends" and "Tested-Configuration" means "I assert that it does in fact work with exactly these versions of these tools and packages."

The new, nonexistent tool mentioned above could help automate the generation of Supported-Configuration and Tested-Configuration entries. It would also be a valuable tool for processing bug reports. Would this be enough information for Gentoo/Debian/RPM?

- Brian

On Thu, Jul 28, 2005 at 02:07:53PM -0500, Brian Smith wrote:
On Thu, Jul 28, 2005 at 11:53:23AM +1000, Bernard Pope wrote:
Is it a goal of cabal to be able to avoid autotools?
It seems to be. The Visual Haskell Studio.NET project uses _just_ the Cabal package description to build projects. So, if your Cabal project requires autotools then it won't work in that environment. In general, it seems like at least the GHC and Hugs teams want to support Windows users that don't have MinGW/MSYS or Cygwin (which means that they don't have autotools).
Some packages interfacing to C need to discover information that varies between systems: which functions and header files are available, which header files and libraries are needed to get certain C functions, various quirks of compilers, etc. That information needs to go into the .buildinfo file, and possibly header files used by C and Haskell code. On Unix systems, autoconf does all that fairly efficiently. It's true that such packages can't be used under Windows, at least not without tweaking the .cabal or .buildinfo files by hand. That's unfortunate -- we need something new to make such packages work under Windows. But at least the answers to all those questions are (mostly) invariant across Windows systems. So perhaps we could combine autoconf for Unix with fixed files for Windows (and any other non-Unix systems that come along).

On 7/27/05, Duncan Coutts wrote:
On Wed, 2005-07-27 at 16:10 -0500, Brian Smith wrote:
In short, the cabal package description is not the only information you need to fully describe how to successfully build an application.
If we hope to have cabal packages properly supported by packaging systems (which I believe we do) then this sort of information is essential. And I think that it is important that it is discoverable automatically. If most packages in the hackage collection require manual fiddling and fixing then it will not be practical to mirror the hackage collection in normal distro packaging systems.
I think it is essential to be able to tell which build infrastructure is being used, and which hooks need to be executed, without looking at Setup.lhs. In fact, I think that the relationship between the Setup.hs and Package.cabal file is backwards. Imagine that there was no "Setup.hs," and that every Haskell implementation came with an executable that could process Cabal files. Now, let's say a Cabal package description had these entries:

    Build-Infrastructure: Simple
    Hooks: None

This would mean that the Simple build infrastructure is being used and that there are no hooks; it is equivalent to Distribution.Simple.defaultMain. Note that the default value of Hooks could be "None" and the default Build-Infrastructure could be "Simple." If a package description had these entries:

    Build-Infrastructure: Simple
    Hooks: Setup (preBuild, postBuild, postClean)

it has the three hooks mentioned, which are exported from the package's Setup module. Each hook is just a function with the same type it has in Distribution.Simple.UserHooks. Each implementation's setup tool would determine how to call these hooks. Now, if a package description had this entry:

    Build-Infrastructure: Make

then tools that don't have Make (e.g. VHS.NET) could gracefully say "Make is not a supported build infrastructure" or "Cannot build this package because make is not installed."

Also, imagine being able to do this for a Cabal package that has "Build-Depends: HUnit, HaXml":

    $ ghci --cabal My.Module
    Loading package base-1.0 ... linking ... done.
    Loading package haskell98-1.0 ... linking ... done.
    Loading package HaXml-1.13 ... linking ... done.
    Loading package HUnit-1.1 ... linking ... done.
    My.Module>

instead of:

    $ ghci -package HUnit -package HaXml

- Brian

Brian Smith writes:
I think it is essential to be able to tell which build infrastructure is being used, and which hooks need to be executed, without looking at Setup.lhs. In fact, I think that the relationship between the Setup.hs and Package.cabal file is backwards. Imagine that there was no "Setup.hs," and that every Haskell implementation came with an executable that could process Cabal files.
Then you lose the ability for the packager to do the fiddly little things that packagers need to do for their package. Cabal is designed the way it is so that you don't have to invent an entire new language to support such things.
Now, let's say a Cabal package descriptions had these entries:
    Build-Infrastructure: Simple
    Hooks: None
So you are limiting what the packager can do to pre-determined build infrastructures. Cabal's interface is generic; anything could be in that Setup file, including stuff that no one but the packager uses. This is necessary because one size does not fit all. Cabal provides an interface, and an implementation of that interface. If the interface is lacking in some way, we can fix that.
If a package description had these entries:
    Build-Infrastructure: Simple
    Hooks: Setup (preBuild, postBuild, postClean)
it has the three hooks mentioned, which are exported from the package's Setup module. Each hook is just a function with the same type it has in Distribution.Simple.UserHooks. Each implementation's setup tool would determine how to call these hooks.
But how do you call the user's hooks? And what if they want to do something that can't be done with hooks?
Now, if a package description had this entry:
Build-Infrastructure: Make
Then, tools that don't have Make (e.g. VHS.NET) could gracefully say "Make is not a supported build infrastructure" or "Cannot build this package because make is not installed."
Also, imagine being able to do this for a Cabal package that has: Build-Depends: HUnit, HaXml
It seems to me that adding the single field that Duncan needs is a better solution than completely changing the way cabal works, and limiting the users in the process. peace, isaac

On 7/28/05, Isaac Jones wrote:
Brian Smith writes:
(snip)
I think it is essential to be able to tell which build infrastructure is being used, and which hooks need to be executed, without looking at Setup.lhs. In fact, I think that the relationship between the Setup.hs and Package.cabal file is backwards. Imagine that there was no "Setup.hs," and that every Haskell implementation came with an executable that could process Cabal files.
Then you lose the ability for the packager to do the fiddly little things that packagers need to do for their package. Cabal is designed
Now, let's say a Cabal package descriptions had these entries:
    Build-Infrastructure: Simple
    Hooks: None
So you are limiting what the packager can do to pre-determined build infrastructures. Cabal's interface is generic; anything could be in that Setup file, including stuff that no one but the packager uses.
"Packager" is the maintainer of a package for a given platform, separate from the package author, right? Are you saying that the packager will replace any Setup.hs/Setup.lhs provided by the package author with his own version that does whatever "fiddly little things" are necessary to integrate the Cabal package with the system's packaging system? Is this what package maintainers for Gentoo/Debian/etc. are actually doing? It seems like they are saying they don't want to do that, and that they would rather generate their packages directly from the .cabal file. I didn't see in the documentation that we (users of Cabal) are supposed to expect this.
If a package description had these entries:
    Build-Infrastructure: Simple
    Hooks: Setup (preBuild, postBuild, postClean)
it has the three hooks mentioned, which are exported from the package's Setup module. Each hook is just a function with the same type it has in Distribution.Simple.UserHooks. Each implementation's setup tool would determine how to call these hooks.
But how do you call the user's hooks? And what if they want to do something that can't be done with hooks?
Firstly, if "they" want to do something that can't be done by hooks, then it is likely that whatever they are doing isn't going to work in other Cabal-enabled tools like VHS.NET and other such tools (one of which I am working on). It would be nice if these tools could say "Warning: this might not build correctly because it is not using the simple build infrastructure" and/or "Warning: all build hooks will be ignored." Currently, there is no easy way for these tools to do this.

Secondly, it would be a simple matter to have the tool generate its own equivalent of the current "Setup.lhs" that looked something like:

    module Main (main) where
    import qualified Setup (preBuild, postBuild, postClean)
    import Distribution.Simple (defaultMainWithHooks, emptyUserHooks, UserHooks(..))
    main = defaultMainWithHooks $ emptyUserHooks
             { preBuild  = Setup.preBuild
             , postBuild = Setup.postBuild
             , postClean = Setup.postClean
             }

I guess the important point is that there should be a way, from looking at the Cabal file, to determine whether a package can be built using the equivalent of "defaultMain." For example, an IDE might use the GHC API to implement an "incremental build" feature, which would rebuild projects upon detecting changes to the source files (like Eclipse does). Furthermore, it might want to provide context-sensitive features like autocomplete that Cabal doesn't provide, and that requires knowledge of all the dependencies in the source code. Also, don't you think that GHCi should be able to read Cabal files, so that you don't have to say "ghci -cpp -fglasgow-exts -package X -package Y -package Z MyModule" when all those options are already in the Cabal file?
Now, if a package description had this entry:
Build-Infrastructure: Make
Then, tools that don't have Make (e.g. VHS.NET) could gracefully say "Make is not a supported build infrastructure" or "Cannot build this package because make is not installed."
Also, imagine being able to do this for a Cabal package that has: Build-Depends: HUnit, HaXml
It seems to me that adding the single field that Duncan needs is a better solution than completely changing the way cabal works, and limiting the users in the process.
I don't see how my suggestion is limiting users, or even completely changing the way Cabal works. I will say that the current system seems inconvenient. For example, I just made some changes to the Win32 library on my local machine. I wanted to build and install the new version of Win32 using Cabal. The .cabal file is there but there is no "Setup.hs." So, I either have to add a Setup.hs myself, or reuse an existing one that is somewhere else. (The Hugs build process uses the single "fptools/libraries/Cabal/examples/hapax.hs" to build all its Cabal packages, because none of them provide their own "Setup.hs".) On one hand, it doesn't make sense to have dozens of identical "Setup.hs" files throughout fptools. On the other hand, every Cabal package is expected to have its own Setup.hs, AFAICT. - Brian

Hi Brian. Sorry if I was a little curt last time...
Brian Smith writes:
So you are limiting what the packager can do to pre-determined build infrastructures. Cabal's interface is generic; anything could be in that Setup file, including stuff that no one but the packager uses.
"Packager" is the maintainer of a package for a given platform, seperate from the package author, right?
I mean the author or whoever has created the cabal package, not the OS package. Sorry, I know it gets confusing (believe me ;) (snip)
If a package description had these entries:
    Build-Infrastructure: Simple
    Hooks: Setup (preBuild, postBuild, postClean)
it has the three hooks mentioned, which are exported from the package's Setup module. Each hook is just a function with the same type it has in Distribution.Simple.UserHooks. Each implementation's setup tool would determine how to call these hooks.
But how do you call the user's hooks? And what if they want to do something that can't be done with hooks?
Firstly, if "they" want to do something that can't be done by hooks, then it is likely that whatever they are doing isn't going to work in other Cabal-enabled tools like VHS.NET and other such tools
If those tools conform to the cabal interface, that is, the command-line interface and the required fields of the .cabal file, then the tool should work just fine. All of the layered tools I know of so far do not need to peer into the Setup file to determine which build system they are using. Maybe VS.NET assumes everyone is using the simple build infrastructure?
(one of which I am working on).
Can you tell me more about the tool you're working on? (snip)
I guess the important point is that there should be a way from looking at the Cabal file to determine if a tool can be built using the equivalent of "defaultMain."
I think layered tools should avoid this wherever possible. If layered tools can't work by executing the setup script, you're going to lock out the packages which roll their own build system. Please try to implement features which rely on the stated Cabal interface, or propose extensions to the interface which won't block out users who roll their own setup scripts. I would much rather keep the abstractions we've built and extend the interface rather than breaking the abstraction altogether for people who roll their own setup scripts.
For example, an IDE might use the GHC API to implement an "incremental build" feature, which would rebuild projects upon detecting changes to the source files (like Eclipse does). Furthermore, it might want to provide context-sensitive features like autocomplete that Cabal doesn't provide, and that requires knowledge of all the dependencies in the source code.
I don't understand what you mean; how does this involve the simple build system if you're using the GHC API?
Also, don't you think that GHCi should be able to read Cabal files so that you don't have to say "ghci -cpp -fglasgow-exts -package X -package Y -package Z MyModule" when all those options are already in the Cabal file?
That would definitely be cool.
Now, if a package description had this entry:
Build-Infrastructure: Make
Then, tools that don't have Make (e.g. VHS.NET) could gracefully say "Make is not a supported build infrastructure" or "Cannot build this package because make is not installed."
Also, imagine being able to do this for a Cabal package that has: Build-Depends: HUnit, HaXml
It seems to me that adding the single field that Duncan needs is a better solution than completely changing the way cabal works, and limiting the users in the process.
I don't see how my suggestion is limiting users,
It limits users by blocking out those who use their own build infrastructure.
or even completely changing the way Cabal works.
I thought you were suggesting that a stand-alone executable read and interpret the .cabal file, then execute functions from only a pre-determined set of build infrastructures. Perhaps you're saying that the stand-alone executable should run pre-determined build infrastructures if it knows about them and otherwise the user has to provide a Setup.lhs file which gets executed with the "system" call?
I will say that the current system seems inconvenient. For example, I just made some changes to the Win32 library on my local machine. I wanted to build and install the new version of Win32 using Cabal. The .cabal file is there but there is no "Setup.hs." So, I either have to add a Setup.hs myself, or reuse an existing one that is somewhere else.
That library should have been distributed with the Setup.lhs file; you're inconvenienced because they didn't follow the Cabal interface, not because of the cabal interface.
On one hand, it doesn't make sense to have dozens of identical "Setup.hs" files throughout fptools. On the other hand, every Cabal package is expected to have its own Setup.hs, AFAICT.
For now, it is the case that each cabal package must come with its own Setup.lhs file. I've been toying with the idea of including a cabal-setup executable, which just calls defaultMain and which can be used if people do use the simple build infrastructure. Right now the cabal interface is conservative so we can get a better idea of how people use it. If 90% of people end up using the same Setup.lhs file, then we'll probably stop requiring it and add a caveat that says "if there's no Setup script, then use the 'standard' one." peace, isaac

On Fri, 2005-07-29 at 12:43 -0700, Isaac Jones wrote:
For example, an IDE might use the GHC API to implement an "incremental build" feature, which would rebuild projects upon detecting changes to the source files (like Eclipse does). Furthermore, it might want to provide context-sensitive features like autocomplete that Cabal doesn't provide, and that requires knowledge of all the dependencies in the source code.
I don't understand what you mean; how does this involve the simple build system if you're using the GHC API?
I think the requirements of an IDE on a build system are probably even more extreme than what I've been banging on about -- the needs of packaging systems. Packaging systems want quite a bit of flexibility and insight into the build but don't really care too much about many internals, like whether it is incremental or monolithic. This provides flexibility by allowing multiple build systems to conform to the same Cabal interface.

However an IDE wants even more. It wants to be able to rebuild individual files quickly (so dependency tracking is required and linear build scripts are out). It will want to provide a GUI interface for changing build system parameters (which means that the build system has to be declarative, not scripted). And no doubt there are other things too. The point is that fulfilling these requirements might be possible for some imagined future version of the "simple" build system; however, if the Cabal interface is extended to stipulate these same features then it would exclude most other build system implementations.

Duncan

On 7/29/05, Isaac Jones wrote:
Brian Smith writes:
(one of which I am working on).
Can you tell me more about the tool you're working on?
Well, right now it is just a couple of really simple tools. One simply continuously builds a project. That is, if you execute:

    cabal-listen path/to/Package.cabal

then it automatically runs "runghc Setup.hs configure" when the Cabal file changes, it automatically does "runghc Setup.hs build" when any other file is modified, and it does "runghc Setup.hs clean && runghc Setup.hs build" when files are renamed or moved. If any errors are found (currently by grepping through the output) then it automatically opens my editor at the location of the first error. Pressing "n" and "p" cycles through the errors. The program is efficient about listening for file changes because it uses the Windows File Change Notification API, but on large packages it is too slow to build. My hope is that I will be able to use the GHC API to improve the efficiency of the build process, by keeping GHC's internal data structures for the package cached in memory between builds and only updating them on a module-by-module basis. Next week I want to convert it to wxHaskell.

The other tool takes a Cabal file and lists all the modules defined in that package, where you can e.g. double-click on a module to open it in an editor or browse its structure (top-level bindings). It uses wxHaskell.

I also used the GHC API to build a very crude Haddock-like tool that allowed me to browse the GHC API while I was learning it. Haddock doesn't handle GHC's recursive modules well. Using the GHC API also allowed me to do type inference automatically. But Haddock is really a lot more polished (for example, my tool does automatic hyperlinking, but it just displays the comments in their raw form, and it doesn't reorganize things into sections and subsections).
For example, an IDE might use the GHC API to implement an "incremental build" feature, which would rebuild projects upon detecting changes to the source files (like Eclipse does). Furthermore, it might want to provide context-sensitive features like autocomplete that Cabal doesn't provide, and that requires knowledge of all the dependencies in the source code.
I don't understand what you mean; how does this involve the simple build system if you're using the GHC API?
I think what VHS.NET does -- and what I am/was planning to do -- is use Cabal files as projects, but reimplement Distribution.Simple.Build et al. to work better in an "interactive" GUI environment than the current system, which is batch-compile oriented. In particular, VHS.NET and maybe my tool will use the GHC API extensively. For example, using the GHC API I can do dependency analysis that will allow me to say "build just this one source file because that other source file changed." But Cabal always restarts dependency analysis at the root modules, which makes it too slow for interactive use. I am not sure exactly how the eclipsefp tool builds files, but it pauses for too long during saving (Eclipse rebuilds after every save). As another example, I want to be able to typecheck a module inside an editor when the file hasn't been saved yet (like Java tools do). I don't see how I can reliably reuse the Cabal library to do that. I also want to know which names are in scope at a particular location in a source file. In order to do that, I need to know all of the current module's dependencies (and the current module might not have been saved even once yet).
I don't see how my suggestion is limiting users,
It limits users by blocking out those who use their own build infrastructure.
Well, if you took out the "hooks" part of my suggestion, then I expect Cabal's API would be the same. The only limitations would come in tools that use Cabal package descriptions but don't use Cabal's APIs. Or alternatively, I want to build tools that have optimizations for Cabal packages that don't require arbitrary code to execute during the build process.
I thought you were suggesting that a stand-alone executable read and interpret the .cabal file, then execute functions from only a pre-determined set of build infrastructures. Perhaps you're saying that the stand-alone executable should run pre-determined build infrastructures if it knows about them and otherwise the user has to provide a Setup.lhs file which gets executed with the "system" call?
Exactly. We could even change it to "Setup-Module:". Then adding a "cabal-setup" tool would be a minor change. Imagine such a cabal-setup tool with logic like this:

    If the "Build-Infrastructure" is Simple Then
        If there are no hooks Then
            -- Does not require any arbitrary code to build
            Distribution.Simple.defaultMain
        Else
            Distribution.Simple.defaultMainWithHooks ....
        End If
    Else
        -- The setup module can do whatever it wants. Use
        -- runghc/runhugs/whatever to execute the module identified,
        -- and return whatever exit code it returns.
    End If
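That dispatch translates almost directly into Haskell. A sketch, assuming a hypothetical Infrastructure type parsed from the package description and a flag saying whether hooks are present (none of these names are real Cabal API):

```haskell
-- Hypothetical types for the cabal-setup dispatch sketched above.
data Infrastructure = Simple | Custom String   -- Custom names a setup module
  deriving (Show, Eq)

data SetupAction
  = RunDefaultMain            -- Distribution.Simple.defaultMain
  | RunDefaultMainWithHooks   -- Distribution.Simple.defaultMainWithHooks ...
  | RunSetupModule String     -- execute the named module via runghc/runhugs
  deriving (Show, Eq)

-- Pure decision: only the Custom case ever runs arbitrary code.
dispatch :: Infrastructure -> Bool -> SetupAction
dispatch Simple False        = RunDefaultMain
dispatch Simple True         = RunDefaultMainWithHooks
dispatch (Custom setupMod) _ = RunSetupModule setupMod

main :: IO ()
main = mapM_ print
  [ dispatch Simple False
  , dispatch Simple True
  , dispatch (Custom "Setup") False
  ]
```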
if people do use the simple build infrastructure. Right now the cabal interface is conservative so we can get a better idea of how people use it. If 90% of people end up using the same Setup.lhs file, then we'll probably stop requiring it and add a caveat that says "if there's no Setup script, then use the 'standard' one."
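For reference, the Setup script that most simple packages end up sharing is just this boilerplate:

```haskell
-- The conventional Setup.hs for the simple build infrastructure.
import Distribution.Simple

main :: IO ()
main = defaultMain
```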
I think that makes a lot of sense. Peace, Brian

Brian Smith
On 7/29/05, Isaac Jones wrote:
Brian Smith writes:
(one of which I am working on).
Can you tell me more about the tool you're working on?
Well, right now it is just a couple of really simple tools.
(snip description of cool tools)
I think what VHS.NET does--and what I am/was planning to do--is use Cabal files as projects, but reimplement Distribution.Simple.Build et al. to work better in an "interactive" GUI environment than the current system, which is batch-compile oriented. In particular, VHS.NET and maybe my tool will use the GHC API extensively.
For example, using the GHC API I can do dependency analysis that will allow me to say "build just this one source file because that other source file changed." But Cabal always restarts dependency analysis at the root modules, which makes it too slow for interactive use.
(snip) Why not work to speed up Cabal's execution time rather than reimplement so much from scratch, and in a compiler-dependent way? There's nothing inherent about Cabal's interface that makes it do things in a batch, or slows compilation time. It uses GHC's --make flag (though it always relinks the library, which is not always necessary). If your tool is written in Haskell, and could be made compiler agnostic, perhaps we could add such features to cabal itself... maybe we could have a "./setup build --continuous" which emits status to a file or uses a socket interface or something in order to communicate w/ eclipse, VS, and your tool, since they often want the same kind of information. peace, isaac

Brian Smith
On 7/27/05, Duncan Coutts wrote:
In particular the ebuild must list all the dependencies of the package. Required Haskell libraries are listed in the "build-depends:" field in the cabal file, so that one is easy. However it is not clear how to find out the build tools that the cabal package requires.
First of all, you would have to parse the Setup.lhs/Setup.hs file to determine which build infrastructure is being used, whether any hooks are being used, etc. Assuming that the build infrastructure is "Simple" and there are no hooks being used, then you have choices:
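For the simple case, a packaging tool such as cabal2ebuild need not run any Haskell at all; it can extract fields from the .cabal file textually. A rough sketch, assuming simple one-line "Field: value" syntax (the real Cabal parser also handles continuation lines, comments, and sections; this deliberately does not):

```haskell
import Data.Char (toLower, isSpace)

-- Naive case-insensitive "Field: value" lookup in a .cabal file.
-- Illustrative only; not the parser from Distribution.PackageDescription.
lookupField :: String -> String -> Maybe String
lookupField field contents =
  case matches of
    (v:_) -> Just v
    []    -> Nothing
  where
    matches =
      [ dropWhile isSpace (drop 1 rest)       -- value after the colon
      | l <- lines contents
      , let (k, rest) = break (== ':') l
      , map toLower (trim k) == map toLower field
      ]
    trim = dropWhile isSpace . reverse . dropWhile isSpace . reverse

example :: String
example = unlines
  [ "Name:          foo"
  , "Build-Depends: base, haskell98"
  ]

main :: IO ()
main = print (lookupField "build-depends" example)
```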
(snip) That's a little over-the-top; maybe that's your point, though I don't see any problem with adding a new field for tools, not just packages. I think that would solve Duncan's problem. peace, isaac
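Such a field might look like this in a .cabal file ("Build-Tools" is a suggested name for the proposed field, not an existing Cabal field; the version-constraint syntax mirrors "Build-Depends" and would also answer Duncan's follow-up question about depending on particular tool versions):

```
Name:           foo
Version:        1.0
Build-Depends:  base, haskell98
Build-Tools:    alex >= 2.0, happy, c2hs >= 0.13
```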
participants (5)
- Bernard Pope
- Brian Smith
- Duncan Coutts
- Isaac Jones
- ross@soi.city.ac.uk