
"Simon Marlow"
On 26 October 2004 22:34, Isaac Jones wrote:
"Simon Marlow"
writes: (snip)
3. We add CPP to the list of extensions in Cabal, so you can say {-# LANGUAGE CPP #-} to get C preprocessing in the current file, or add CPP to the list of extensions in the package description to get CPP on every file. That doesn't preclude also using a .cpp extension, but it means you don't have to.
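For concreteness, a per-file opt-in might look something like the following minimal sketch (the module, its one export, and the macro test are illustrative only, not code from this thread):

    {-# LANGUAGE CPP #-}
    -- The CPP pragma asks for C preprocessing of just this module.
    module Compat (compilerName) where

    -- __GLASGOW_HASKELL__ is defined by GHC's cpp pass; Hugs defines
    -- __HUGS__ when it preprocesses a file.
    compilerName :: String
    #ifdef __GLASGOW_HASKELL__
    compilerName = "ghc"
    #else
    compilerName = "something else"
    #endif

The package-wide form would instead list CPP in the extensions field of the package description.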
For Hugs this would mean preprocessing all the files, putting the new .hs files into a temp directory, and compiling those.
Yes, modulo Henrik's comments. I think you mean put the preprocessed .hs files into the dist/build/ directory and install them from there.
Right, but I was actually talking about the 'build' step, which is a bit funny... For the build step, we have to do preprocessing still, so that the developer or user of VS can use the code.

For the install step, unlike the other systems, we're proposing that we do the preprocessing here rather than at the build step, since presumably it would be more convenient to be platform-independent at that point? I'm not sure why this is such a great idea. For every other Haskell implementation, once you build, it's tied to the platform, and you no longer need the build tools. Why not do the same thing for Hugs? Otherwise, the install step won't be based on the sources produced during the build step, but rather on the unpreprocessed sources. Usually the build step prepares the sources.

Also, what if there are other preprocessors that have to be run after cpp? Are we going to require those to be installed on the target machine too?

Sorry if I'm going over ground we've already covered... I am in a training class this week and don't have much internet time. Feel free to point me to the archive URLs for any of my questions :)

peace,

isaac

Hi Isaac, CC Others
Right, but I was actually talking about the 'build' step, which is a bit funny... For the build step, we have to do preprocessing still, so that the developer or user of VS can use the code.
For the install step, unlike the other systems, we're proposing that we do the preprocessing here rather than at the build step since presumably it would be more convenient to be platform independent at that point?
I'm not sure why this is such a great idea. For every other Haskell implementation, once you build, it's tied to the platform, and you no longer need the build tools. Why not do the same thing for Hugs?
Otherwise, the install step won't be based on the sources produced during the build step, but rather on the unpreprocessed sources. Usually the build step prepares the sources.
Also, what if there are other preprocessors that have to be run after cpp? Are we going to require those to be installed on the target machine too?
Sorry if I'm going over ground we've already covered... I am in a training class this week and don't have much internet time. Feel free to point me to the archive URLs for any of my questions :)
I'm not sure I quite understand. But maybe it would be useful if I outline the basic approach we took in the Yale/Yampa build system in this respect.

One main design goal was that compilers and interpreters should be treated as similarly as possible. To that end, we decided that preprocessed sources (all forms of preprocessing: cpp, arrows, ...) are to an interpreter what object code is to a compiler. [We even applied this principle to applications by generating a wrapper to invoke the application when building for an interpreter (using runhugs in the case of Hugs) to play the role of the linked executable that would be produced when using a compiler.]

Thus, for both compilers and interpreters, building a library or an application from a source distribution means that the entire tool chain has to be available. In our system, we don't put generated code in a dist/build directory for this case (possibly a mistake). Things are built "in place" and then copied to the installation directories. Cpp-ing for Hugs is a special case and is delayed until the installation step.

This does not mean that building is a "no-op" for an interpreter like Hugs. There can still be quite a bit of building, e.g. running Happy and Greencard. Moreover, we also run the arrows preprocessor during the build step (we had adopted special file name extensions to identify arrowized source). But I guess that arrow preprocessing will have to be handled more like cpp-ing under Cabal.

Our idea for handling binary distributions (although we never got around to fully implementing this) was to "install" to a temporary directory, and then to wrap up that directory along with a script (a specially generated Makefile in our case) that could carry out the real installation later. Thus, for a compiler, a binary installation is what you would expect. In particular, no special tools are needed when the end-user installs on his or her system. For an interpreter, a "binary distribution" means that the preprocessed sources end up in the distribution. Again, the end-user then does not need any special tools for installation. But of course, just as for a normal binary distribution, the preprocessed sources are likely to be platform specific.

Thus, the same pros and cons would hold for binary versus source distributions, both for compilers and interpreters. If one wants the flexibility of choosing the Haskell system and the operating system platform at installation time, then one needs a source distribution. If one is interested in installing a library for a particular Haskell system on a particular platform with the least amount of fuss (and without a complete tool chain), then one can pick a "binary" distribution if available.

I hope this at least clarifies the approach we took in the Yale/Yampa build system. I think it is fairly natural and general, although possibly a bit naive when it comes to distributions.

Best regards,
/Henrik

--
Henrik Nilsson
School of Computer Science and Information Technology
The University of Nottingham
nhn@cs.nott.ac.uk
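A minimal sketch of what install-time cpp-ing for Hugs might amount to, assuming a plain external cpp; the flag set, the installForHugs name, and the flat directory layout are illustrative assumptions rather than code from either build system:

    import System.Cmd  (rawSystem)
    import System.Exit (ExitCode (..))

    -- Run each module through cpp on its way into the Hugs library
    -- directory, instead of during the build step.  "-P" suppresses
    -- line markers; "-traditional" avoids cpp tripping over Haskell
    -- comment and operator syntax; "-D__HUGS__" mimics what Hugs
    -- itself defines when preprocessing.
    installForHugs :: FilePath -> FilePath -> [FilePath] -> IO ()
    installForHugs srcDir destDir mods = mapM_ cppOne mods
      where
        cppOne m = do
          ec <- rawSystem "cpp"
                  [ "-traditional", "-P", "-D__HUGS__"
                  , srcDir ++ "/" ++ m, destDir ++ "/" ++ m ]
          case ec of
            ExitSuccess -> return ()
            _           -> ioError (userError ("cpp failed on: " ++ m))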

Hi Henrik,

Your outline of how the Yale build system treats preprocessing made sense to me, except for the special case of delaying CPP for Hugs until installation. I think that's where my confusion came in. What advantage is there in delaying CPP until installation time?
Thus, the same pros and cons would hold for binary versus source distributions, both for compilers and interpreters. If one wants the flexibility of choosing the Haskell system and the operating system platform at installation time, then one needs a source distribution. If one is interested in installing a library for a particular Haskell system on a particular platform with the least amount of fuss (and without a complete tool chain), then one can pick a "binary" distribution if available.
Cabal almost supports this trade-off now; we don't really have a binary distribution as such, though we plan to support it... We build into a temporary directory, and install just moves the built stuff into place and does the registration. The only missing part is the creation of a binary tarball or something; we don't have a platform-independent way to do this yet.

For sdist, all we do is call 'tar' with the 'z' (gzip) option, but this is completely broken on Windows so far. I'll happily accept patches that'll make this code more generic. That should make 'bdist' easy to implement.

peace,

isaac
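A sketch of the kind of fallback logic a more portable sdist/bdist step might use; which archivers to probe for, their flags, and the createArchive name are assumptions, not existing Cabal code:

    import System.Cmd  (rawSystem)
    import System.Exit (ExitCode (..))

    -- Try the external archivers we know about, in order, until one
    -- succeeds.  Prelude's catch turns "command not found" into an
    -- ordinary failure so we move on to the next candidate.
    createArchive :: FilePath -> IO ()
    createArchive dir = tryEach candidates
      where
        candidates =
          [ ("tar", ["-czf", dir ++ ".tar.gz", dir])
          , ("zip", ["-r",   dir ++ ".zip",    dir]) ]
        tryEach [] = ioError (userError "no usable archiver found")
        tryEach ((cmd, args) : rest) = do
          ec <- rawSystem cmd args `catch` \_ -> return (ExitFailure 127)
          case ec of
            ExitSuccess -> return ()
            _           -> tryEach rest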
participants (2):
- Henrik Nilsson
- Isaac Jones