
Hi Ross, I'm experiencing the following problem. I'm trying to use a 'configure' script which is not a 'sh' script, but Cabal assumes that all 'configure' scripts are 'sh' scripts regardless of what '#!' line they start with. Isaac says this is your doing. What is the reason for it? The lines are in cabal/Distribution/Simple.hs:

    where
      defaultPostConf :: Args -> ConfigFlags -> LocalBuildInfo -> IO ExitCode
      defaultPostConf args flags lbi = do
        let prefix_opt pref opts = ("--prefix=" ++ pref) : opts
        confExists <- doesFileExist "configure"
        if confExists
          then do rawSystemVerbose (configVerbose flags) "sh"
                    ("configure" : maybe id prefix_opt (configPrefix flags) args)
                  return ()
          else no_extra_flags args
        return ExitSuccess

Note that if a file doesn't have any '#!' line at all then (at least on my system) it is executed with /bin/sh. So when you run something explicitly with /bin/sh, you are ONLY affecting things which explicitly ask to be run with something other than /bin/sh.

Isaac refuses to let me fix this until you speak up because he's afraid it might break the fptools build, so, ...

Frederik

-- 
http://ofb.net/~frederik/

On Sat, Aug 20, 2005 at 05:14:33PM -0700, Frederik Eaton wrote:
I'm experiencing the following problem. I'm trying to use a 'configure' script which is not a 'sh' script, but Cabal assumes that all 'configure' scripts are 'sh' scripts regardless of what '#!' line they start with. Isaac says this is your doing. What is the reason for it?
It's a kludge to make this work under MinGW, where #! isn't available for programs invoked from Haskell. I've now changed it to use sh only under MinGW, though this isn't a perfect solution.

Thanks. I guess you could try to parse the "#!" line if MinGW is detected? ... Anyway, it would seem that support for running "#!" scripts from Haskell is a problem which is much more general than Cabal and should be solved more centrally, like in the standard libraries. Cheers, Frederik On Thu, Aug 25, 2005 at 01:48:54AM +0100, Ross Paterson wrote:
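Frederik's suggestion could look something like the sketch below. The function name and types here are hypothetical, not part of Cabal or the standard libraries: read the first line of the script and, if it begins with "#!", extract the interpreter to invoke explicitly.

```haskell
-- Sketch only: parseShebang is a hypothetical helper, not part of
-- Cabal or the standard libraries.
import Data.List (isPrefixOf)

-- | Given the first line of a file, return the interpreter named on a
-- "#!" line, together with any arguments on that line.
parseShebang :: String -> Maybe (FilePath, [String])
parseShebang line
  | "#!" `isPrefixOf` line =
      case words (drop 2 line) of
        (interp : rest) -> Just (interp, rest)
        []              -> Nothing
  | otherwise = Nothing
```

A caller would then invoke the named interpreter explicitly, falling back to /bin/sh when there is no "#!" line, matching the Unix default described above.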
On Sat, Aug 20, 2005 at 05:14:33PM -0700, Frederik Eaton wrote:
I'm experiencing the following problem. I'm trying to use a 'configure' script which is not a 'sh' script, but Cabal assumes that all 'configure' scripts are 'sh' scripts regardless of what '#!' line they start with. Isaac says this is your doing. What is the reason for it?
It's a kludge to make this work under MinGW, where #! isn't available for programs invoked from Haskell. I've now changed it to use sh only under MinGW, though this isn't a perfect solution.

Hello Frederik,

Thursday, August 25, 2005, 7:37:39 AM, you wrote:

FE> I guess you could try to parse the "#!" line if MinGW is detected? ...
FE> Anyway, it would seem that support for running "#!" scripts from
FE> Haskell is a problem which is much more general than Cabal and should
FE> be solved more centrally, like in the standard libraries.

I strongly agree. Ignoring the first line if it starts with "#" should be added to all Haskell implementations.

-- 
Best regards,
Bulat                          mailto:bulatz@HotPOP.com

On Thu, Aug 25, 2005 at 12:03:25PM +0400, Bulat Ziganshin wrote:
Thursday, August 25, 2005, 7:37:39 AM, Frederik Eaton wrote:

FE> I guess you could try to parse the "#!" line if MinGW is detected? ...
FE> Anyway, it would seem that support for running "#!" scripts from
FE> Haskell is a problem which is much more general than Cabal and should
FE> be solved more centrally, like in the standard libraries.
I strongly agree. Ignoring the first line if it starts with "#" should be added to all Haskell implementations.
I think Frederik is suggesting that under Windows, System.Cmd.rawSystem and friends should examine the start of the file it is asked to execute, looking for #! and then try to simulate Unix behaviour. That might help some things, but it won't always work, so may not be worth it. Haskell implementations already ignore lines starting with # if the file is a literate script.

2005/8/25, Ross Paterson:
On Thu, Aug 25, 2005 at 12:03:25PM +0400, Bulat Ziganshin wrote:
Thursday, August 25, 2005, 7:37:39 AM, Frederik Eaton wrote:

FE> I guess you could try to parse the "#!" line if MinGW is detected? ...
FE> Anyway, it would seem that support for running "#!" scripts from
FE> Haskell is a problem which is much more general than Cabal and should
FE> be solved more centrally, like in the standard libraries.
I strongly agree. Ignoring the first line if it starts with "#" should be added to all Haskell implementations.
I think Frederik is suggesting that under Windows, System.Cmd.rawSystem and friends should examine the start of the file it is asked to execute, looking for #! and then try to simulate Unix behaviour. That might help some things, but it won't always work, so may not be worth it.
Haskell implementations already ignore lines starting with # if the file is a literate script.
It isn't so easy to simulate #! behaviour in rawSystem because the file path after #! is in Unix style. Cygwin keeps the mapping between Unix-style paths and the native Windows paths. Usually /usr/bin/sh is mapped to something like c:\cygwin\bin\sh.exe. All executables which are compiled against the cygwin.dll runtime library work with Unix paths, which are silently mapped to native paths. All GHC-compiled executables are linked to the native msvcrt.dll runtime library, so they understand only the native paths. I don't think that rawSystem should try to emulate Unix behaviour.

Cheers,
Krasimir
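The path-mapping problem Krasimir describes can be illustrated with a deliberately naive sketch. The real mapping lives inside Cygwin itself (the `cygpath` utility queries it, including custom mount points); this function hard-codes a single root purely to show the shape of the translation, and both names are hypothetical.

```haskell
-- Naive illustration only: real Unix-to-Windows path translation must
-- consult Cygwin's mount table (e.g. via the `cygpath -w` utility).
-- This hard-codes one root to show why a native GHC program cannot
-- simply pass the path after "#!" to Windows unchanged.
cygToWindows :: FilePath -> FilePath -> FilePath
cygToWindows cygRoot unixPath = cygRoot ++ map toBackslash unixPath
  where
    toBackslash '/' = '\\'
    toBackslash c   = c
```

For example, `cygToWindows "c:\\cygwin" "/usr/bin/sh"` yields the native path `c:\cygwin\usr\bin\sh`, but only because the mount point was supplied by hand.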

On Fri, Aug 26, 2005 at 09:37:20AM +0300, Krasimir Angelov wrote:
2005/8/25, Ross Paterson:
On Thu, Aug 25, 2005 at 12:03:25PM +0400, Bulat Ziganshin wrote:
Thursday, August 25, 2005, 7:37:39 AM, Frederik Eaton wrote:

FE> I guess you could try to parse the "#!" line if MinGW is detected? ...
FE> Anyway, it would seem that support for running "#!" scripts from
FE> Haskell is a problem which is much more general than Cabal and should
FE> be solved more centrally, like in the standard libraries.
I strongly agree. Ignoring the first line if it starts with "#" should be added to all Haskell implementations.
I think Frederik is suggesting that under Windows, System.Cmd.rawSystem and friends should examine the start of the file it is asked to execute, looking for #! and then try to simulate Unix behaviour. That might help some things, but it won't always work, so may not be worth it.
Haskell implementations already ignore lines starting with # if the file is a literate script.
It isn't so easy to simulate #! behaviour in rawSystem because the file path after #! is in Unix style. Cygwin keeps the mapping between Unix-style paths and the native Windows paths. Usually /usr/bin/sh is mapped to something like c:\cygwin\bin\sh.exe. All executables which are compiled against the cygwin.dll runtime library work with Unix paths, which are silently mapped to native paths. All GHC-compiled executables are linked to the native msvcrt.dll runtime library, so they understand only the native paths. I don't think that rawSystem should try to emulate Unix behaviour.
Then maybe Cabal needs to be linked to cygwin.dll? I don't know anything about Cygwin, or MinGW, or what the difference is between the two, but if a program written in Haskell running *within* Cygwin can't execute a #! script using rawSystem then something is wrong.

Frederik

-- 
http://ofb.net/~frederik/

On Fri, 2005-08-26 at 18:36 -0700, Frederik Eaton wrote:
On Fri, Aug 26, 2005 at 09:37:20AM +0300, Krasimir Angelov wrote:
It isn't so easy to simulate #! behaviour in rawSystem because the file path after #! is in Unix style. Cygwin keeps the mapping between Unix-style paths and the native Windows paths. Usually /usr/bin/sh is mapped to something like c:\cygwin\bin\sh.exe. All executables which are compiled against the cygwin.dll runtime library work with Unix paths, which are silently mapped to native paths. All GHC-compiled executables are linked to the native msvcrt.dll runtime library, so they understand only the native paths. I don't think that rawSystem should try to emulate Unix behaviour.
Then maybe Cabal needs to be linked to cygwin.dll? I don't know anything about Cygwin, or MinGW, or what the difference is between the two, but if a program written in Haskell running *within* Cygwin can't execute a #! script using rawSystem then something is wrong.
GHC on Windows is now a native application and it compiles native Windows applications. That is, it doesn't link with any Unix emulation libraries. For the vast majority of Windows users this is a good thing. (GHC has become vastly more popular amongst Windows users since it stopped depending on Cygwin.)

I think we just have to accept that Windows doesn't understand the #! thing.

In fact, Cabal packages that use configure scripts are not going to be portable to Windows anyway, since we cannot assume that users have MinGW/Cygwin installed. That is one of the reasons to favour the simple Cabal build system: it will work on Windows without any Unix utilities like sh, make, etc.

Duncan

On Sat, Aug 27, 2005 at 10:38:42AM +0100, Duncan Coutts wrote:
I think we just have to accept that windows doesn't understand the #! thing.
Indeed. (though the original setting was MinGW+MSYS, not Cygwin)
In fact, Cabal packages that use configure scripts are not going to be portable to Windows anyway, since we cannot assume that users have MinGW/Cygwin installed. That is one of the reasons to favour the simple Cabal build system: it will work on Windows without any Unix utilities like sh, make, etc.
For some packages that interface to C libraries, configure solves a real problem on Unix systems, and we need a way to solve that problem on Win32. It might be enough to include Win32 versions of the files that configure generates, and on Win32 to just copy those instead of running configure.
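Ross's suggestion of shipping pre-generated Win32 versions of configure's outputs can be sketched as a simple selection function. The file names and the use of System.Info.os here are assumptions for illustration, not anything Cabal actually does:

```haskell
-- Hypothetical sketch: choose a shipped, pre-generated config header
-- instead of running ./configure, keyed on the operating system.
import System.Info (os)  -- "mingw32" on a native Windows GHC

configHeaderFor :: String -> Maybe FilePath
configHeaderFor "mingw32" = Just "config.h.win32"  -- shipped in the package
configHeaderFor _         = Nothing                -- run ./configure instead

-- What this build host would use.
useShippedHeader :: Maybe FilePath
useShippedHeader = configHeaderFor os
```

On Unix the Nothing case would fall through to the existing behaviour of running the configure script; on Win32 the named file would simply be copied into place.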

Ross Paterson wrote:
On Sat, Aug 27, 2005 at 10:38:42AM +0100, Duncan Coutts wrote:
I think we just have to accept that windows doesn't understand the #! thing.
Indeed. (though the original setting was MinGW+MSYS, not Cygwin)
In fact, Cabal packages that use configure scripts are not going to be portable to Windows anyway, since we cannot assume that users have MinGW/Cygwin installed. That is one of the reasons to favour the simple Cabal build system: it will work on Windows without any Unix utilities like sh, make, etc.
For some packages that interface to C libraries, configure solves a real problem on Unix systems, and we need a way to solve that problem on Win32. It might be enough to include Win32 versions of the files that configure generates, and on Win32 to just copy those instead of running configure.
I would suggest that, while configure does solve a problem, it isn't the best way to solve the problem. A properly abstracted and layered implementation of O/S-specific calls, with each environment supported by an implementation file, is much closer to "doing the right thing." It's true that in such a setup many of the implementation files would be almost identical. I don't see this as a problem; I see it merely as reflecting the actual situation, which is that the supported environments are almost identical.

So I agree with Ross that for Win32 we should have a set of interface files. I would also assert that for all supported environments we should have a set of interface files. I did exactly this for an open source project and it worked flawlessly. However, people wanted to know why it didn't use configure. If configure identifies the environment and copies the correct files, that would satisfy the need for consistency (that is, for this package you use ./configure just as you do for many other packages). Using the methodology of configure, in my mind, is embracing an ugly philosophy.

I do realize that this position is more or less tilting at windmills.

Seth
_______________________________________________ Libraries mailing list Libraries@haskell.org http://www.haskell.org/mailman/listinfo/libraries

On Sat, 2005-08-27 at 09:02 -0400, Seth Kurtzberg wrote:
Ross Paterson wrote:
For some packages that interface to C libraries, configure solves a real problem on Unix systems, and we need a way to solve that problem on Win32. It might be enough to include Win32 versions of the files that configure generates, and on Win32 to just copy those instead of running configure.
I would suggest that, while configure does solve a problem, it isn't the best way to solve the problem. A properly abstracted and layered implementation of O/S specific calls, with each environment supported by an implementation file, is much closer to "doing the right thing."
[snip]
However, people wanted to know why it didn't use configure. If configure identifies the environment and copies the correct files, that would satisfy the need for consistency (that is, for this package you use ./configure just as you do for many other packages). Using the methodology of configure, in my mind, is embracing an ugly philosophy.
I think the problem here is not the configure philosophy but its implementation using standard Unix tools. That obviously doesn't work on Windows. However, the idea of configure I think is sound. Instead of creating a bit of code for each platform you want to support, you identify features of the environment instead (e.g. "does ld support the '-x' flag on this system?"). That way you can port to systems you'd never thought of, or cope with changes in a target platform more easily.

Duncan
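Duncan's contrast can be made concrete with a small sketch (all names hypothetical): keying behaviour on a platform list silently guesses on any system nobody anticipated, while keying it on a recorded probe result works wherever the probe can run.

```haskell
-- Hypothetical sketch of the two styles contrasted in the thread.

-- Platform-list style: an unknown system gets a guess.
ldStripArgsByPlatform :: String -> [String]
ldStripArgsByPlatform "linux"   = ["-x"]
ldStripArgsByPlatform "freebsd" = ["-x"]
ldStripArgsByPlatform _         = []  -- never heard of it: assume no flag

-- Feature-test style: the answer comes from actually probing the
-- system (e.g. running ld with -x and checking the exit status),
-- however exotic the platform is.
ldStripArgsByProbe :: Bool -> [String]
ldStripArgsByProbe ldAcceptsX = if ldAcceptsX then ["-x"] else []
```

The IO step that produces the Bool (running the probe and inspecting its exit code) is deliberately left out; the point is only that the decision depends on an observed feature, not on a platform name.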

On Sat, Aug 27, 2005 at 02:20:36PM +0100, Duncan Coutts wrote:
On Sat, 2005-08-27 at 09:02 -0400, Seth Kurtzberg wrote:
I would suggest that, while configure does solve a problem, it isn't the best way to solve the problem. A properly abstracted and layered implementation of O/S specific calls, with each environment supported by an implementation file, is much closer to "doing the right thing."
I think the problem here is not the configure philosophy but its implementation using standard Unix tools. That obviously doesn't work on Windows.
I don't think that even that is a problem. In a world of dozens of varieties of Unix running on dozens of architectures, plus a handful of other os/arch combinations (widespread though one of them is), it is efficient to use existing Unix tools to handle the vast majority of cases, with special treatment for the few exceptions. One could redo autoconf in Haskell, but why bother?

Am Samstag, 27. August 2005 15:02 schrieb Seth Kurtzberg:
[...] I would suggest that, while configure does solve a problem, it isn't the best way to solve the problem. A properly abstracted and layered implementation of O/S specific calls, with each environment supported by an implementation file, is much closer to "doing the right thing."
Well, I don't want to start a Jihad regarding the usefulness of autoconf; the autoconf documentation itself contains a rather good explanation of why testing features is far superior to assuming a fixed (and probably much too small) set of platforms in advance.

I only want to point out that the autotools solve problems which go *far* beyond anything which could be achieved by writing simple abstractions for platform features: they can find out if your compiler/linker/library/header/... has a certain bug (the autoconf macros are full of examples for every category), which version of an API (which might have changed in a non-backwards-compatible way, see e.g. OpenAL) is actually contained in a library/header, which dozens of (often proprietary) linker options are needed to use a certain feature, how to create and use a dynamic library, etc. Simply writing an abstraction layer would solve none of the problems mentioned above. Of course all these problems are bad and should not be there at all, but simply ignoring them means closing one's eyes to the current "state of the art" in real-life computer science. And for a casual user, these are all *hard* problems! Trying to solve these problems without autotools, one usually ends up re-inventing the wheel (i.e. writing autotools-like code), but probably much, much worse (see e.g. qmake).
[...] I do realize that this position is more or less tilting at windmills.
I'd really be happy to learn how the problems mentioned above could be solved without autotools or basically re-inventing autotools, seriously. I hate writing obscure lines in M4 and sh probably as much as you do, but I can't see a viable alternative. Rewriting all this stuff (plus all the utilities used in the macros!) in Haskell doesn't look very attractive or realistic...

Cheers,
S.

Sven Panne wrote:
Am Samstag, 27. August 2005 15:02 schrieb Seth Kurtzberg:
[...] I would suggest that, while configure does solve a problem, it isn't the best way to solve the problem. A properly abstracted and layered implementation of O/S specific calls, with each environment supported by an implementation file, is much closer to "doing the right thing."
Well, I don't want to start a Jihad regarding the usefulness of autoconf; the autoconf documentation itself contains a rather good explanation of why testing features is far superior to assuming a fixed (and probably much too small) set of platforms in advance.
I only want to point out that the autotools solve problems which go *far* beyond anything which could be achieved by writing simple abstractions for platform features: they can find out if your compiler/linker/library/header/... has a certain bug (the autoconf macros are full of examples for every category), which version of an API (which might have changed in a non-backwards-compatible way, see e.g. OpenAL) is actually contained in a library/header, which dozens of (often proprietary) linker options are needed to use a certain feature, how to create and use a dynamic library, etc. Simply writing an abstraction layer would solve none of the problems mentioned above. Of course all these problems are bad and should not be there at all, but simply ignoring them means closing one's eyes to the current "state of the art" in real-life computer science. And for a casual user, these are all *hard* problems! Trying to solve these problems without autotools, one usually ends up re-inventing the wheel (i.e. writing autotools-like code), but probably much, much worse (see e.g. qmake).
[...] I do realize that this position is more or less tilting at windmills.
I'd really be happy to learn how the problems mentioned above could be solved without autotools or basically re-inventing autotools, seriously. I hate writing obscure lines in M4 and sh probably as much as you do, but I can't see a viable alternative. Rewriting all this stuff (plus all the utilities used in the macros!) in Haskell doesn't look very attractive and realistic...
I'd have to turn the question around. In several major projects, I've never come across a situation where any of the autoconf hacks are necessary. I wouldn't reinvent autoconf. If I needed it, I would use it. I just have never needed it.

The problem with autoconf is that you have no idea, watching it run, which of the many things it tests are actually used. It does all the same tests for all programs.

I did apply a tool to two large projects that automatically generated autoconf support. It worked fine, but since the code compiled just fine without it, that doesn't really show anything one way or the other. The code without autoconf was built on Linux, FreeBSD, NetBSD, Solaris, SunOS, SGI's UNIX, and HP's UNIX, as well as, of course, Win32. Of course there was no Win32-autoconf issue, because there was no autoconf at all; just three files copied. The set of UNIX systems tested is a subset, but a fairly large subset. It is possible that there are issues that weren't exposed by that set of UNIX environments, but I haven't had any reports of this. The interface files were, in total, about 200 lines. The differences among them were minor, but there were differences.

On Tue, 2005-08-30 at 00:41 -0400, Seth Kurtzberg wrote:
I'd have to turn the question around. In several major projects, I've never come across a situation where any of the autoconf hacks are necessary. I wouldn't reinvent autoconf. If I needed it, I would use it. I just have never needed it.
The problem with autoconf is that you have no idea, watching it run, which of the many things it tests are actually used. It does all the same tests for all programs.
I did apply a tool to two large projects that automatically generated autoconf support. It worked fine, but since the code compiled just fine without it, that doesn't really show anything one way or the other.
Sounds like you were just using a boilerplate configure.ac file with some "standard" set of useful tests that someone had thought up once (or grown over time from experience in other projects).

I work on a Haskell project that uses autoconf and automake. We started with an essentially empty configure.ac file. We have added tests whenever we found they were necessary, for example to test for minimum versions of ghc and other build tools, and for the appropriate C compiler flags for some C libraries. We've also had to add and fix things when compiling on new platforms (which now include Linux, Windows, Mac OS X, FreeBSD, OpenBSD, Solaris). (The configure script works fine on Windows, but obviously only with MinGW installed.) So everything in our configure script has a purpose (on one platform or another).

Duncan
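One of the tests Duncan mentions, a minimum ghc version, reduces to a numeric, component-wise comparison once the version string is split. This pure sketch assumes the string would come from something like `ghc --numeric-version`; the helper names are hypothetical.

```haskell
-- Hypothetical helpers for a minimum-version test like the one
-- described above.  In a real configure script the version string
-- would come from running `ghc --numeric-version`.

-- Split a dotted version string: "6.4.1" becomes [6,4,1].
parseVersion :: String -> [Int]
parseVersion s = case break (== '.') s of
  (n, "")       -> [read n]
  (n, _ : rest) -> read n : parseVersion rest

-- Comparing the component lists handles both differing lengths
-- (6.4 < 6.4.1) and multi-digit components (6.10 > 6.4), which a
-- plain string comparison would get wrong.
versionAtLeast :: String -> String -> Bool
versionAtLeast have want = parseVersion have >= parseVersion want
```

For instance, comparing the strings directly would claim "6.10" < "6.4"; comparing `[6,10]` with `[6,4]` gives the right answer.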

Sven Panne writes:
I'd really be happy to learn how the problems mentioned above could be solved without autotools or basically re-inventing autotools, seriously. I hate writing obscure lines in M4 and sh probably as much as you do, but I can't see a viable alternative. Rewriting all this stuff (plus all the utilities used in the macros!) in Haskell doesn't look very attractive and realistic...
I should point out that re-inventing autotools has never been a goal of Cabal. We do work to detect a few things, like the ghc version and such, but I don't see this expanding into a reimplementation of autotools. We have the ability to interface with autotools, though, which I think is appropriate. peace, isaac

Isaac Jones wrote:
Sven Panne writes: (snip)
I'd really be happy to learn how the problems mentioned above could be solved without autotools or basically re-inventing autotools, seriously. I hate writing obscure lines in M4 and sh probably as much as you do, but I can't see a viable alternative. Rewriting all this stuff (plus all the utilities used in the macros!) in Haskell doesn't look very attractive and realistic...
I should point out that re-inventing autotools has never been a goal of Cabal. We do work to detect a few things, like the ghc version and such, but I don't see this expanding into a reimplementation of autotools. We have the ability to interface with autotools, though, which I think is appropriate.
I didn't intend to say or imply anything about Cabal and autoconf; sorry about any confusion. I was talking about autoconf in general. I argued (still arguing :-) ) that autoconf is not the best way to handle platform variations. Sven argued that, if I did use my concept, I'd end up reimplementing autoconf. (I still don't buy this. :-) ) Again, sorry for any confusion; personally, sorry for being unclear.

Seth

Sven: It is probably a good idea to take this to its own thread.

Seth
peace,
isaac

On Sat, Aug 27, 2005 at 10:38:42AM +0100, Duncan Coutts wrote:
On Fri, 2005-08-26 at 18:36 -0700, Frederik Eaton wrote:
On Fri, Aug 26, 2005 at 09:37:20AM +0300, Krasimir Angelov wrote:
It isn't so easy to simulate #! behaviour in rawSystem because the file path after #! is in Unix style. Cygwin keeps the mapping between Unix-style paths and the native Windows paths. Usually /usr/bin/sh is mapped to something like c:\cygwin\bin\sh.exe. All executables which are compiled against the cygwin.dll runtime library work with Unix paths, which are silently mapped to native paths. All GHC-compiled executables are linked to the native msvcrt.dll runtime library, so they understand only the native paths. I don't think that rawSystem should try to emulate Unix behaviour.
Then maybe Cabal needs to be linked to cygwin.dll? I don't know anything about Cygwin, or MinGW, or what the difference is between the two, but if a program written in Haskell running *within* Cygwin can't execute a #! script using rawSystem then something is wrong.
GHC on Windows is now a native application and it compiles native Windows applications. That is, it doesn't link with any Unix emulation libraries. For the vast majority of Windows users this is a good thing. (GHC has become vastly more popular amongst Windows users since it stopped depending on Cygwin.)
I think we just have to accept that windows doesn't understand the #! thing.
It seems that it should be possible for the same installation of GHC to behave both ways depending on the environment in which it is run. If GHC running in Unix recognizes "#!", then it seems that GHC running in a Unix emulation environment inside Windows should do the same thing, for compatibility, even if the default behavior in the DOS prompt (or whatever it is called) is different.

Frederik

-- 
http://ofb.net/~frederik/
participants (8)
- Bulat Ziganshin
- Duncan Coutts
- Frederik Eaton
- Isaac Jones
- Krasimir Angelov
- Ross Paterson
- Seth Kurtzberg
- Sven Panne