On Sat, Jun 1, 2013 at 8:43 AM, Rustom Mody <rustompmody@gmail.com> wrote:


On Sat, Jun 1, 2013 at 8:14 AM, Brandon Allbery <allbery.b@gmail.com> wrote:
On Fri, May 31, 2013 at 10:20 PM, Gan Uesli Starling <gan@starling.us> wrote:
So, it would be something to allow an author to issue programs whose end-users would NOT have to know anything about Haskell itself and would have to, at most, perform a two-step, wholly automatic installation procedure. Short of this, anything I might aspire to give away free to the public en masse could not conceivably be written in Haskell. In which case, I'll respectfully bow out from endeavoring to learn it myself, however useful it serves for many another purpose.

Aside from system libraries, GHC already works that way as long as you don't configure your packages to be dynamic. You can also force static linking of system libraries with -static; but note that this also links libc statically, which is problematic on at least Linux and Solaris. (Usually, the only problematic system library is gmp.)

I believe the problem is deeper than just moving executables from here to there.

No, there is nothing "deep" about this, and there's nothing specific to Haskell either...

As Brandon explained, the notion of a "turn-key compiler" doesn't really apply here, not because it is Haskell but because GHC is a real compiler: it takes Haskell programs and compiles them to native code, that is, an executable that can be run directly on the OS/architecture you've compiled for.
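To make that concrete, here is a minimal sketch (the file name Hello.hs is just an example I'm making up):

    -- Hello.hs
    main :: IO ()
    main = putStrLn "Hello from a native executable"

Compile it with "ghc Hello.hs -o hello" and you get a hello binary that runs on its own, without GHC installed, as long as the shared libraries it references are present (more on that below).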

What I'm going to explain isn't specific to Haskell at all but applies to all compiled languages (C, C++, Go, Pascal, Ada, ...):
So in the beginning, the compiler took your code, your libraries and all that, and made a self-sufficient executable containing all the native code needed to run on your machine, and all was good in the world. The problem is that you put in all this content from your libraries, some of which all your programs use, and that's a waste: you get the same native code replicated dozens, hundreds of times throughout your executables... So we invented shared libraries (.dll on Windows, .so on Linux/Unix), which are already compiled and contain the native code of a library; now when compiling (and linking) an executable, we don't include the library code, we just put in a reference to the shared library, and thus our executables are smaller!
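You can see those references for yourself on Linux: build dynamically and ask ldd what the executable points to (reusing the hypothetical Hello.hs from above):

    $ ghc -dynamic Hello.hs -o hello
    $ ldd hello

ldd lists the .so files the binary expects to find at run time; for a GHC-built program that typically includes libgmp.so and libc.so.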
So we're better off now, no? Well... what if you download an executable and you don't have all the shared libraries it needs (or have the wrong versions)? Suddenly your executable is not self-sufficient anymore!

On Linux, we solved the problem with package managers that know which libraries need to be installed for each package to work, but it means that if you're not using your package manager to install a piece of software, you'll have to install the libraries it needs manually (which is probably what happened to Rustom: his Debian upgrade removed or upgraded the libgmp.so that his manually installed cabal needed).
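If you want to check whether that's what happened, ldd will also tell you which shared libraries a binary can no longer find (assuming cabal is on your PATH):

    $ ldd $(which cabal) | grep "not found"

Any line printed there is a library the dynamic linker is missing, e.g. a libgmp.so that the Debian upgrade removed.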
On Windows, we solved the problem by... nah, let's just pretend we don't care and leave the application developers on their own! That is the source of DLL hell, and the reason every single application ships with all the DLLs that aren't provided by Windows itself, so you end up with a ton of identical copies of the same library all over your filesystem.

Now in practice, you could still compile everything statically (no shared libraries; all the necessary library code is included in the executable) by passing the right options to GHC, or you could just compile normally and ship the shared libraries in the zip you distribute (on Windows; on Linux it would be better to use the package manager's facilities).
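For the fully static route on Linux, one commonly used GHC incantation is something like this (with Brandon's caveat in mind that statically linking libc and gmp can be problematic):

    $ ghc -static -optl-static -optl-pthread Hello.hs -o hello

-optl-static and -optl-pthread just pass -static and -pthread on to the linker, so the resulting hello should not reference any .so at all (you can check with ldd).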

--
Jedaï