Arch Haskell News: Nov 23 2008

News about Haskell on Arch Linux:

* Arch now has 734 Haskell packages
* That's an increase of 29 new packages in the last 8 days!
* 3.6 new Haskell releases are occurring each day.

Noteworthy:

* haskell-hledger-0.2: “A ledger-compatible text-based accounting tool.”
* gitit-0.3.1: “Wiki using HAppS, git, and pandoc.”
* lhc-20081121: “Lhc Haskell Compiler”
* haskell-hosc-0.6: “Haskell Open Sound Control”
* haskell-flickr-0.3.2: “Haskell binding to the Flickr API”
* haskell-delicious-0.3.2: “Accessing the del.icio.us APIs from Haskell (v2)”
* haskell-mediawiki-0.2.3: “Interfacing with the MediaWiki API”
* darcs-2.1.2.2: “a distributed, interactive, smart revision control system”

Full update list: http://archhaskell.wordpress.com/2008/11/24/arch-haskell-news-nov-23-2008/

-- Don

Don Stewart wrote:
Noteworthy,
* lhc-20081121: “Lhc Haskell Compiler”
Interesting. I can't find out any information about this... From time to time you do hear about Haskell compilers that aren't GHC, but I'm not aware of any other compilers that are production-grade yet. Has anybody ever made one? (Hugs is the only one I know of...)

Andrew Coppin wrote:
Don Stewart wrote:
Noteworthy, * lhc-20081121: “Lhc Haskell Compiler”
Interesting. I can't find out any information about this...
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC. - Jake

Jake McArthur wrote:
Andrew Coppin wrote:
Don Stewart wrote:
Noteworthy, * lhc-20081121: “Lhc Haskell Compiler”
Interesting. I can't find out any information about this...
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Yeah, the "implementations" page on the Wiki basically says that there's GHC and Hugs, and there's also these things called YHC, NHC and JHC. All the documentation I've read makes these latter compilers sound highly experimental and unusable. (I don't recall specifically which of them, but I remember hearing it can't even compile the Prelude yet.) They seem like small projects which are probably interesting to hack with, but not much use if you're trying to produce production-grade compiled code to give to a customer... OTOH, I haven't ever attempted to *use* any of these compilers. I only read about them...

On Wed, Nov 26, 2008 at 09:35:01PM +0000, Andrew Coppin wrote:
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Yeah, the "implementations" page on the Wiki basically says that there's GHC and Hugs, and there's also these things called YHC, NHC and JHC. All the documentation I've read makes these latter compilers sound highly experimental and unusable.
I wouldn't call nhc experimental; it's quite usable, at least for standard Haskell-98 stuff (plus some language extensions). Ciao, Kili

On Wed, Nov 26, 2008 at 4:58 PM, Matthias Kilian
On Wed, Nov 26, 2008 at 09:35:01PM +0000, Andrew Coppin wrote:
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Yeah, the "implementations" page on the Wiki basically says that there's GHC and Hugs, and there's also these things called YHC, NHC and JHC. All the documentation I've read makes these latter compilers sound highly experimental and unusable.
I wouldn't call nhc experimental; it's quite usable, at least for standard Haskell-98 stuff (plus some language extensions).
How old is nhc? I've always thought of it as one of the "big three",
but I don't really know how far back it goes compared to ghc.
--
Dave Menendez

On Wed, Nov 26, 2008 at 11:14 PM, David Menendez
How old is nhc? I've always thought of it as one of the "big three", but I don't really know how far back it goes compared to ghc.
The following page suggests that it was released in mid-1994, but there could of course have been earlier releases: http://www.cs.chalmers.se/pub/haskell/nhc/old/ Perhaps Malcolm Wallace knows more. Cheers, Josef

On 2008 Nov 26, at 16:58, Matthias Kilian wrote:
On Wed, Nov 26, 2008 at 09:35:01PM +0000, Andrew Coppin wrote:
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Yeah, the "implementations" page on the Wiki basically says that there's GHC and Hugs, and there's also these things called YHC, NHC and JHC. All the documentation I've read makes these latter compilers sound highly experimental and unusable.
I wouldn't call nhc experimental; it's quite usable, at least for standard Haskell-98 stuff (plus some language extensions).
On a related topic: whatever happened to the compiler shootout? (Aside from dons leaving unsw)

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH

allbery:
On 2008 Nov 26, at 16:58, Matthias Kilian wrote:
On Wed, Nov 26, 2008 at 09:35:01PM +0000, Andrew Coppin wrote:
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Yeah, the "implementations" page on the Wiki basically says that there's GHC and Hugs, and there's also these things called YHC, NHC and JHC. All the documentation I've read makes these latter compilers sound highly experimental and unusable.
I wouldn't call nhc experimental; it's quite usable, at least for standard Haskell-98 stuff (plus some language extensions).
On a related topic: whatever happened to the compiler shootout? (Aside from dons leaving unsw)
Malcolm continues the tradition, http://code.haskell.org/nobench/

Am 27.11.2008 um 09:23 schrieb Don Stewart:
allbery:
On 2008 Nov 26, at 16:58, Matthias Kilian wrote:
On Wed, Nov 26, 2008 at 09:35:01PM +0000, Andrew Coppin wrote:
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Yeah, the "implementations" page on the Wiki basically says that there's GHC and Hugs, and there's also these things called YHC, NHC and JHC. All the documentation I've read makes these latter compilers sound highly experimental and unusable.
I wouldn't call nhc experimental; it's quite usable, at least for standard Haskell-98 stuff (plus some language extensions).
On a related topic: whatever happened to the compiler shootout? (Aside from dons leaving unsw)
Malcolm continues the tradition,
All result links are broken.

On 27/11/2008, at 8:35 AM, Andrew Coppin wrote:
Jake McArthur wrote:
Andrew Coppin wrote:
Don Stewart wrote:
Noteworthy, * lhc-20081121: “Lhc Haskell Compiler”
Interesting. I can't find out any information about this...
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Yeah, the "implementations" page on the Wiki basically says that there's GHC and Hugs, and there's also these things called YHC, NHC and JHC. All the documentation I've read makes these latter compilers sound highly experimental and unusable. (I don't recall specifically which of them, but I remember hearing it can't even compile the Prelude yet.) They seem like small projects which are probably interesting to hack with, but not much use if you're trying to produce production-grade compiled code to give to a customer...
OTOH, I haven't ever attempted to *use* any of these compilers. I only read about them...
Don't forget hbc. There's plenty of information about all the compilers in the History of Haskell paper, including a timeline: http://research.microsoft.com/users/simonpj/papers/history-of-haskell/index.... Cheers, Bernie.

On Wed, Nov 26, 2008 at 03:29:43PM -0600, Jake McArthur wrote:
Interesting. I can't find out any information about this...
It is a fork of the JHC compiler, which should be easier to look up. There is also Hugs, as you mentioned. In addition, you may want to look at YHC and NHC.
Hmm.. This one is news to me. John -- John Meacham - ⑆repetae.net⑆john⑈

Hello Andrew,
On Wed, Nov 26, 2008 at 4:24 PM, Andrew Coppin
Don Stewart wrote:
Noteworthy, * lhc-20081121: "Lhc Haskell Compiler"
Interesting. I can't find out any information about this...
Here is the current homepage for the LHC project: http://lhc.seize.it/ Hope that helps.

--
Donnie Jones

Donnie Jones wrote:
Here is the current homepage for the LHC project: http://lhc.seize.it/
Hope that helps.
Yes. I found that - it just didn't *say* very much. ;-) I guess like many small projects, they're too busy *doing* it to have time to document it.

On 27 Nov 2008, at 10:56 am, Andrew Coppin wrote:
Donnie Jones wrote:
Here is the current homepage for the LHC project: http://lhc.seize.it/ Yes. I found that - it just didn't *say* very much. ;-)
I really really wish there were just one more sentence on that page saying WHY there is a fork of JHC.

On Wed, Nov 26, 2008 at 6:19 PM, Richard O'Keefe
On 27 Nov 2008, at 10:56 am, Andrew Coppin wrote:
Donnie Jones wrote:
Here is the current homepage for the LHC project: http://lhc.seize.it/ Yes. I found that - it just didn't *say* very much. ;-)
I really really wish there were just one more sentence on that page saying WHY there is a fork of JHC.
I spoke with the author of the fork a bit in IRC around the time it happened, and my understanding is that:

1) John sternly objects to using cabal as the build system for JHC
2) JHC was seeing very little development activity by John
3) The author of the fork has philosophically different ideas about project management

I really hope JHC and LHC can continue to share code and are able to be collaborating projects instead of competing ones. We can see that LHC already has an increase in activity, and the new team that is forming is very interested in cleanup and refactoring. That is, I've seen some good discussions between LHC contributors about using libraries instead of project-specific functionality.

I hope John doesn't take the fork as any sort of aggressive or insulting action. He's made a compiler that is sufficiently interesting to have users that want to take over. I'm not involved in either project in any way, but it's quite interesting to watch, and I can see parallels to a different Haskell project.

Thanks, Jason

On Wed, Nov 26, 2008 at 07:20:12PM -0800, Jason Dagit wrote:
I spoke with the author of the fork a bit in IRC around the time it happened and my understanding is that: 1) John sternly objects to using cabal as the build system for JHC
This is a fairly silly reason to fork a project, especially jhc, for a number of reasons.

It is important to me that jhc be as widely accessible as possible. The number of machines './configure && make install' will work on outnumbers those that cabal install will work on hundreds or thousands to one. I am pleased to have anyone experiment with jhc in the first place; I don't want to make things harder for my users. This alone would be enough of a reason, all other things being equal, but other things aren't equal to boot.

The quality of support I can provide is diminished with cabal. Someone tries to compile jhc, they get a moderately tricky build error, they send it to me, I take a look, figure out the workaround, and release a new version that same day. One-day turnaround. Now suppose a bug is found in the way cabal does something. I track down the bug, hope it is something fixable, then further hope that when I send a fix it is accepted. Maybe it takes a week or two. Now, do I release a new version of jhc that requires a development version of cabal? Do I hold off and tell the user they need a personalized workaround? Do I demand that to use jhc you have to use the latest cabal snapshots? Do I then have to support them when the latest snapshots break something else of theirs? In any case, it is not a situation I want to be in.

Cabal just isn't elegant. Let's put it in perspective: cabal has 4 times as many lines of code as mk (a superset of make)*. That is four times as many lines of Haskell code as C. Given how much more dense and expressive Haskell code is than C, that is a huge amount. Yet cabal can't handle what even mildly complicated make scripts can do.

Cabal is not flexible. I decide I want to include a nice graph of code motion in jhc, so I add the following 2 lines to my makefile:
%.pdf: %.dot
	dot $< -Tpdf -o$@
and done! My build system now understands how to create pdf documents from graph description files. Now, I _could_ add support for this to cabal, I _could_ wait several months for the new version to propagate to users, and I _would_ fully expect to have to go through the whole process again the next time I want to do something slightly unorthodox.

Cabal is just a huge dependency I don't need. Every dependency I add to a project is a bigger hassle for my users and for me. A fairly complicated dependency like cabal would have to have fairly compelling benefits.

Now, I am saying these things so people don't think I am just being stubborn. I have valid, compelling, and logical reasons not to want to use cabal. I think it is the wrong tool for the job, and it is that simple. If you want me to switch to cabal, address its issues, and then _in addition_ add a killer feature on top to put it ahead of other systems, to make the work involved in switching worth it. I have a goal with jhc: to write a kick-ass haskell compiler. Not to fight a build system that is not suited to my task and that made some dubious design decisions. Not to promote an agenda.

And before you respond, think about this: what if the ghc developers were constantly bombarded with whining from the perl people that ghc doesn't use MakeMaker.pm, since ghc uses perl in the evil mangler? What would your impression of the perl community be? What if people kept trying to convince _you_ to rewrite your haskell project in Java _and_ provide support for it, because "they never had to use referential transparency, so it can't be that important to you"? Sometimes that is what it feels like, which is disappointing from this community. We all came to haskell because we thought it was the better choice at some point. The hegemony was pushing Java, C++, or worse, but we didn't bite (or at least were still hungry).
Just because something is popular doesn't mean it is good; just because it is written in haskell doesn't mean it is elegant. So don't begrudge me for holding out for something better. Perhaps franchise will be it, perhaps some future version of cabal, perhaps nothing will replace make/autoconf's sweet spot (though I would hope there is still some innovation in this area the OSS community can explore).
I hope John doesn't take the fork as any sort of aggressive or insulting action. He's made a compiler that is sufficiently interesting to have users that want to take over.
I am still actively working on jhc for the record. Actual code checkin tends to be very spurty, but don't think the project is dead. More in a design phase than anything else. There is no surer way to instigate another spurt than by submitting some patches or bringing up discussion of an interesting topic or paper on the jhc mailing list. John * if you include the source of all the libraries mk depends on, cabal still has twice as many lines. -- John Meacham - ⑆repetae.net⑆john⑈

john:
On Wed, Nov 26, 2008 at 07:20:12PM -0800, Jason Dagit wrote:
I spoke with the author of the fork a bit in IRC around the time it happened and my understanding is that: 1) John sternly objects to using cabal as the build system for JHC
This is a fairly silly reason to fork a project, especially jhc, for a number of reasons.
One of the reasons for the branching, though, is that the potential developers, who all have Haskell toolchains, couldn't do:

    $ cabal install jhc

They now can, but have to write 'lhc' instead of 'jhc'. We've probably just increased the jhc "alpha user" base 10-fold. Hooray!

Integrating into the ecology of the vast majority of Haskell code is a good way to get and keep developers. And since GHC -- which we need to build JHC anyway -- already ships with Cabal, no additional dependencies are required. Looks like a win to me.

-- Don

On Fri, Nov 28, 2008 at 07:41:42PM -0800, Don Stewart wrote:
john:
On Wed, Nov 26, 2008 at 07:20:12PM -0800, Jason Dagit wrote:
I spoke with the author of the fork a bit in IRC around the time it happened and my understanding is that: 1) John sternly objects to using cabal as the build system for JHC
This is a fairly silly reason to fork a project, especially jhc, for a number of reasons.
One of the reasons though, for the branching, is that the potential developers, who all have Haskell toolchains, couldn't do:
$ cabal install jhc
They now can, but have to write 'lhc' instead of 'jhc'.
We've probably just increased the jhc "alpha user" base 10 fold. Hooray!
Except that for all those systems that can use cabal, ./configure && make install would have already worked perfectly. So in actuality my alpha user base drops 50-fold.

Also, I am not so sure who these people are who are willing to type 10 characters to try out jhc, but not a dozen more. I mean, a few typos and there won't be enough keystrokes in their budget to compile hello world, let alone provide a bug report or send a patch :)

I think you are overestimating the penetration of cabal, or underestimating the size and diversity of the haskell user base. There are a whole lot of people out there who just want to use haskell and don't keep up with the IRC channels or the mailing lists. Grad students interested in some aspect of jhc's design who did apt-get install ghc and then expect jhc to work. Sysadmins who manage clusters of computers for work but have no particular attachment to haskell, whose kickstart scripts allow just dropping in an autoconfed tarball but would have to be retooled for something new.
Integrating into the ecology of the vast majority of Haskell code is a good way to get and keep developers. And since GHC -- which we need to build JHC anyway -- already ships with Cabal, no additional dependencies are required.
But wouldn't it be nicer if Haskell fit into the ecology of OSS in general? Even better, wouldn't it be nice if people's first impression of haskell was not annoyance at having to build a package in some proprietary way, but rather being impressed with some piece of software and looking into its implementation and seeing how it got to be so good? No one who is just trying to install a random program, not knowing anything about the implementation, gets excited at seeing that they have to learn some brand new way of getting it to work.

For a standalone program like jhc, integrating with the open source community as a whole, and having the flexibility of working with the right tool for the task at hand, are very desirable things.

When it comes down to it, an actual reason to use cabal is not there. If the reason is to fit into the ecology of Haskell code, then my question is: why is this ecology so distinct to begin with? What is wrong with haskell such that its world must be so disjoint from that of other languages? That seems to be the real WTF here that needs fixing.

John

-- John Meacham - ⑆repetae.net⑆john⑈

On Sat, Nov 29, 2008 at 2:41 AM, John Meacham
On Fri, Nov 28, 2008 at 07:41:42PM -0800, Don Stewart wrote:
john:
On Wed, Nov 26, 2008 at 07:20:12PM -0800, Jason Dagit wrote:
I spoke with the author of the fork a bit in IRC around the time it happened and my understanding is that: 1) John sternly objects to using cabal as the build system for JHC
This is a fairly silly reason to fork a project, especially jhc, for a number of reasons.
One of the reasons though, for the branching, is that the potential developers, who all have Haskell toolchains, couldn't do:
$ cabal install jhc
They now can, but have to write 'lhc' instead of 'jhc'.
We've probably just increased the jhc "alpha user" base 10 fold. Hooray!
Except that for all those systems that can use cabal, ./configure && make install would have already worked perfectly. So in actuality my alpha user base drops 50-fold.
Also, I am not so sure who these people are who are willing to type 10 characters to try out jhc, but not a dozen more. I mean, a few typos and there won't be enough keystrokes in their budget to compile hello world, let alone provide a bug report or send a patch :)
I think you are overestimating the penetration of cabal or underestimating the size and diversity of the haskell user base. There are a whole lot of people out there who just want to use haskell and don't keep up with the IRC channels or the mailing lists. Grad students interested in some aspect of jhc's design who did apt-get install ghc and then expect jhc to work. Sysadmins who manage clusters of computers for work but have no particular attachment to haskell, whose kickstart scripts allow just dropping in an autoconfed tarball but would have to be retooled for something new.
Integrating into the ecology of the vast majority of Haskell code is a good way to get and keep developers. And since GHC -- which we need to build JHC anyway -- already ships with Cabal, no additional dependencies are required.
But wouldn't it be nicer if Haskell fit into the ecology of OSS in general? Even better, wouldn't it be nice if people's first impression of haskell was not annoyance at having to build a package in some proprietary way, but rather being impressed with some piece of software and looking into its implementation and seeing how it got to be so good? No one who is just trying to install a random program, not knowing anything about the implementation, gets excited at seeing that they have to learn some brand new way of getting it to work.
For a standalone program like jhc, integrating with the open source community as a whole, and having the flexibility of working with the right tool for the task at hand are very desirable things.
When it comes down to it, an actual reason to use cabal is not there. If the reason is to fit into the ecology of Haskell code, then my question is: why is this ecology so distinct to begin with? What is wrong with haskell such that its world must be so disjoint from that of other languages? That seems to be the real WTF here that needs fixing.
When it comes down to it, I've just been down a slippery slope. The fact is, hackage works, and hackage is a good reason to support cabal. I'd also say this thread is no longer productive. A fork happened; the fork embraces cabal, but jhc does not need to embrace cabal. End of story, really. We all get what we want. Thanks, Jason

Am Samstag, 29. November 2008 11:41 schrieb John Meacham:
On Fri, Nov 28, 2008 at 07:41:42PM -0800, Don Stewart wrote:
john:
On Wed, Nov 26, 2008 at 07:20:12PM -0800, Jason Dagit wrote:
I spoke with the author of the fork a bit in IRC around the time it happened and my understanding is that: 1) John sternly objects to using cabal as the build system for JHC
This is a fairly silly reason to fork a project, especially jhc, for a number of reasons.
One of the reasons though, for the branching, is that the potential developers, who all have Haskell toolchains, couldn't do:
$ cabal install jhc
They now can, but have to write 'lhc' instead of 'jhc'.
We've probably just increased the jhc "alpha user" base 10 fold. Hooray!
Yes, that's very nice, to be able to just type

    $ cabal update
    $ cabal install whatever

and have cabal automatically take care of dependencies (unfortunately only Haskell dependencies, but hey, can't expect real magic), while the configure && make build system requires me to do it all myself (which becomes a real PITA when there are many dependencies not yet installed).
Also, I am not so sure who these people are who are willing to type 10 characters to try out jhc, but not a dozen more.
I doubt a few dozen keystrokes make a difference to those who are willing to try out jhc, but chasing dependencies could make a difference. Fortunately jhc hasn't many, so I was ready to try both methods.

1. cabal install lhc

Twenty minutes later I have an lhc executable installed (and the graphviz package). Great, can't be any simpler. Unfortunately:

    $ lhc -o lhcHeap heapMain
    lhc: user error (LibraryMap: Library base not found!)

Oops.

2. Grab jhc, configure && make

I need:
    ghc >= 6.8.2 -- yes, 6.8.3
    DrIFT -- yes, 2.2.3
    binary -- yes
    zlib -- yes

Great, nothing I don't already have, so download the source tarball, unpack and:

    ./configure --prefix=$HOME
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    ... more configure output ...
    checking for drift-ghc... no
    configure: error: DrIFT not found
    get it from http://repetae.net/computer/haskell/DrIFT/

Huh?

    dafis@linux:~/jhc/jhc-0.5.20080307> which DrIFT
    /home/dafis/.cabal/bin/DrIFT
    dafis@linux:~/jhc/jhc-0.5.20080307> DrIFT --version
    Version DrIFT-2.2.3

Okay, ./configure --help and searching through the configure script (which I completely don't know the syntax of) led me to try

    ./configure --prefix=$HOME DRIFTGHC=/home/dafis/.cabal/bin/DrIFT

which successfully completes the configure step. But shouldn't configure find executables in the path?

Now make. Lots of warnings, but it doesn't fail. make install, okay, ready to go. helloWorld works, good, but it dumps thousands of lines of output I don't want. How do I tell jhc to shut up, and why isn't that the default?

Now something a bit harder: make me a couple of primes. First problem:

    import System.Environment (getArgs)
    Error: Module not found: System.Environment

Okay, different organisation of the base package, mildly annoying; use import System (getArgs). Now ... myriads of lines of output ...
    jhc: user error (Grin.FromE - Unknown primitive: ("eqRef__",[EVar (6666::ELit (Data.IORef.Ref__ (ELit (Jhc@.Box.*::ESort *))::ESort #)),EVar (6670::ELit (Data.IORef.Ref__ (ELit (Jhc@.Box.*::ESort *))::ESort #))]))

What? And I get the same error for every nontrivial programme I tried to compile, but not for a couple of trivial programmes. So I spent a few hours littering my hard disk with a completely useless broken lhc and a basically useless broken jhc :(

Conclusion: the cabal package is much easier to install, but in this case the result of configure && make is marginally more useful. However, neither produced a working compiler, so no cake. Both systems suck when they fail.

Cheers, Daniel

Hello Daniel, Sunday, November 30, 2008, 1:41:03 AM, you wrote:
Yes, that's very nice to be able to just type $ cabal update $ cabal install whatever and cabal automatically takes care of dependencies (unfortunately only Haskell
I have to mention that there are no Haskell compilers that work this way. Maybe this says something important about Cabal? ;)

--
Best regards,
Bulat mailto:Bulat.Ziganshin@gmail.com

bulat.ziganshin:
Hello Daniel,
Sunday, November 30, 2008, 1:41:03 AM, you wrote:
Yes, that's very nice to be able to just type $ cabal update $ cabal install whatever and cabal automatically takes care of dependencies (unfortunately only Haskell
I have to mention that there are no Haskell compilers that work this way. Maybe this says something important about Cabal? ;)
Though there's a perl compiler,

    $ cabal install pugs

Enjoy!

-- Don (who thinks we need less talk, more action)

Hi Daniel,
1. cabal install lhc 20 minutes later I have an lhc executable installed (and the graphviz package), great, can't be any simpler.
Awesome! Glad it worked for you. A tidbit: unfortunately, due to a mistake in the first upload of lhc, you will need to provide an exact version if you want the latest and greatest.

The reason behind this is that when David uploaded lhc initially, he gave it a version of '20081121'. After a few days of hacking on the source code, I decided to upload a new version, but I changed the version number to 0.6.20081127 (it is 0.6 because jhc is currently at 0.5, and I see the improvements we made as worthy of a minor version bump). So, as far as cabal is concerned, 20081121 > 0.6.20081127, and it will by default install the older version.

If you would be so kind as to try the latest lhc instead by running:

    $ cabal install lhc-0.6.20081127

and reporting back, I would like to hear the results and whether it went well. :)
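To make the ordering concrete: Cabal compares version numbers component by component, left to right, so a date-style first component like 20081121 outranks anything that starts with 0. A minimal sketch using base's Data.Version, on whose ordering Cabal's versions are modeled (the two version numbers are taken from the message above; everything else is illustrative):

```haskell
import Data.Version (makeVersion)

main :: IO ()
main = do
  let old = makeVersion [20081121]       -- the original date-style upload
      new = makeVersion [0, 6, 20081127] -- the later release
  -- Versions compare lexicographically on their components, so the
  -- very first component decides: 20081121 > 0, hence old > new,
  -- and a naive "install the greatest version" picks the old one.
  print (old > new)  -- True
```

This is why the explicit `cabal install lhc-0.6.20081127` is needed: without the exact version, the resolver considers 20081121 the "newest" release.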
Unfortunately: $ lhc -o lhcHeap heapMain lhc: user error (LibraryMap: Library base not found!)
Oops.
There is a reason this is happening, and there isn't an easy way to get around it right now, it seems. The problem is that when you just install lhc, it has no libraries. To install the base library, you are going to need a copy of the lhc source code; this cannot be automated by hackage.

Why? Because we are afraid that uploading lhc's version of base, simply called 'base', to hackage will inadvertently stop every subsequently uploaded package from building, and cabal install could stop working too. Scary thought, huh? :)

The easiest way to fix this problem is by doing the following:

1. You probably want the darcs version of LHC anyway if you're willing to try it. Good improvements are being made pretty much every day.
2. After you get the darcs repository, just go into it and do 'cabal install'.
3. To install base, you are probably going to want the latest versions of both cabal and cabal-install from the darcs repository; they include support for LHC already (cabal 1.7.x).
4. After you've installed lhc and the latest cabal/cabal-install, you can just do:

    $ cd lhc/lib/base
    $ cabal install --lhc

And it should Just Work. All of these instructions can be found here: http://lhc.seize.it/#development

Don Stewart just brought up this point in #haskell, so I think I will modify the wiki page a bit (http://lhc.seize.it) and highlight these notes and why it's currently like this. I apologize for it being so cumbersome right now. We're trying to figure out a good solution.
Okay, ./configure --help and searching through the configure script (which I completely don't know the syntax of) lead me to try ./configure --prefix=$HOME DRIFTGHC=/home/dafis/.cabal/bin/DrIFT which successsfully completes the configure step, but shouldn't configure find executables in the path?
The reason is because the configure.ac script is designed to search for an executable named 'drift-ghc', not 'DrIFT'. I have no idea why.
import System (getArgs). Now ... myriads of lines of output ... jhc: user error (Grin.FromE - Unknown primitive: ("eqRef__",[EVar (6666::ELit (Data.IORef.Ref__ (ELit (Jhc@.Box.*::ESort *))::ESort #)),EVar (6670::ELit (Data.IORef.Ref__ (ELit (Jhc@.Box.*::ESort *))::ESort #))]))
What? And I get the same error for every nontrivial programme I tried to compile, but not for a couple of trivial programmes.
LHC and JHC are still extremely incomplete. They're nowhere near as supportive of extensions or libraries as GHC is. Don't count on them compiling anything non-trivial just yet. Austin

Am Sonntag, 30. November 2008 00:17 schrieb Austin Seipp:
If you would be so kind as to try the latest lhc instead by running:
$ cabal install lhc-0.6.20081127
And reporting back, I would like to hear the results and if it went well. :)
Got and installed a lot of dependencies and the latest and greatest lhc in 20 minutes (may have been 21, I didn't time it with a stop-watch) again :)
Unfortunately: $ lhc -o lhcHeap heapMain lhc: user error (LibraryMap: Library base not found!)
Oops.
There is a reason this is happening, and there isn't an easy way to get around it right now, it seems.
The problem is that when you just install lhc, it has no libraries. To install the base library, you are going to need a copy of the lhc source code - this cannot be automated by hackage.
Why? Because we are afraid that by uploading lhc's version of base - simply called 'base' - to hackage, will will inadvertantly stop every continually uploaded package from building, and cabal install could stop working too. Scary thought, huh? :)
Fair enough. Might be good to advertise that on Hackage, though.
The easiest way to fix this problem is by doing the following:
1. You probably want the darcs version of LHC anyway if you're willing to try it. Good improvements are being made pretty much every day.
2. After you get the darcs repository, just go into it and do 'cabal install'.
dafis@linux:~/lhc> darcs get --partial http://code.haskell.org/lhc
Invalid repository: http://code.haskell.org/lhc
darcs failed: failed to fetch: http://code.haskell.org/lhc/_darcs/inventory
ExitFailure 1

There's a hashed_inventory in lhc/_darcs, but no inventory. Is that a darcs2 vs. darcs1 incompatibility and I'm just screwed, or is the repo broken?
3. To install base, you are probably going to want the latest versions of both cabal and cabal-install from the darcs repository - they include support for LHC already (cabal 1.7.x.)
Okay, done. Though for some reason cabal --version says

cabal-install version 0.6.0
using version 1.6.0.1 of the Cabal library

even though I changed the constraint to Cabal >= 1.7 in the cabal-install.cabal file?
4. After you've installed lhc and the latest cabal/cabal-install, you can just do:

$ cd lhc/lib/base
$ cabal install --lhc
And it should Just Work.
Will try if I can darcs get the repo :-/
All of these instructions can be found here:
http://lhc.seize.it/#development
Don Stewart just brought up this point in #haskell, so I think I will modify the wiki page a bit (http://lhc.seize.it) and highlight these notes and why it's currently like this.
I apologize for it being so cumbersome right now. We're trying to figure out a good solution.
Okay, ./configure --help and a search through the configure script (whose syntax I don't know at all) led me to try ./configure --prefix=$HOME DRIFTGHC=/home/dafis/.cabal/bin/DrIFT, which successfully completes the configure step. But shouldn't configure find executables in the path?
The reason is that the configure.ac script is designed to search for an executable named 'drift-ghc', not 'DrIFT'. I have no idea why.
import System (getArgs). Now ... myriads of lines of output ... jhc: user error (Grin.FromE - Unknown primitive: ("eqRef__",[EVar (6666::ELit (Data.IORef.Ref__ (ELit (Jhc@.Box.*::ESort *))::ESort #)),EVar (6670::ELit (Data.IORef.Ref__ (ELit (Jhc@.Box.*::ESort *))::ESort #))]))
What? And I get the same error for every nontrivial programme I've tried to compile, but not for a couple of trivial programmes.
LHC and JHC are still extremely incomplete. They're nowhere near as supportive of extensions or libraries as GHC is. Don't count on them compiling anything non-trivial just yet.
No extensions and libraries, just a bit of Haskell98, the implicit heap from http://www.haskell.org/haskellwiki/Primes with a main that prints the n-th prime. I wouldn't expect many extensions yet, but most of H98.
Austin
Cheers, Daniel

Hi Daniel,

On Sun, Nov 30, 2008 at 08:31:15 -0500, haskell-cafe-request@haskell.org wrote:
dafis@linux:~/lhc> darcs get --partial http://code.haskell.org/lhc
Invalid repository: http://code.haskell.org/lhc
darcs failed: failed to fetch: http://code.haskell.org/lhc/_darcs/inventory ExitFailure 1
There's a hashed_inventory in lhc/_darcs, but no inventory. Is that a darcs2 vs. darcs1 incompatibility and I'm just screwed or is the repo broken?
There are two issues here. One is that the LHC repository is indeed a darcs 2 repository and you appear to have darcs 1 on your machine. I think upgrading to darcs 2 would be a very good idea, and I'm sure the darcs community would be happy to help you with this.

The second issue is that our forward-compatibility detector was likely buggy in darcs 1.0.9. The response darcs 1.0.9 should have given you was something like this:

darcs failed: Can't understand repository format: hashed
darcs failed: Can't understand repository format: darcs-2

Unfortunately, the relationship between hashed and darcs 2 repositories is slightly confusing. Basically you need a darcs 2 client if you see hashed_inventory, but it does not necessarily mean there is an incompatibility. I hope that the following snippet from the darcs 2.1.0 release announcement can help clear this up:

What should I do?
-----------------
Upgrade! Binary versions should be available shortly, either from your favourite package distributor or by third party contributors.

Other than installing the new darcs, no action is required on your part to perform this upgrade. Darcs 2, including this particular version, is 100% compatible with your pre-existing repositories.

If you have not done so already, you should consider using the hashed repository format in place of your current old-fashioned repositories. This format offers greater protection against accidental corruption and better support for case insensitive file systems. It also provides some very nice performance features, including lazy fetching of patches and a global cache (both optional).

If darcs 1 compatibility is not a concern, you could also upgrade your repositories all the way to the darcs 2 format. In addition to the robustness and performance features above, this gives you the improved merging semantics and conflicts handling that give darcs 2 its name.
More details about upgrading to darcs 2 here: http://wiki.darcs.net/index.html/DarcsTwo

Another clarification
---------------------
To be clear, we say that hashed repositories are backward-compatible. This means that darcs 2 clients can pull and push patches between them and old-fashioned repositories. On the other hand, interacting with the hashed repositories themselves requires a darcs 2 client.

Thanks!

-- Eric Kow http://www.nltg.brighton.ac.uk/home/Eric.Kow
PGP Key ID: 08AC04F9
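Eric's distinction above can be checked locally: a repository whose _darcs directory contains hashed_inventory needs a darcs 2 client, while an old-fashioned repository has a plain inventory file instead. A small sketch (the helper name and demo layout are invented for illustration, this is not darcs' own code):

```haskell
-- Sketch: which darcs client can read a repository, judged from the
-- inventory files under _darcs. Hypothetical helper, not darcs code.
import System.Directory (createDirectoryIfMissing, doesFileExist)

neededClient :: FilePath -> IO String
neededClient repo = do
    hashed <- doesFileExist (repo ++ "/_darcs/hashed_inventory")
    plain  <- doesFileExist (repo ++ "/_darcs/inventory")
    return $ case (hashed, plain) of
        (True, _) -> "hashed or darcs-2: needs a darcs 2 client"
        (_, True) -> "old-fashioned: darcs 1 or darcs 2 can read it"
        _         -> "not a darcs repository"

main :: IO ()
main = do
    -- build a throwaway example layout, purely for illustration
    createDirectoryIfMissing True "demo-repo/_darcs"
    writeFile "demo-repo/_darcs/hashed_inventory" ""
    neededClient "demo-repo" >>= putStrLn
```

This mirrors exactly what Daniel did by hand when he noticed a hashed_inventory but no inventory in lhc/_darcs.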

On Sunday, 30 November 2008, at 15:57, Eric Kow wrote:
Hi Daniel,
On Sun, Nov 30, 2008 at 08:31:15 -0500, haskell-cafe-request@haskell.org wrote:
dafis@linux:~/lhc> darcs get --partial http://code.haskell.org/lhc
Invalid repository: http://code.haskell.org/lhc
darcs failed: failed to fetch: http://code.haskell.org/lhc/_darcs/inventory ExitFailure 1
There's a hashed_inventory in lhc/_darcs, but no inventory. Is that a darcs2 vs. darcs1 incompatibility and I'm just screwed or is the repo broken?
There are two issues here. One is that the LHC repository is indeed a darcs 2 repository
Yes, and as Austin told me, it was updated to darcs-2 format yesterday or the day before (depending on time-zone), so one might just consider it bad timing, except
and you appear to have darcs 1 on your machine. I think upgrading to darcs 2 would be a very good idea, and I'm sure the darcs community would be happy to help you with this.
I am now a proud owner of darcs-2.1.2; the source distribution built without problems :) and make test said "All tests successful!" three times :D Sorry to deprive you of the pleasure of helping. I had problems with a darcs-2 prerelease earlier, which is why I was pessimistic about getting darcs-2 to work on my old system: the binary wouldn't work because it needed newer libs than I have (and updating those would wreak havoc on other components), and the one from the darcs repo needed a newer autoconf than I have. Yay for source releases.
The second issue is that our forward-compatibility detector was likely buggy in darcs 1.0.9. The response darcs 1.0.9 should have given you was something like this:
darcs failed: Can't understand repository format: hashed
darcs failed: Can't understand repository format: darcs-2
I've always found darcs' predictive powers somewhat lacking. Seriously, it might be a good idea to have an error message like

darcs failed: Can't understand repository format, may be new format or broken.

because that would give a starting point for resolving the problem (darcs-2 was conspicuous enough here that I thought of that possibility, but darcs users not subscribed to the relevant lists might not).
Unfortunately, the relationship between hashed and darcs 2 repositories is slightly confusing. Basically you need a darcs 2 client if you see hashed_inventory, but it does not necessarily mean there is an incompatibility. I hope that the following snippet from the darcs 2.1.0 release announcement can help clear this up:
What should I do?
-----------------
Upgrade! Binary versions should be available shortly, either from your favourite package distributor or by third party contributors.
Please don't drop source distributions, anyone - binaries tend to work on a much smaller set of systems, while source can build against more library versions.

Cheers,
Daniel

On Sun, Nov 30, 2008 at 17:45:55 +0100, Daniel Fischer wrote:
I am now a proud owner of darcs-2.1.2; the source distribution built without problems :) and make test said "All tests successful!" three times :D Sorry to deprive you of the pleasure of helping.
:-)
darcs failed: Can't understand repository format: hashed
darcs failed: Can't understand repository format: darcs-2
I've always found darcs' predictive powers somewhat lacking. Seriously, it might be a good idea to have an error message like
This was just a bug in darcs. The feature has been there for a while, but was not sufficiently well tested. Anyway, it's been fixed since May.
darcs failed: Can't understand repository format, may be new format or broken.
I think we would welcome some kind of patch to rephrase the messages above to hint that the reason darcs could not understand these format elements may be that they are new.
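As a sketch of what Daniel's suggested rewording might look like (a hypothetical function, not a patch against darcs' actual source):

```haskell
-- Sketch of the suggested rewording: when darcs meets a repository
-- format entry it does not recognise, hint that the format may simply
-- be newer than the client, rather than the repository being broken.
unknownFormatError :: String -> String
unknownFormatError fmt =
    "darcs failed: Can't understand repository format: " ++ fmt ++
    "\n  (this may be a format newer than this darcs, or the repository" ++
    "\n   may be broken; consider upgrading darcs)"

main :: IO ()
main = putStrLn (unknownFormatError "darcs-2")
```

The point is only the hint in the second line; the first line keeps the existing wording so scripts matching on it keep working.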
Upgrade! Binary versions should be available shortly, either from your favourite package distributor or by third party contributors.
Please, do not drop source distributions anyone, binaries tend to work on a much smaller set of systems, source can build against more library versions.
The darcs team will always release a source tarball with its releases (the next one is scheduled for mid-January).

Cheers,

-- Eric Kow http://www.nltg.brighton.ac.uk/home/Eric.Kow
PGP Key ID: 08AC04F9

On Sat, Nov 29, 2008 at 11:41:03PM +0100, Daniel Fischer wrote:
Great, nothing I don't already have, so download the source tarball, unpack and ./configure --prefix=$HOME

checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
... more configure output ...
checking for drift-ghc... no
configure: error: DrIFT not found
get it from http://repetae.net/computer/haskell/DrIFT/
Huh?

dafis@linux:~/jhc/jhc-0.5.20080307> which DrIFT
/home/dafis/.cabal/bin/DrIFT
dafis@linux:~/jhc/jhc-0.5.20080307> DrIFT --version
Version DrIFT-2.2.3
Oh golly. I never put DrIFT on cabal; apparently whoever tried to cabalize it didn't include the ghc driver script, and also appeared to just drop the documentation from the package altogether. It is things like that that make it very hard to get behind cabal: why was DrIFT crippled just so it could be put on cabal? If cabal wasn't powerful enough to compile DrIFT, and we already had a perfectly good way of compiling it, why the need to shoehorn it in and cause this problem? Sigh.

Incidentally, the jhc tarball no longer needs DrIFT to compile (you only need it if you compile from the darcs repo directly), so I'll get rid of that check.

John
-- John Meacham - ⑆repetae.net⑆john⑈

john:
On Sat, Nov 29, 2008 at 11:41:03PM +0100, Daniel Fischer wrote:
Great, nothing I don't already have, so download the source tarball, unpack and ./configure --prefix=$HOME

checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
... more configure output ...
checking for drift-ghc... no
configure: error: DrIFT not found
get it from http://repetae.net/computer/haskell/DrIFT/
Huh?

dafis@linux:~/jhc/jhc-0.5.20080307> which DrIFT
/home/dafis/.cabal/bin/DrIFT
dafis@linux:~/jhc/jhc-0.5.20080307> DrIFT --version
Version DrIFT-2.2.3
Oh golly. I never put DrIFT on cabal; apparently whoever tried to cabalize it didn't include the ghc driver script, and also appeared to just drop the documentation from the package altogether. It is things like that that make it very hard to get behind cabal: why was DrIFT crippled just so it could be put on cabal? If cabal wasn't powerful enough to compile DrIFT, and we already had a perfectly good way of compiling it, why the need to shoehorn it in and cause this problem? Sigh.
Sounds like a problem with the packaging of DrIFT for Hackage, not with Cabal per se. This can happen if the package author doesn't do the conversion from ad-hoc make systems to cabal -- metadata that was implicit in autoconf+make can be lost. Perhaps the DrIFT maintainer could package it correctly, so that it can be used with the ~1000 other libraries on Hackage. -- Don

On Sat, Nov 29, 2008 at 05:10:24PM -0800, Don Stewart wrote:
Oh golly. I never put DrIFT on cabal; apparently whoever tried to cabalize it didn't include the ghc driver script, and also appeared to just drop the documentation from the package altogether. It is things like that that make it very hard to get behind cabal: why was DrIFT crippled just so it could be put on cabal? If cabal wasn't powerful enough to compile DrIFT, and we already had a perfectly good way of compiling it, why the need to shoehorn it in and cause this problem? Sigh.
Sounds like a problem with the packaging of DrIFT for Hackage, not with Cabal per se. This can happen if the package author doesn't do the conversion from ad-hoc make systems to cabal -- metadata that was implicit in autoconf+make can be lost.
This is indicative of problems with some factions of the cabal community in general, though: that somehow the idea of getting a package into cabal was more important than the package actually working. Like it or not, the cabal project has accumulated some supporters with an agenda where promoting cabal is more important than the actual value of cabal to a project. This has made for a very hostile environment to be a developer in.
Perhaps the DrIFT maintainer could package it correctly, so that it can be used with the ~1000 other libraries on Hackage.
DrIFT is maintained by me and already can be used with the ~1000 other libraries on hackage. John -- John Meacham - ⑆repetae.net⑆john⑈

On 2008 Nov 29, at 20:02, John Meacham wrote:
Oh golly. I never put DrIFT on cabal; apparently whoever tried to cabalize it didn't include the ghc driver script, and also appeared to just drop the documentation from the package altogether. It is things like that that make it very hard to get behind cabal: why was DrIFT crippled just so it could be put on cabal? If cabal wasn't powerful enough to compile DrIFT, and we already had a perfectly good way of compiling it, why the need to shoehorn it in and cause this problem? Sigh.
Blaming Cabal for Audrey doing a quick-and-dirty translation (because she didn't have a whole lot of time to spend online and wasn't really familiar with Cabal or Hackage) is just digging for excuses.

-- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH

On Sat, Nov 29, 2008 at 09:00:48PM -0500, Brandon S. Allbery KF8NH wrote:
On 2008 Nov 29, at 20:02, John Meacham wrote:
Oh golly. I never put DrIFT on cabal; apparently whoever tried to cabalize it didn't include the ghc driver script, and also appeared to just drop the documentation from the package altogether. It is things like that that make it very hard to get behind cabal: why was DrIFT crippled just so it could be put on cabal? If cabal wasn't powerful enough to compile DrIFT, and we already had a perfectly good way of compiling it, why the need to shoehorn it in and cause this problem? Sigh.
Blaming Cabal for Audrey doing a quick-and-dirty translation (because she didn't have a whole lot of time to spend online and wasn't really familiar with Cabal or Hackage) is just digging for excuses.
Hmm? This wasn't done by Audrey; all the hackage/cabal stuff she has done for my projects has been with my blessing (and thanks).

And creating a crippled version of something you wrote and passing it off as the original, in a way that clearly breaks things for other people, definitely is something to get upset about. And no, that is not a technical problem with cabal itself, but it does make me worry about the motivations of some in the project when that sort of breakage seemed like a good idea to someone.

John
-- John Meacham - ⑆repetae.net⑆john⑈

john:
On Sat, Nov 29, 2008 at 09:00:48PM -0500, Brandon S. Allbery KF8NH wrote:
On 2008 Nov 29, at 20:02, John Meacham wrote:
Oh golly. I never put DrIFT on cabal; apparently whoever tried to cabalize it didn't include the ghc driver script, and also appeared to just drop the documentation from the package altogether. It is things like that that make it very hard to get behind cabal: why was DrIFT crippled just so it could be put on cabal? If cabal wasn't powerful enough to compile DrIFT, and we already had a perfectly good way of compiling it, why the need to shoehorn it in and cause this problem? Sigh.
Blaming Cabal for Audrey doing a quick-and-dirty translation (because she didn't have a whole lot of time to spend online and wasn't really familiar with Cabal or Hackage) is just digging for excuses.
Hmm? This wasn't done by Audrey; all the hackage/cabal stuff she has done for my projects has been with my blessing (and thanks).
And creating a crippled version of something you wrote and passing it off as the original, in a way that clearly breaks things for other people, definitely is something to get upset about. And no, that is not a technical problem with cabal itself, but it does make me worry about the motivations of some in the project when that sort of breakage seemed like a good idea to someone.
Looks like it was packaged by gwern when he was trawling the archives finding releases that weren't on Hackage -- and yes, several of the things he uploaded were incorrectly packaged in some way or another. If I grab drift from Hackage though,

$ cabal install drift
Resolving dependencies...
Configuring DrIFT-2.2.3...
Preprocessing executables for DrIFT-2.2.3...
Building DrIFT-2.2.3...
Linking dist/build/DrIFT/DrIFT ...
Installing executable(s) in /home/dons/.cabal/bin

And it runs well enough for the other projects I've used it on.

$ DrIFT -V
Version DrIFT-2.2.3

Is the cabal distribution just missing extra scripts? There's a fine line between "packaging" in the distro sense, that cabal does (where only metadata is added), and an actual fork. In this case, it looks like only metadata was added, so it's no different to any number of distro packages. Are you interested in ensuring the cabal file accurately describes the package as you wish it to be installed?

-- Don

On Sun, Nov 30, 2008 at 08:50:51AM -0800, John Meacham wrote:
And creating a crippled version of something you wrote and passing it off as the original, in a way that clearly breaks things for other people definitely is something to get upset about.
There was a discussion of this issue on the libraries list in June/July, resulting in an agreed policy being added to the hackage upload page, including:

    If a package is being maintained, any release not approved and supported by the maintainer should use a different package name.

I am also willing to remove any release with an unchanged name and made without the support of the maintainer. You have made clear that the DrIFT-2.2.3 upload is in that category, so I have now removed it. Looking through, the only other package I spotted that might be such is HsASA-0.1 -- please let me know about this one, and any others I've missed.

On Mon, Dec 01, 2008 at 01:02:40AM +0000, Ross Paterson wrote:
I am also willing to remove any release with an unchanged name and made without the support of the maintainer. You have made clear that the DrIFT-2.2.3 upload is in that category, so I have now removed it. Looking through, the only other package I spotted that might be such is HsASA-0.1 -- please let me know about this one, and any others I've missed.
No, that one is fine. I don't have any issues with my haskell _libraries_ being packaged up if done properly. Cabal has significantly more utility for libraries than for programs, so the scales tip in its direction more often there. Speaking of which, it would be nice if libraries and programs were separated into two different sections on hackage. A few projects that have both a library and programs would be listed in both categories.

John
-- John Meacham - ⑆repetae.net⑆john⑈

On Sat, Nov 29, 2008 at 8:02 PM, John Meacham
On Sat, Nov 29, 2008 at 11:41:03PM +0100, Daniel Fischer wrote:
Great, nothing I don't already have, so download the source tarball, unpack and ./configure --prefix=$HOME

checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
... more configure output ...
checking for drift-ghc... no
configure: error: DrIFT not found
get it from http://repetae.net/computer/haskell/DrIFT/
Huh?

dafis@linux:~/jhc/jhc-0.5.20080307> which DrIFT
/home/dafis/.cabal/bin/DrIFT
dafis@linux:~/jhc/jhc-0.5.20080307> DrIFT --version
Version DrIFT-2.2.3
Oh golly. I never put DrIFT on cabal; apparently whoever tried to cabalize it didn't include the ghc driver script, and also appeared to just drop the documentation from the package altogether. It is things like that that make it very hard to get behind cabal: why was DrIFT crippled just so it could be put on cabal? If cabal wasn't powerful enough to compile DrIFT, and we already had a perfectly good way of compiling it, why the need to shoehorn it in and cause this problem? Sigh. ...

John
Thought I'd mention that http://hackage.haskell.org/package/DrIFT-cabalized 2.2.3.1 includes a drift-ghc.hs (compiles to /home/gwern/bin/bin/DrIFT-cabalized-ghc) which is a clone of the drift-ghc.in shell script you allude to:

import Data.List (isInfixOf)
import System.Cmd (rawSystem)
import System.Environment (getArgs)
import System.Exit (ExitCode(ExitSuccess))

import Paths_DrIFT_cabalized (getBinDir)

main :: IO ExitCode
main = do
    args <- getArgs
    case args of
        (a:b:c:[]) -> conditional a b c
        _ -> error "This is a driver script allowing DrIFT to be used seamlessly with ghc.\n\
                   \In order to use it, pass '-pgmF drift-ghc -F' to ghc when compiling your programs."

conditional :: FilePath -> FilePath -> FilePath -> IO ExitCode
conditional orgnl inf outf = do
    prefix <- getBinDir
    infile <- readFile inf
    if "{-!" `isInfixOf` infile
        then do -- run the DrIFT binary on the input file
                putStrLn (prefix ++ "/DrIFT-cabalized " ++ inf ++ " -o " ++ outf)
                rawSystem (prefix ++ "/DrIFT-cabalized") [inf, "-o", outf]
        else do -- no DrIFT directives: pass the input through with a LINE pragma
                writeFile outf ("{-# LINE 1 \"" ++ orgnl ++ "\" #-}\n")
                readFile inf >>= appendFile outf
                return ExitSuccess

{- GHC docs say: "-pgmF cmd Use cmd as the pre-processor (with -F only).
   Use -pgmF cmd to select the program to use as the preprocessor. When
   invoked, the cmd pre-processor is given at least three arguments on
   its command-line:
   1. the first argument is the name of the original source file,
   2. the second is the name of the file holding the input
   3. third is the name of the file where cmd should write its output to." -}

John: I would appreciate you pointing out if I have made a mistake anywhere in that and not actually replicated the functionality of the shell script. (I think I have, but I didn't really understand what the first echo was supposed to do and just copied its functionality.)

-- gwern

On Fri, Nov 28, 2008 at 7:30 PM, John Meacham
On Wed, Nov 26, 2008 at 07:20:12PM -0800, Jason Dagit wrote:
I spoke with the author of the fork a bit in IRC around the time it happened and my understanding is that: 1) John sternly objects to using cabal as the build system for JHC
This is a fairly silly reason to fork a project, especially jhc, for a number of reasons.
It is important to me that jhc be as widely accessible as possible. The number of machines './configure && make install' will work on outnumbers those that cabal install will work on hundreds or thousands to one. I am pleased to have anyone experiment with jhc in the first place, and I don't want to make things harder for my users. This alone would be enough of a reason all other things being equal, but other things aren't equal, to boot.
The command './configure && make install' only works on Windows if the user bothers to install some form of unix environment emulation like msys or cygwin. I don't know if Windows platform support matters to jhc, but if it does, that's one reason to want to provide an alternative to the autotools build option.
The quality of support I can provide is diminished with cabal. Someone tries to compile jhc and gets a moderately tricky build error; they send it to me, I take a look, figure out the workaround, and release a new version that same day. One-day turnaround. Now suppose a bug is found in the way cabal does something. I track down the bug, hope it is something fixable, then further hope that when I send a fix it is accepted. Maybe it takes a week or two. Now, do I release a new version of jhc that requires a development version of cabal? Do I hold off and tell the user they need a personalized workaround? Do I demand that to use jhc you have to use the latest cabal snapshots? Do I then have to support them when the latest snapshots break something else of theirs? In any case, it is not a situation I want to be in.
Cabal just isn't elegant. Let's put it in perspective: cabal has 4 times as many lines of code as mk (a superset of make)*. That is four times as many lines of Haskell code as C. Given how much more dense and expressive Haskell code is than C, that is a huge amount. Yet cabal can't handle what even mildly complicated make scripts can do.
Cabal is not flexible. I decide I want to include a nice graph of code motion in jhc, so I add the following 2 lines to my makefile
%.pdf: %.dot
	dot $< -Tpdf -o$@
And done! My build system now understands how to create pdf documents from graph description files. Now, I _could_ add support for this to cabal, and I _could_ wait several months for the new version to propagate to users. And I _would_ fully expect to have to go through the whole process again the next time I want to do something slightly unorthodox.
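For comparison, a rough sketch of what wiring that same rule into Cabal might look like via user hooks in a custom Setup.hs (the file names are invented for illustration; this assumes the Cabal library and Graphviz's `dot` are available):

```haskell
-- Setup.hs sketch: regenerate a PDF from a dot graph after every build,
-- roughly the user-hooks equivalent of the two-line make rule above.
import Distribution.Simple
import System.Cmd (rawSystem)

main :: IO ()
main = defaultMainWithHooks simpleUserHooks
    { postBuild = \_args _flags _pkg _lbi -> do
        _ <- rawSystem "dot" ["doc/codemotion.dot", "-Tpdf", "-odoc/codemotion.pdf"]
        return ()
    }
```

Note this also requires setting build-type: Custom in the .cabal file, and it hard-wires one file pair rather than giving a general %.pdf-from-%.dot rule, which rather illustrates the flexibility gap being described.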
Cabal is just a huge dependency I don't need. Every dependency I add to a project is a bigger hassle for my users and for me. A fairly complicated dependency like cabal would have to have fairly compelling benefits.
Your arguments make it sound as though providing an option for building with cabal is out of the question. Since I'm not involved with JHC or LHC in any way, I don't know how you would answer this question: would you consider a cabal-based build in addition to the autotools one? Personally, I look at it this way. Both build systems have different advantages that the other cannot provide, but they are not mutually exclusive. Also, the effort to keep them both working for the respective groups of users is rather small in practice.
Now, I am saying these things so people don't think I am just being stubborn. I have valid, compelling, and logical reasons to not want to use cabal. I think it is the wrong tool for the job and it is that simple. If you want me to switch to cabal, address its issues, and then _in addition_ add a killer feature on top to put it ahead of other systems to make the work involved in switching worth it. I have a goal with jhc, to write a kick ass haskell compiler. Not to fight a build system that is not suited to my task and that made some dubious design decisions. Not to promote an agenda.
The reason to provide a .cabal file is exactly the one Don wrote about. This is possible both using make as the build system and in a way that is independent of the make-based build system.
And before you respond, think about this. What if the ghc developers were constantly bombarded with whining from the perl people that ghc doesn't use MakeMaker.pm since ghc uses perl in the evil demangler? What would your impression of the perl community be?
I don't recall if I've expressed this publicly before or not, but I'm not fond of language-specific reimplementations of make. I think it's silly that every language has 2-3 language-specific build systems and package formats. But it's too late for me to stop Cabal from existing. Hackage is too useful to ignore. Using it increases my productivity. Tools that use the Cabal format save me time and give me cool features for free; I can easily run haddock or generate module graphs, for example. So, in short, if the perl community had a compelling argument based on what GHC is missing out on, then I think it would be fine for them to bring that to the attention of GHC HQ.

Now, the next point. I think you're getting carried away here. This fork was created without you being aware of it. That makes me think the author of the fork didn't bombard you with whining. So, I think we need to keep some perspective on that. It's natural that you should have a fair bit of emotional attachment to JHC -- you'd be weird if you didn't -- but as I've said before, I don't think any of this is an attack on you or JHC. Rather, I think it's a fondness for JHC plus a desire to try different things.
What if people kept trying to convince _you_ to rewrite your haskell project in java _and_ provide support for it because "they never had to use referential transparency, so it can't be that important to you".
When this comes up on the Darcs mailing list we tend to explain why we like Haskell and then encourage them to try a reimplementation in their favorite language if they want to. I think that parallels the situation here. Someone thought he could do better by doing X and Y and instead of expecting you to support that went off and made a fork where he can support that.
Sometimes that is what it feels like, which is disappointing from this community. We all came to haskell because we thought it was the better choice at some point. The hegemony was pushing java, C++, or worse, but we didn't bite (or at least were still hungry). Just because something is popular, it doesn't mean it is good; just because it is written in haskell, it doesn't mean it is elegant. So don't begrudge me for holding out for something better. Perhaps franchise will be it, perhaps some future version of cabal, perhaps nothing will replace make/autoconf's sweet spot (though I would hope there is still some innovation in this area the OSS community can explore).
Well, I came to Haskell because I had university courses that introduced me to it; then I started using Darcs, realized I wanted to contribute to Darcs, and so learned more Haskell. In the process, my old favorite language, Common Lisp, was replaced by my new favorite, Haskell. I tell people that I now prefer Haskell because:
a) the static guarantees you can get really are useful
b) the language and paradigm fit the way I think
c) the community is full of extremely intelligent people who care about the quality of the code they write
d) the community is friendly and growing.

I think animosity towards Java and C++ is silly. Those languages have their merits, their strengths, and their place. I don't see how using Cabal for its strengths today and make for its strengths prevents you from using the next cool thing when it gets here.
I hope John doesn't take the fork as any sort of aggressive or insulting action. He's made a compiler that is sufficiently interesting to have users who want to take it over.
For the record, I am still actively working on jhc. Actual code check-ins tend to come in spurts, but don't think the project is dead; it's more in a design phase than anything else. There is no surer way to instigate another spurt than by submitting some patches or bringing up discussion of an interesting topic or paper on the jhc mailing list.
That's good to hear. Thanks for your time, and I wish both projects well with lots of collaboration! Jason

On Fri, Nov 28, 2008 at 08:51:45PM -0800, Jason Dagit wrote:
Personally, I look at it this way. Both build systems have different advantages that the other cannot provide but they are not mutually exclusive.
I don't see any advantage in Cabal, except that a .cabal file provides some metadata and dependency information that can help the build.
Also, the effort to keep them both working for the respective groups of users is rather small in practice.
At least in ghc, the mixture of make and Cabal was a huge failure. Ciao, Kili

kili:
On Fri, Nov 28, 2008 at 08:51:45PM -0800, Jason Dagit wrote:
Personally, I look at it this way. Both build systems have different advantages that the other cannot provide but they are not mutually exclusive.
I don't see any advantage in Cabal, except that a .cabal file provides some metadata and dependency information that can help the build.
And we have tools to automate the packaging of cabal-specified code. So for example, there are already native packages of LHC, but not of JHC. http://aur.archlinux.org/packages.php?ID=21749 Because of the automatic packaging for cabal-specified software. -- Don

On Sat, Nov 29, 2008 at 11:31:52AM -0800, Don Stewart wrote:
I don't see any advantage in Cabal, except that a .cabal file provides some metadata and dependency information that can help the build.
And we have tools to automate the packaging of cabal-specified code. So for example, there are already native packages of LHC, but not of JHC.
http://aur.archlinux.org/packages.php?ID=21749
Because of the automatic packaging for cabal-specified software.
Oh, maybe I'll write similar tools for OpenBSD ports some day (when I have enough time). Yet I consider this (useful) configuration and dependency information to be metadata. IMHO, Cabal is nice for specifying this metadata, it may be nice wrt Hackage, and some people may even like to just cabal-install something and go ahead, but these are already too many tasks it's trying to fulfill, leaving aside Cabal as a *build* tool.
I'm probably biased, because I had so much trouble with ghc and the Cabal/make maze, so I may be a little bit unfair (ghc's requirements are more complicated than those of an ordinary program or library). However, I really believe in the Unix philosophy (one tool for one task), and Cabal clearly doesn't follow it.
There's an example of a tool capable of dependency and (to a certain degree) configuration management in the Java world: Ivy (http://ant.apache.org/ivy/). Well, they added a lot of bloat since they moved to Apache, and of course one could question why everything has to be XML, but the basic idea was: deal with dependencies, add support for repositories containing dependencies in several versions, and nothing more. No build tool, no packaging or install tool. Yes, it's tightly coupled with apache-ant, but if you have an ivy file, nothing stops you from converting the information it contains into some includable makefile snippet.
I didn't have a very close look at the ghc-new-build-system yet, but I think the idea here is basically the same: use the .cabal files (and the Cabal library) to generate files that are then used by make(1) to do the real work.
I hope I'm making at least a little bit of sense. The problem I have with Cabal is that it tries to be the Swiss Army knife of dependency and configuration management, building, packaging and installation, but like those Swiss Army knives on steroids (with too many features), it no longer fits in your pocket. Ciao, Kili

When people say "Cabal" they often mean two different things (or both).
Cabal-the-library (package "Cabal") knows how to use standard Haskell
build tools but it is not a very flexible or easily extensible build
tool (it relies on ghc --make for one, don't know how it does it for
other compilers). Cabal was originally intended to be extensible via
Haskell, but this has its disadvantages (such as API breakage).
However, the .cabal file format seems to be very useful for packaging
purposes.
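For readers who haven't seen one, a minimal .cabal file is tiny; here is a sketch with a hypothetical package name and module (not taken from any project in this thread):

```
-- hello.cabal (sketch): pure metadata, usable by Cabal and by packaging tools
name:          hello
version:       0.1
license:       BSD3
build-type:    Simple
cabal-version: >= 1.2

executable hello
  main-is:       Hello.hs
  build-depends: base
```

The accompanying Setup.hs is usually just `import Distribution.Simple` followed by `main = defaultMain`, which hands the whole build over to Cabal-the-library.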
Cabal-the-install-tool (package "cabal-install") is actually a
different program that sits on top of Cabal-the-library, and it is in
fact what really provides the real advantages. Together with Hackage
this is what provides the killer feature of "cabal install foo",
however it relies on the building features and meta-data of Cabal.
FWIW, here's a small SLOC breakdown of Cabal-the-library (about 1 month ago):
7087 Distribution.Simple (the build system)
2127 Distribution (mostly parse utils and data types)
1304 Distribution.PackageDescription (similar)
284 Distribution.Compat (so that it remains haskell98-compatible)
136 Language (definition of language extensions)
The problematic part here is the first one, but it is needed if you
don't want to rely on tools like 'make' being installed (which the
packaging aspects of Cabal-the-library and Cabal-the-install-tool would
otherwise rely on).
(FYI, here're the numbers for GNU make:
SLOC Directory SLOC-by-Language (Sorted)
21197 top_dir ansic=21157,sh=40
4121 config sh=4121
1658 glob ansic=1658
1247 w32 ansic=1247
1075 tests perl=1049,sh=26
16 po sed=16
0 doc (none)
Totals grouped by language (dominant language first):
ansic: 24062 (82.08%)
sh: 4187 (14.28%)
perl: 1049 (3.58%)
sed: 16 (0.05%)
So that's over 20000 SLOC, but, of course, for a more powerful tool.
So I presume the 4x more code remark by John was about the Makefile
rules to implement something similar to the Simple build system part.)
2008/11/29 Matthias Kilian
On Sat, Nov 29, 2008 at 11:31:52AM -0800, Don Stewart wrote:
I don't see any advantage in Cabal, except that a .cabal file provides some metadata and dependency information that can help the build.
And we have tools to automate the packaging of cabal-specified code. So for example, there are already native packages of LHC, but not of JHC.
http://aur.archlinux.org/packages.php?ID=21749
Because of the automatic packaging for cabal-specified software.
Oh, maybe I'll write similar tools for OpenBSD ports some day (when I have enough time). Yet I consider this (useful) configuration and dependency information to be metadata. IMHO, Cabal is nice for specifying this metadata, it may be nice wrt Hackage, and some people may even like to just cabal-install something and go ahead, but these are already too many tasks it's trying to fulfill, leaving aside Cabal as a *build* tool.
I'm probably biased, because I had so much trouble with ghc and the Cabal/make maze, so I may be a little bit unfair (ghc's requirements are more complicated than those of an ordinary program or library). However, I really believe in the Unix philosophy (one tool for one task), and Cabal clearly doesn't follow it.
There's an example of a tool capable of dependency and (to a certain degree) configuration management in the Java world: Ivy (http://ant.apache.org/ivy/). Well, they added a lot of bloat since they moved to Apache, and of course one could question why everything has to be XML, but the basic idea was: deal with dependencies, add support for repositories containing dependencies in several versions, and nothing more. No build tool, no packaging or install tool. Yes, it's tightly coupled with apache-ant, but if you have an ivy file, nothing stops you from converting the information it contains into some includable makefile snippet.
I didn't have a very close look at the ghc-new-build-system yet, but I think the idea here is basically the same: use the .cabal files (and the Cabal library) to generate files that are then used by make(1) to do the real work.
I hope I'm making at least a little bit of sense. The problem I have with Cabal is that it tries to be the Swiss Army knife of dependency and configuration management, building, packaging and installation, but like those Swiss Army knives on steroids (with too many features), it no longer fits in your pocket.
Ciao, Kili _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
-- Push the envelope. Watch it bend.

On Sun, Nov 30, 2008 at 01:37:20AM +0000, Thomas Schilling wrote:
So that's over 20000 SLOC, but, of course, for a more powerful tool. So I presume the 4x more code remark by John was about the Makefile rules to implement something similar to the Simple build system part.)
No, I was referring to 'mk', which you can download a Unix port of here: http://cminusminus.org/code.html I chose it because there are a variety of make implementations out there, and GNU make attempts to be compatible with all of them and then some. 'mk' contains a superset of the functionality (with a slightly cleaned-up syntax) of what we all generally think of as portable 'make'. John -- John Meacham - ⑆repetae.net⑆john⑈

On Sat, 2008-11-29 at 17:49 -0800, John Meacham wrote:
On Sun, Nov 30, 2008 at 01:37:20AM +0000, Thomas Schilling wrote:
So that's over 20000 SLOC, but, of course, for a more powerful tool. So I presume the 4x more code remark by John was about the Makefile rules to implement something similar to the Simple build system part.)
No, I was referring to 'mk', which you can download a Unix port of here:
http://cminusminus.org/code.html
I chose it since there are a variety of make implementations out there, and GNU make attempts to be compatible with all of them and then some. 'mk' contains a superset of the functionality (with a slightly cleaned up syntax) of what we all generally think of as portable 'make'.
There's hardly any code in Cabal that corresponds to mk/make, since we call out to ghc --make. Most of the code in Cabal is for lots of other features. Of course if Cabal's "Simple" build system did contain more code for a dependency framework like mk or make then it'd be a lot more capable. However building a decent one is easier said than done. We don't just want to re-create make, we want something better. Duncan

Thomas Schilling wrote:
Cabal-the-install-tool (package "cabal-install") is actually a different program that sits on top of Cabal-the-library, and it is in fact what really provides the real advantages. Together with Hackage this is what provides the killer feature of "cabal install foo", however it relies on the building features and meta-data of Cabal.
As I understand it, that's also a separate download. (Whereas the cabal library comes with GHC.) One day, if I feel hard-core enough, I might try this tool. (Assuming it works on Windows...) It sounds potentially useful. (Although most actual packages typically have one, maybe two dependencies that aren't already installed, if that.)

andrewcoppin:
Thomas Schilling wrote:
Cabal-the-install-tool (package "cabal-install") is actually a different program that sits on top of Cabal-the-library, and it is in fact what really provides the real advantages. Together with Hackage this is what provides the killer feature of "cabal install foo", however it relies on the building features and meta-data of Cabal.
As I understand it, that's also a separate download. (Whereas the cabal library comes with GHC.)
One day, if I feel hard-core enough, I might try this tool. (Assuming it works on Windows...) It sounds potentially useful. (Although most actual packages typically have one, maybe two dependencies that aren't already installed, if that.)
*if* .. *might* .. *assuming* .. *potentially* .. *maybe* .. *if*.. You could have built it by now! Source: http://hackage.haskell.org/packages/archive/cabal-install/0.6.0/cabal-instal... Dependencies that aren't in core: http://hackage.haskell.org/packages/archive/HTTP/3001.1.5/HTTP-3001.1.5.tar.... http://hackage.haskell.org/packages/archive/zlib/0.5.0.0/zlib-0.5.0.0.tar.gz Note the last one assumes you have zlib, the C library, installed. This should be straightforward to obtain. Enjoy. -- Don

Am Sonntag, 30. November 2008 20:46 schrieb Don Stewart:
andrewcoppin:
Thomas Schilling wrote:
Cabal-the-install-tool (package "cabal-install") is actually a different program that sits on top of Cabal-the-library, and it is in fact what really provides the real advantages. Together with Hackage this is what provides the killer feature of "cabal install foo", however it relies on the building features and meta-data of Cabal.
As I understand it, that's also a separate download. (Whereas the cabal library comes with GHC.)
One day, if I feel hard-core enough, I might try this tool. (Assuming it works on Windows...) It sounds potentially useful. (Although most actual packages typically have one, maybe two dependencies that aren't already installed, if that.)
*if* .. *might* .. *assuming* .. *potentially* .. *maybe* .. *if*..
You could have built it by now!
Source:
http://hackage.haskell.org/packages/archive/cabal-install/0.6.0/cabal-insta ll-0.6.0.tar.gz
Dependencies that aren't in core:
http://hackage.haskell.org/packages/archive/HTTP/3001.1.5/HTTP-3001.1.5.tar .gz http://hackage.haskell.org/packages/archive/zlib/0.5.0.0/zlib-0.5.0.0.tar.g z
Note the last one assumes you have zlib, the C library, installed. This should be straightforward to obtain.
Not even necessary, it comes with its own for Windows:

    if !os(windows)
      -- Normally we use the standard system zlib:
      extra-libraries: z
    else
      -- However for the benefit of users of Windows (which does not have zlib
      -- by default) we bundle a complete copy of the C sources of zlib-1.2.3
      c-sources: cbits/adler32.c cbits/compress.c cbits/crc32.c
                 cbits/deflate.c cbits/gzio.c cbits/infback.c
                 cbits/inffast.c cbits/inflate.c cbits/inftrees.c
                 cbits/trees.c cbits/uncompr.c cbits/zutil.c
Enjoy.
-- Don

daniel.is.fischer:
Am Sonntag, 30. November 2008 20:46 schrieb Don Stewart:
andrewcoppin:
Thomas Schilling wrote:
Cabal-the-install-tool (package "cabal-install") is actually a different program that sits on top of Cabal-the-library, and it is in fact what really provides the real advantages. Together with Hackage this is what provides the killer feature of "cabal install foo", however it relies on the building features and meta-data of Cabal.
As I understand it, that's also a separate download. (Whereas the cabal library comes with GHC.)
One day, if I feel hard-core enough, I might try this tool. (Assuming it works on Windows...) It sounds potentially useful. (Although most actual packages typically have one, maybe two dependencies that aren't already installed, if that.)
*if* .. *might* .. *assuming* .. *potentially* .. *maybe* .. *if*..
You could have built it by now!
Source:
http://hackage.haskell.org/packages/archive/cabal-install/0.6.0/cabal-insta ll-0.6.0.tar.gz
Dependencies that aren't in core:
http://hackage.haskell.org/packages/archive/HTTP/3001.1.5/HTTP-3001.1.5.tar .gz http://hackage.haskell.org/packages/archive/zlib/0.5.0.0/zlib-0.5.0.0.tar.g z
Note the last one assumes you have zlib, the C library, installed. This should be straightforward to obtain.
Not even necessary, it comes with its own for Windows:

    if !os(windows)
      -- Normally we use the standard system zlib:
      extra-libraries: z
    else
      -- However for the benefit of users of Windows (which does not have zlib
      -- by default) we bundle a complete copy of the C sources of zlib-1.2.3
      c-sources: cbits/adler32.c cbits/compress.c cbits/crc32.c
                 cbits/deflate.c cbits/gzio.c cbits/infback.c
                 cbits/inffast.c cbits/inflate.c cbits/inftrees.c
                 cbits/trees.c cbits/uncompr.c cbits/zutil.c
Even easier. Now there's no excuse for the ifs and buts and maybes. -- Don

On Sun, 30 Nov 2008, Don Stewart wrote:
*if* .. *might* .. *assuming* .. *potentially* .. *maybe* .. *if*..
You could have built it by now!
Source: http://hackage.haskell.org/packages/archive/cabal-install/0.6.0/cabal-instal...
Dependencies that aren't in core: http://hackage.haskell.org/packages/archive/HTTP/3001.1.5/HTTP-3001.1.5.tar.... http://hackage.haskell.org/packages/archive/zlib/0.5.0.0/zlib-0.5.0.0.tar.gz
Note the last one assumes you have zlib, the C library, installed. This should be straightforward to obtain.
I have extended this description and put it at http://haskell.org/haskellwiki/Cabal-Install Maybe you'd like to add a pointer to this page in the Homepage field of cabal-install.cabal.

On Sun, 2008-11-30 at 10:57 +0000, Andrew Coppin wrote:
As I understand it, that's also a separate download. (Whereas the cabal library comes with GHC.)
One day, if I feel hard-core enough, I might try this tool. (Assuming it works on Windows...) It sounds potentially useful.
It will of course be bundled with the first release of the Haskell Platform. In the mean time you can get a pre-compiled binary here: http://haskell.org/~duncan/cabal/cabal.exe
(Although most actual packages typically have one, maybe two dependencies that aren't already installed, if that.)
My favourite example at the moment is the new hackage server which has 24 dependencies and installs nicely using cabal install. Duncan

duncan.coutts:
On Sun, 2008-11-30 at 10:57 +0000, Andrew Coppin wrote:
As I understand it, that's also a separate download. (Whereas the cabal library comes with GHC.)
One day, if I feel hard-core enough, I might try this tool. (Assuming it works on Windows...) It sounds potentially useful.
It will of course be bundled with the first release of the Haskell Platform. In the mean time you can get a pre-compiled binary here:
http://haskell.org/~duncan/cabal/cabal.exe
(Although most actual packages typically have one, maybe two dependencies that aren't already installed, if that.)
My favourite example at the moment is the new hackage server which has 24 dependencies and installs nicely using cabal install.
I'm a fan of gitit, and its 46 dependencies, that install via cabal-install. Pretty awesome. -- Don

On Fri, Nov 28, 2008 at 08:51:45PM -0800, Jason Dagit wrote:
On Fri, Nov 28, 2008 at 7:30 PM, John Meacham
wrote: It is important to me that jhc be as widely accessible as possible. The number of machines './configure && make install' will work on outnumbers those that cabal install will work on by hundreds or thousands to one. I am pleased to have anyone experiment with jhc in the first place; I don't want to make things harder for my users. This alone would be enough of a reason all other things being equal, but other things aren't equal to boot.
The command './configure && make install' only works on Windows if the user bothers to install some form of Unix environment emulation like MSYS or Cygwin. I don't know whether Windows platform support matters to jhc, but if it does, that's one reason to want to provide an alternative to the autotools build option.
This always seemed like a rather weak argument. First of all, it's not all that tricky to make autotools builds work on Windows. Also, Windows users by far prefer binary distributions anyway; they download the MSIs rather than the source code. People who are actively developing a project generally have a more advanced toolchain anyway. Not that an easier Windows build isn't useful, but that slightly easier Windows build is outweighed by the cost of much more complicated build-system dependencies, which is paid everywhere.
Your arguments make it sound as though providing an option for building with cabal is out of the question. Since I'm not involved with JHC or LHC in any way, I don't know how you would answer this question: would you consider a cabal-based build in addition to the autotools one?
Personally, I look at it this way. Both build systems have different advantages that the other cannot provide but they are not mutually exclusive. Also, the effort to keep them both working for the respective groups of users is rather small in practice.
This is sort of like splitting the baby; I don't think the effort is really that small. A build system is a fairly complicated piece of code, it is also one of the parts I want more than anything to 'just work', and having to worry about two different systems would not be productive. A dumbed-down build that is the intersection of both systems would be barely usable and a drain on development effort. I never was opposed to a cabal 'target' for jhc. I have 'make dist' and 'make dist-rpm', and hopefully 'make msi' soon; adding a 'make dist-hackage' alongside them is not a bad thing. However, it is if it complicates the standard build, comes to dominate development effort, or can't be done without duplication of functionality. Cabal is not entirely conducive to being used in this way at the moment, but this can be improved on. Some of the issues aren't too hard and perhaps are being worked on, like adding a 'hackage release' field, and separate 'hackage' and 'project' maintainer fields. Others are trickier, like the requirement to conform to Hackage's version numbering policy, which might differ from the native one. Workarounds are possible, of course. But again, this is work. Even if the code isn't that much, it does place a support burden on me and other jhc developers, so it isn't something I'd do on a whim and without a clean design that does not introduce any cabal dependencies on the standard build or require more than minimal ongoing support; in fact, the only time cabal should be invoked is specifically the case of installing via cabal-install.
And before you respond, think about this. What if the ghc developers were constantly bombarded with whining from the perl people that ghc doesn't use MakeMaker.pm since ghc uses perl in the evil demangler? What would your impression of the perl community be?
I don't recall if I've expressed this publicly before or not, but I'm not fond of the language specific reimplementations of make. I think it's silly that every language has 2-3 language specific build systems and package formats. But, it's too late for me to stop Cabal from existing.
I totally agree.
Hackage is too useful to ignore. Using it increases my productivity. Tools that use the Cabal format save me time and give me cool features for free. I can easily run haddock or module graphs for example. So, in short, if the perl community had a compelling argument based on what GHC is missing out on, then I think it would be fine for them to bring that to the attention of GHC HQ.
Now, the next point. I think you're getting carried away here. This fork was created without you being aware of it. That makes me think the author of the fork didn't bombard you with whining. So, I think we need to keep some perspective on that. It's natural that you should have a fair bit of emotional attachment to the JHC -- you'd be weird if you didn't -- but as I've said before, I don't think any of this is an attack on you or JHC. Rather I think it's a fondness for JHC plus a desire to try different things.
Yeah, I should say that this wasn't really directed at Lemmih and the other lhc authors. There actually was some discussion between me and him; a fork was mentioned, but I did not know it was followed through on until I saw this thread. To reiterate, Lemmih has made some great contributions to jhc, and I fully support diversity in projects, so I welcome the new effort. As long as the codebases are still compatible, I expect patches to flow in both directions. However, the issues I raised with cabal are real ones that concern me, not just as they relate to jhc, but to the future of the language as a whole. There have been a number of projects I have been involved with where things did get as bad as I implied above. If the only reason for the fork was cabal, then that is disappointing. But I don't think that is entirely the case. John -- John Meacham - ⑆repetae.net⑆john⑈

John Meacham wrote:
I never was opposed to a cabal 'target' for jhc. I have 'make dist' 'make dist-rpm' and hopefully 'make msi' soon, adding a 'make dist-hackage' alongside is not a bad thing, however, it is if it complicates the standard build or comes to dominate development effort or can't be done without duplication of functionality.
My understanding is that you can have a .cabal file which merely specifies the dependency information and metadata, but delegates all the actual building to your existing configure and make infrastructure. It could then be entirely ignored (by someone who chose to type ./configure && make) but it would still work for someone who wanted to use cabal (using the metadata to get any dependencies, and then thereafter using the make-based build). Is this not a good path for a project like JHC? Jules
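For what it's worth, Cabal's build-type field already provides a hook for this kind of delegation: with `build-type: Make` (and a Setup.hs using Distribution.Make), the configure, build, and install steps are handed to the project's own scripts. A sketch, with hypothetical version and fields:

```
-- jhc.cabal (sketch): metadata only; the actual work is delegated to
-- the project's existing ./configure and make infrastructure
name:          jhc
version:       0.5
build-type:    Make
cabal-version: >= 1.2
```

Whether these delegating build types are robust enough in practice for a project of JHC's shape is, of course, part of what is being debated here.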


On Fri, 2008-11-28 at 19:30 -0800, John Meacham wrote:
On Wed, Nov 26, 2008 at 07:20:12PM -0800, Jason Dagit wrote:
I spoke with the author of the fork a bit in IRC around the time it happened and my understanding is that: 1) John sternly objects to using cabal as the build system for JHC
This is a fairly silly reason to fork a project, especially jhc, for a number of reasons.
In general John is right in most of his criticisms of Cabal. As someone who works on Cabal I am well aware of the problems in its design and implementation. I happen to think that most of the problems can be fixed but it would be silly to suggest that the balance of advantages to disadvantages goes in its favour for every project. The advantages at the moment are greatest for small projects and are in a large part due to network effects. The major problems are in the configuration and build components. The configuration language is not quite expressive enough and the current configuration search implementation does not take advantage of the full expressiveness of the existing language. The build component obviously should be based on a dependency system, as make is, rather than IO () and ghc --make. There are lots of things we don't do well because of this. As I said, I think all these things are fixable. But it is a lot of work. We're currently limited by the amount of developer time we have. So I would like to encourage people to get involved. There are elegant solutions to the problems. It's actually quite fun working on the new bits and helping to drain the IO () swamp.
It is important to me that jhc be as widely accessible as possible. The number of machines './configure && make install' will work on outnumbers those that cabal install will work on by hundreds or thousands to one.
I've sometimes wondered why nobody has made a generic configure.ac and makefile that wraps the Cabal build procedure. It seems pretty straightforward and it might help lower barriers for some users, especially, as John mentions, potential users from outside the community. Duncan

duncan.coutts:
It is important to me that jhc be as widely accessible as possible. The number of machines './configure && make install' will work on outnumbers those that cabal install will work on by hundreds or thousands to one.
I've sometimes wondered why nobody has made a generic configure.ac and makefile that wraps the Cabal build procedure. It seems pretty straightforward and it might help lower barriers for some users, especially, as John mentions, potential users from outside the community.
Yes. Reuse. That's why we moved to Cabal in the first place - to avoid reimplementing Makefiles, .hi rules, .o rules, and ld linking arguments once per Haskell library, which wasn't scalable in the slightest. We could come back, and have the 'make && make install' fans wrap up cabal with a generic wrapper for people who like to type 'make'. -- Don
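A generic wrapper of that kind would only be a few lines of make; a hedged sketch (untested, assuming the standard Setup.hs command protocol and a runhaskell already on the PATH):

```make
# Sketch: drive the standard Cabal build protocol from make, so that
# './configure && make && make install' users never see Cabal directly.
PREFIX ?= /usr/local

all: build

configure:
	runhaskell Setup.hs configure --prefix=$(PREFIX)

build: configure
	runhaskell Setup.hs build

install: build
	runhaskell Setup.hs install

.PHONY: all configure build install
```

A real version would pair this with a small configure script that checks for the compiler before handing off, which is exactly the kind of reuse being suggested.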
participants (24)
- Adrian Neumann
- Andrew Coppin
- Austin Seipp
- Bernie Pope
- Brandon S. Allbery KF8NH
- Bulat Ziganshin
- Daniel Fischer
- David Menendez
- Don Stewart
- Donnie Jones
- Duncan Coutts
- Eric Kow
- Gwern Branwen
- Henning Thielemann
- Jake McArthur
- Jason Dagit
- John Meacham
- Josef Svenningsson
- Jules Bean
- mail@justinbogner.com
- Matthias Kilian
- Richard O'Keefe
- Ross Paterson
- Thomas Schilling