
Hi all, Anyone here familiar with the Clean programming language? http://www.cs.ru.nl/~clean/ It looks /very/ similar to Haskell, both in functionality and syntax. I would be grateful for any sort of comparison. I'm trying to decide which language I should try to learn. Cheers, Daniel.

Daniel Carrera wrote:
Hi all,
Anyone here familiar with the Clean programming language?
It looks /very/ similar to Haskell, both in functionality and syntax.
I would be grateful for any sort of comparison. I'm trying to decide which language I should try to learn.
I sent something to this newsgroup more than two years ago, suggesting that such a topic SHOULD be in the FAQs... More or less:

0. They *ARE* very similar, both being lazy, pure functional languages, and their possible application domains overlap strongly. I strongly suggest that everybody who wants to specialize in FP learn both. YES! It won't harm you, and it will give you a somewhat larger perspective.

1. There is a visible difference in general philosophy.
- Haskell started as an exploration language, undergoes frequent (even if small) modifications, its "maintenance" is distributed, there are at least 3 (OK, 4 if you wish) major implementations, and the documentation (even if driven by one super-mind) is the result of a community consensus.
- Clean - in its actual instance - was conceived as an industrial-strength programming platform, stable, and changing only in case of necessity. (I mean - when the authors feel really unhappy about the status quo and have nothing else to do, which is rather rare...)

There is cross-breeding between them, but on quite different levels. While the authors of Clean profit from time to time from Haskell formal constructions (notably the superficial appearance of the class system), some Haskell-oriented people have been fascinated by the powerful and coherent (even if difficult) Clean interfacing library. Haskell produces/inspires from time to time some offspring (and - with my full, sincere respect - some bastards): Generic Haskell, PolyP, Cayenne, etc., some really wonderful headache generators, strongly recommended for all free and ambitious minds. Clean is more application-oriented, and has a good reputation for being really fast.

We could have read here and elsewhere some critical voices: that it would be better to concentrate the human effort on one language, instead of producing two << faux jumeaux >> ("false twins"), if you know what I mean...
I disagree violently with this; I think that the co-existence of, and the competition between, the two is good, inspiring, and offers better support for those who are already thinking about new languages.

2. Thus, there are some design differences.

Haskell has had arbitrary-precision Integers since the beginning, and numeric computation was for too long considered of lesser importance. This resulted in a somewhat unsatisfactory status of the numerically-oriented parts of the standard library, but it is improving. Clean in these contexts is more brutal, and seems to optimize better some programs which in Haskell need some attention from the user (such as avoiding Integers where Ints suffice; Clean has only "standard" Ints). Also - for the current implementations - the manipulation of arrays is more efficient in Clean. (But I haven't done any benchmarking with the latest GHC version...) Clean has quite efficient head-strict, unboxed, but spine-lazy lists, which is *very good* for signal processing, especially together with an aggressive strictness analysis. Haskell is more difficult to optimize here.

The type inference is a bit different, and in general the type systems *are* different. The Haskell system is easier to learn. The relation between classes and types is a nice, baroque construction. The Clean class system is - apparently - more "plain", and the relation between classes and overloaded functions is more straightforward, but if you look at some details, it is substantially more complicated, and sometimes disturbing.

In Haskell, a function which takes two arguments and produces a result of type c has the type a -> b -> c. In Clean you may write it as a b -> c, which *will* disturb a Haskeller, since it might be understood as a one-argument function with a being some constructor. The Clean shortcuts produce the effect that the types of f x y = z x y and f x = \y -> z x y are different, while in Haskell they are the same.
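The currying point can be sketched in a few lines of Haskell (the names `addPair` and `inc` are mine, purely for illustration):

```haskell
-- In Haskell, these two definitions are interchangeable: both have
-- the curried type Int -> Int -> Int. In Clean, as noted above,
-- their types would be written (and inferred) differently.
addPair :: Int -> Int -> Int
addPair x y = x + y

addPair' :: Int -> Int -> Int
addPair' x = \y -> x + y

-- Partial application falls out of currying for free:
inc :: Int -> Int
inc = addPair 1
```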
Some missing or "redundant" parentheses in a type specification in Clean are far from innocuous; *never* try to write the above as (a b) -> c ...

The Clean type categories are quite rich. Uniqueness, strictness, type dependence [a=>b], etc. - all this must be studied for many hours in order to be able to understand some compiler messages and/or to code in the most efficient way.

3. Some people say "Haskell uses Monads"; Clean uses uniqueness types for IO and elsewhere. A uniqueness type for a variable means that this variable can be accessed in only one place within the program. So, if a new object is produced from the transformation of the original, and if this object is also "unique", the program can update the original in place, yielding the new instance. This cannot be done directly in Haskell. In fact, there are people who use Monads in Clean as well. The update-in-place, single-threaded data processing chains are simply implemented in Clean at a lower level, while the IO Monad in Haskell is primitive. So, Clean is more universal, permitting the user to implement more "imperative" things, while Haskell is "cleaner"...

Clean programs possessing an imperative flavour could be quite messy, but Clean provides a syntactic contraption, a kind of sequential "let" with specific scope rules, permitting one to REUSE names:

# (aFile,aFileSystem) = openItInside aFileSystem
# aFile = WriteToItOrDoSomethingElseWith aFile
# aFile = continueToProcess aFile
...

where you don't need to write aNewFile = process aFile, etc. The file is still unique, because of these sequential scoping rules. Haskell, on the other hand, has the "do" block, permitting one to write monadic chains in an imperative style:

do var <- doSomething ...
   encore <- doSomethingElseWith var
   printSomewhere encore ...
   return (whatYou want et encore)

4.
A general observation about the use of both languages by a beginner who wants to learn from examples: Haskell offers two decent interactive interpreters, Hugs and GHCi, permitting one to test small expressions directly, to ask about their types, etc., which is often useful for debugging, and for grasping the essentials through small exercises. Clean is a compiler which processes the whole program. It has a very decent interactive interface with make/project options, and plenty of information produced on the screen by the compiler, but - as in my case - for class usage it is a bit less appropriate. (There are other views as well; I am not trying to depreciate Clean.)

It must be admitted, though, that neither is really an *interactive* language like Scheme or Python. In a pure, strongly typed programming framework the compiler performs a lot of global analysis, which makes incremental programming difficult. (In Clean: impossible. In Hugs: awkward - you can input expressions, but not definitions. In GHCi: possible:

let foo = someThing
useThisFoo foo
...

but this is a monadic construction in disguise...)

5. So, a personal touch. When I worked on the definition of abstract vectors and tensors in Hilbert space, in order to implement *quantum structures*, I used Haskell. When I simulated musical instruments using some numeric dataflow circuits, I used Clean. I am satisfied, and I love both.

===================

Now, for the people who asked those questions: choose whatever you wish. If your philosophy is "I have no time to learn two languages, only one, so tell me which one is better", then I have some VERY STRONG statements:

* Whoever tells you "H is *better* than C" (or vice versa) is LYING.
* If you want to learn FP, you should have a good view of the principal paradigms, not of particular syntactic constructs. Then, some knowledge of two different languages is more than helpful.
* Learning languages is a big adventure and a pleasure.
* Here and elsewhere, both the H and C communities are helpful; people who know answer all questions without pretension, nor any suggestion that they are respectable gurus bothered by beginners (which happens too often on other "language-oriented" newsgroups I visit from time to time...).

Jerzy Karczmarczuk
Caen, France

Hi Jerzy, Thank you for your thorough response. I will archive it and come back to it as a reference. As I learn more about FP features (e.g. Monads) I'll be able to get more from your description. But just a quick note:
4. A general observation about the use of both languages by a beginner who wants to learn from examples: Haskell offers two decent interactive interpreters, Hugs and GHCi, permitting one to test small expressions directly, to ask about their types, etc., which is often useful for debugging, and for grasping the essentials through small exercises.
For me, right now, this makes a world of difference. I'm experimenting with small exercises on Hugs right now.
It must be admitted though that neither is really an *interactive*
Yeah, I noticed... ;-) But I recently learned about a neat workaround: I have a file called Test.hs and load it with ':l Test'. Then I start experimenting. I make a small change and type ':l Test' again. So it's almost interactive, and not excessively awkward. I think I'll learn a bit more using Haskell, and then I'll be in a better position to decide where to continue (probably with Haskell at first).
Now, for the people who asked those questions. Choose whatever you wish, if your philosophy is "I have no time to learn two languages, only one, so tell me which one is better", then I have some VERY STRONG statements:
* Whoever tells you "H is *better* than C" (or vice versa) is LYING.
:-) Well, I merely ask for general impressions and go from there.
* If you want to learn FP, you should have a good view on the principal paradigms, not on particular syntactic constructs. Then, some knowledge of two different languages is more than helpful.
Ok. Good to know.
* Learning of languages is a big adventure and pleasure.
That's why I'm here. :-) I have a strong preference for languages with clear, simple models. For example, I like C better than C++, and Ruby better than Python. Even if something might take fewer lines in C++ than in C, or be faster in Python than in Ruby, I like the feeling that I understand what I'm doing. And I see elegance in a simple model with few exceptions.
* Here and elsewhere, both the H and C communities are helpful; people who know answer all questions without pretension, nor any suggestion that they are respectable gurus bothered by beginners (which happens too often on other "language-oriented" newsgroups I visit from time to time...).
Indeed. I've been pleasantly surprised by how friendly this group has been. I've learned a lot already, and now I have a lot of resources to continue my exploration of Haskell and FP. Incidentally, the Ruby community is friendly too. :-) Cheers, Daniel.

At 7:40 AM -0400 2005/5/4, Daniel Carrera wrote:
[...]
I have a file called Test.hs and load it with ':l Test'. Then I start experimenting. I make a small change and type ':l Test' again. So it's almost interactive, and not excessively awkward.
You can get the same result by typing just ':r' (it means "reload").

Have fun with it,

--Ham
--
------------------------------------------------------------------
Hamilton Richards, PhD          Department of Computer Sciences
Senior Lecturer                 The University of Texas at Austin
512-471-9525                    1 University Station C0500
Taylor Hall 5.138               Austin, Texas 78712-0233
ham@cs.utexas.edu               hrichrds@swbell.net
http://www.cs.utexas.edu/users/ham/richards
------------------------------------------------------------------

Daniel,

If it is syntactic simplicity that you like, you might want to learn Scheme as an introduction to FP. I'm no expert on either Scheme or Haskell, but we can all agree it is an elegant language. I'm currently teaching myself the two in parallel, and I find that Scheme is sort of the C of FP, in the sense that it doesn't try to be too fancy in what it gives you. No fancy type system there, but you can build just about anything with it. There's also some good teaching material for learning it: the book Structure and Interpretation of Computer Programs, for one, and the lectures at http://swiss.csail.mit.edu/classes/6.001/abelson-sussman-lectures/ by the authors of the book, who incidentally were some of the creators of the language.

Scheme is strict, so it lacks some of the flexibility (and drawbacks) that come from laziness, but in the book they teach you how to build a lazy version of Scheme, which is instructive in understanding what's really going on in lazy evaluation.

Anyway, I'll stop now.

Cheers,
Bryce

On Wed, 4 May 2005, Daniel Carrera wrote:
I have a strong preference for languages with clear, simple models. For example, I like C better than C++, and Ruby better than Python. Even if something might take fewer lines in C++ than in C, or be faster in Python than in Ruby, I like the feeling that I understand what I'm doing. And I see elegance in a simple model with few exceptions.

Bryce Bockman writes:
If it is syntactic simplicity that you like, you might want to learn Scheme as an introduction to FP. I'm no expert on either Scheme or Haskell, but we can all agree it is an elegant language. I'm currently teaching myself the two in parallel, and I find that Scheme is sort of the C of FP, in the sense that it doesn't try to be too fancy in what it gives you. No fancy type system there, but you can build just about anything with it.
I would rather not compare Scheme to "C". C is a fixed-syntax language; the "lack of fanciness" is *rigidity*. Scheme is infinitely extensible - don't say that its *syntax* is simple just because you have its "Cambridge-Polish" notation, parenthesized/prefixed. Just look at the syntax of DO, of classes, units, etc. in DrScheme; just try to imagine the power of a *general* macro-expander, very far from cpp...
Scheme is strict, so it lacks some of the flexibility (and drawbacks) that come from Laziness, but in the book they teach you how to build a Lazy version of Scheme, which is instructive in understanding what's really going on in Lazy evaluation.
Don't confuse categories, please. SICP doesn't say how to make a lazy variant of Scheme. Applicative protocol is not normal protocol; the reduction is as it is. On the other hand, it is relatively easy to make lazy constructs - streams based on explicit, user-controllable thunks - since you can of course construct functional objects dynamically. This does not necessarily tell you what the *real* implementation of laziness in Haskell is, and even less in Clean; "manual thunks" are possibly different from a specific graph-reduction strategy implemented by a lazy-language compiler. You will learn something anyway, but perhaps something different.

Jerzy Karczmarczuk
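The "explicit, user-controllable thunks" idea can be sketched even in Haskell by making the thunking visible: represent the tail of a stream as a function of (). (This transcription is mine; in Scheme one would use delay/force or bare lambdas.)

```haskell
-- A stream whose laziness is "manual": the tail is an explicit
-- thunk, a function that must be applied to () to be forced.
-- This is how lazy streams are typically built in a strict language.
data Stream a = Cons a (() -> Stream a)

-- The naturals from n, produced one element per forced thunk.
natsFrom :: Int -> Stream Int
natsFrom n = Cons n (\() -> natsFrom (n + 1))

-- Force the first n elements into an ordinary list.
takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0 = []
takeS n (Cons x tl) = x : takeS (n - 1) (tl ())
```

As the post says, this shows the interface of laziness, not what a compiler like GHC or the Clean system actually does; there is, for instance, no sharing or in-place updating of forced thunks here.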

On Wednesday 04 May 2005 22:22, karczma@info.unicaen.fr wrote:
Bryce Bockman writes:
Scheme is strict, so it lacks some of the flexibility (and drawbacks) that come from Laziness, but in the book they teach you how to build a Lazy version of Scheme, which is instructive in understanding what's really going on in Lazy evaluation.
Don't confuse categories please. SICP doesn't say how to make a lazy variant of Scheme. Applicative protocol is not normal protocol, the reduction is, as it is.
We may have a different copy of SICP, but in mine (2nd edition) there is Chapter 4.2 "Variations on a Scheme -- Lazy Evaluation" and in particular 4.2.2 "An Interpreter with Lazy Evaluation". Ben

From: Benjamin Franksen
Date: Wed, 4 May 2005 22:47:21 +0200 On Wednesday 04 May 2005 22:22, karczma@info.unicaen.fr wrote:
Bryce Bockman writes:
Scheme is strict, so it lacks some of the flexibility (and drawbacks) that come from Laziness, but in the book they teach you how to build a Lazy version of Scheme, which is instructive in understanding what's really going on in Lazy evaluation.
Don't confuse categories please. SICP doesn't say how to make a lazy variant of Scheme. Applicative protocol is not normal protocol, the reduction is, as it is.
We may have a different copy of SICP, but in mine (2nd edition) there is Chapter 4.2 "Variations on a Scheme -- Lazy Evaluation" and in particular 4.2.2 "An Interpreter with Lazy Evaluation".
Ben
To be completely accurate: the evaluation order in Scheme is strict, not lazy, forever and ever, amen. That doesn't change. What SICP shows you how to do in chapter 4 (brilliantly, I think) is how to write a "metacircular evaluator", which is a Scheme interpreter written in Scheme itself. Of course, because you have a Scheme interpreter running on top of another Scheme interpreter, the (outer) interpreter is going to be pretty slow, but the point of the chapter is not to build a useful interpreter but to really understand how interpreters work. Once you understand that, they show that it's relatively easy to build a different kind of Scheme interpreter, one that uses lazy evaluation instead of strict evaluation. That's not "real Scheme" by any means, but it can be used to do real computations. Check out http://mitpress.mit.edu/sicp for the whole story.

We now return you to your regularly-scheduled language...

Mike
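A crude sketch of the exercise Mike describes (my own toy example, far simpler than SICP's metacircular evaluator, and written in Haskell rather than Scheme): applicative-order and normal-order evaluators for a tiny lambda calculus, where `Bottom` stands in for a diverging computation.

```haskell
-- A tiny closed lambda calculus; Bottom models divergence, and
-- evaluation returns Nothing when it would "diverge".
data Expr
  = Lit Int
  | Var String
  | Lam String Expr
  | App Expr Expr
  | Bottom
  deriving (Eq, Show)

-- Naive substitution (no capture avoidance: fine for the closed,
-- variable-disjoint examples used here).
subst :: String -> Expr -> Expr -> Expr
subst _ _ (Lit n)   = Lit n
subst x v (Var y)   = if y == x then v else Var y
subst x v (Lam y b) = if y == x then Lam y b else Lam y (subst x v b)
subst x v (App f a) = App (subst x v f) (subst x v a)
subst _ _ Bottom    = Bottom

-- Applicative order: the argument is evaluated BEFORE the call.
evalStrict :: Expr -> Maybe Expr
evalStrict (Lit n)     = Just (Lit n)
evalStrict (Var _)     = Nothing            -- free variable: closed terms only
evalStrict l@(Lam _ _) = Just l
evalStrict (App f a)   = do
  Lam x body <- evalStrict f
  a'         <- evalStrict a                -- argument forced here
  evalStrict (subst x a' body)
evalStrict Bottom      = Nothing

-- Normal order: the argument is substituted UNevaluated.
evalLazy :: Expr -> Maybe Expr
evalLazy (Lit n)     = Just (Lit n)
evalLazy (Var _)     = Nothing
evalLazy l@(Lam _ _) = Just l
evalLazy (App f a)   = do
  Lam x body <- evalLazy f
  evalLazy (subst x a body)                 -- argument NOT forced
evalLazy Bottom      = Nothing
```

With konst = Lam "x" (Lam "y" (Var "x")), the term App (App konst (Lit 42)) Bottom evaluates to Just (Lit 42) under evalLazy but to Nothing under evalStrict - the same difference the lazy interpreter of SICP 4.2 exhibits.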

On Wed, 4 May 2005, Benjamin Franksen wrote:
We may have a different copy of SICP, but in mine (2nd edition) there is Chapter 4.2 "Variations on a Scheme -- Lazy Evaluation" and in particular 4.2.2 "An Interpreter with Lazy Evaluation".
Here's the direct link: http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-27.html#%_sec_4.2.2 jacob

Benjamin Franksen writes:
karczma@info.unicaen.fr wrote:
Bryce Bockman writes:
Don't confuse categories please. SICP doesn't say how to make a lazy variant of Scheme. Applicative protocol is not normal protocol, the reduction is, as it is.
We may have a different copy of SICP, but in mine (2nd edition) there is Chapter 4.2 "Variations on a Scheme -- Lazy Evaluation" and in particular 4.2.2 "An Interpreter with Lazy Evaluation".
Absolutely right, and BTW, I had http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-27.html#%_sec_4.2 on the screen when I wrote what I wrote. Michael Vanier explained my aim well (better than I did myself; an optional sad smiley here...). I wanted just to say that a lazy interpreter etc. *is not Scheme*. Well, A&S say: "In this section we will implement a normal-order language that is the same as Scheme except that compound procedures are non-strict in each argument. Primitive procedures will still be strict." We read, and we see that the lazy layer is a superficial one, with 'forcing' implemented at the surface, so for me it was enough to remark that I consider it to be a different language. Sorry for the hair-splitting manners; I didn't want to annoy anybody.

On the other hand, it would be an interesting pedagogical initiative to take a language such as Scheme and, instead of making a "Scheme variant", "metacircular" etc. in it, to try to implement a genuine lazy graph-reduction strategy, as in Clean. Or to implement a kind of STG Haskell machine.

Regards.
Jerzy Karczmarczuk
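The A&S distinction quoted above - compound procedures non-strict, primitives strict - is Haskell's default behaviour; a tiny illustration (my example, not from the book):

```haskell
-- A user-defined ("compound") function is non-strict in Haskell:
-- an argument that is never used is never evaluated.
ignoreSecond :: Int -> Int -> Int
ignoreSecond x _ = x

-- ignoreSecond 42 (error "boom") yields 42; the error is never forced.
-- A primitive such as (+) is strict in both operands, so
-- 42 + error "boom" raises the error as soon as the sum is demanded.
```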

On Wednesday 04 May 2005 23:24, karczma@info.unicaen.fr wrote:
Benjamin Franksen writes:
karczma@info.unicaen.fr wrote:
Bryce Bockman writes:
Don't confuse categories please. SICP doesn't say how to make a lazy variant of Scheme. Applicative protocol is not normal protocol, the reduction is, as it is.
We may have a different copy of SICP, but in mine (2nd edition) there is Chapter 4.2 "Variations on a Scheme -- Lazy Evaluation" and in particular 4.2.2 "An Interpreter with Lazy Evaluation".
Absolutely right, and BTW., I had http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-27.html#%_sec_4.2 on the screen when I wrote what I wrote. Michael Vanier explained well my aim (better than myself, an optional sad smiley here...).
I wanted just to say that a lazy interpreter etc., *is not Scheme*. Well, AS say: "In this section we will implement a normal-order language that is the same as Scheme except that compound procedures are non-strict in each argument. Primitive procedures will still be strict." We read, and we see that the lazy layer is a superficial one, with 'forcing' implemented at the surface, so for me it was enough to remark that I consider it to be a different language.
Ok, I think I see now what you mean. Ben

I was trying to draw an analogy between imperative and functional language development over time. In both cases we seem to have a progression towards more complicated type systems, etc. That was really my only point. To say "C is to imperative languages as Scheme is to functional languages" does not say that C is as expressive as any functional language. Of course, I should have known that such a comparison would be disturbing to those on this list.

On Wed, 4 May 2005 karczma@info.unicaen.fr wrote:
Bryce Bockman writes: I would rather not compare Scheme to "C". C is a fixed-syntax language, the "lack of fanciness" is *rigidity*. Scheme is infinitely extensible, don't say that its *syntax* is simple just because you have its "Cambridge-Polish" notation, parenthesed/prefixed. Just look at the syntax of DO, of classes, units, etc. in DrScheme, just try to imagine the power of a *general* macro-expander, very far from cpp...
Scheme is strict, so it lacks some of the flexibility (and drawbacks) that come from Laziness, but in the book they teach you how to build a Lazy version of Scheme, which is instructive in understanding what's really going on in Lazy evaluation.
I'm confused by your sentence:
Applicative protocol is not normal protocol, the reduction is, as it is.
Are you saying that lazy evaluation is not the same as normal-order evaluation? My point was that in SICP, in addition to the applicative-order interpreter, they also show how one could go about building a normal-order version. Are normal-order evaluation and laziness totally different? Again, I'm just learning here.
On the other hand, it is relatively easy to make lazy constructs, streams based on explicit, user-controllable thunks, since you can of course construct dynamically functional objects. This does not necessarily tell you what is the *real* implementation of laziness in Haskell, and even less in Clean; "manual thunks" are possibly different from a specific graph reduction strategy implemented by a lazy language compiler. You will learn something anyway, but perhaps something different.
Okay. That much is clear. My next question would be: is there an SICP-level text that could teach one how to build a lazy compiler/interpreter in the way that Haskell does it?

Thanks,
Bryce

On Wed, May 04, 2005 at 05:11:37AM -0400, Daniel Carrera wrote:
Hi all,
Anyone here familiar with the Clean programming language?
It looks /very/ similar to Haskell, both in functionality and syntax.
I would be grateful for any sort of comparison. I'm trying to decide which language I should try to learn.
Take a look at the dissertation of Matthew Naylor, the author of hacle, a Haskell-to-Clean translator. It contains a detailed comparison of similarities and differences. In the past there were many discussions about differences in convenience and performance; you should be able to find them by searching the web and newsgroup articles.

Personally, I recommend starting with Haskell (but of course I am biased ;) while also taking a look at Clean. It has some nice features, like uniqueness typing, built-in support for dynamics (AFAIK only on Windows), an IDE, a proof assistant, etc.

Best regards
Tomasz
participants (9)
- Benjamin Franksen
- Bryce Bockman
- Daniel Carrera
- Hamilton Richards
- Jacob Nelson
- Jerzy Karczmarczuk
- karczma@info.unicaen.fr
- Michael Vanier
- Tomasz Zielonka