let and fixed point operator

Hi,

I find the feature that the construct "let x = f x in expr" assigns the fixed point of f to x annoying. The reason is that I cannot simply chain modifications of a variable like e.g. this:

  f x =
    let x = x * scale in
    let x = x + transform in
    g x

If one is lucky it results in a compile error; in worse cases it results in a stack overflow at runtime. The annoying part is inventing new variable names again and again for essentially the same thing, just to avoid the search/evaluation of the fixed point. I suppose Haskell was designed so that it makes sense. The only usage I can see is like this:

  let fact = \x -> if x == 0 then 1 else x * fact (x-1) in ...

but that is not any shorter than:

  let fact x = if x == 0 then 1 else x * fact (x-1) in ...

So the question is: what am I missing? Any nice use cases where the fixed point search is so good that it is worth the trouble of inventing new variable names for essentially the same stuff?

Peter.

On 8/30/07, Peter Hercek
f x =
  let x = x * scale in
  let x = x + transform in
  g x
Why are you trying to call three different things by the same name 'x' in one tiny block of code? That's very confusing and makes it hard to reason equationally about the code. -- Dan

  1  f x =
  2    let x = x * scale in
  3    g x

Hmmm ... just assume that the scope of the x on line 3 (which hides the x from the higher-level scope) is extended from line 3 to the beginning part of line 2 (from the line start to the equals sign). OCaml does it. "Let before" in Clean does it too. Does not sound bad to me either. So this sounds to me like a weak argument compared to the disadvantages. There should be something else (that I'm missing) there too...

Thanks, Peter.

Dan Piponi wrote:
On 8/30/07, Peter Hercek
wrote:

  f x =
    let x = x * scale in
    let x = x + transform in
    g x
Why are you trying to call three different things by the same name 'x' in one tiny block of code? That's very confusing and makes it hard to reason equationally about the code. -- Dan

On 8/30/07, Peter Hercek
Hi,
I find the feature that the construct "let x = f x in expr" assigns the fixed point of f to x annoying. The reason is that I cannot simply chain modifications of a variable like e.g. this:

  f x =
    let x = x * scale in
    let x = x + transform in
    g x

If one is lucky it results in a compile error; in worse cases it results in a stack overflow at runtime. The annoying part is inventing new variable names again and again for essentially the same thing, just to avoid the search/evaluation of the fixed point.
I suppose Haskell was designed so that it makes sense. The only usage I can see is like this:
let fact = \x -> if x == 0 then 1 else x * fact (x-1) in
... but that is not any shorter than:
let fact x = if x == 0 then 1 else x * fact (x-1) in
So the question is: what am I missing? Any nice use cases where the fixed point search is so good that it is worth the trouble of inventing new variable names for essentially the same stuff?
Peter.
This is not really about fixed points; it is about the very essence of functional programming. You cannot "modify" variables in the way you are suggesting; a variable such as x must *always refer to the same thing* within a given scope. This is not a liability, but rather a very nice thing: it makes it much easier to reason about programs if a given name always refers to the same thing. In an imperative language, where you really can modify the contents of variables, you do not have this guarantee. The same variable could refer to different values at different points in the program, which can lead to much confusion.

Now, I do understand your annoyance; it certainly is annoying to type something like

  f x = let y = x * scale in
        let z = y + transform in
        g z

where you have to come up with a bunch of different names for the intermediate values. But it's actually possible to do this in a much nicer way which is idiomatic in a functional language such as Haskell. Note that what you are really doing here is sending x through a "pipeline" of functions which transform it into another value. The way to combine functions into a pipeline is function composition:

  f = g . (+ transform) . (* scale)

This is exactly the same thing, but with no annoying intermediate names in sight! This simply says that f is the function you get when you first multiply by scale, then add transform, then finally apply the function g. If you don't like the "point-free" style, you could also write something like

  f x = g $ (+ transform) $ (* scale) $ x

(The $ simply lets you avoid writing lots of parentheses.)

Hope this helps,
-Brent
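To make this concrete, here is a self-contained version you can load into GHCi (scale, transform, and g are made-up stand-ins here, since your message never defines them):

  scale, transform :: Double
  scale     = 2.0   -- arbitrary value, just so the example compiles
  transform = 1.0   -- arbitrary value

  g :: Double -> Double
  g = sqrt          -- stand-in for whatever g really is

  fNamed, fPiped :: Double -> Double
  fNamed x = let y = x * scale in
             let z = y + transform in
             g z
  fPiped   = g . (+ transform) . (* scale)

  -- Both compute the same thing; e.g. fNamed 4 == fPiped 4 == 3.0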

On Thu, 2007-08-30 at 18:17 +0200, Peter Hercek wrote:
I find the feature that the construct "let x = f x in expr" assigns the fixed point of f to x annoying.
Any alternative? Non-recursive assignments?
  f x =
    let x = x * scale in
    let x = x + transform in
    g x
I think it is often better to avoid temporary names. I guess this is a simplified example, but I think it is better to write:

  f x = g (transform + scale * x)

Or even use point-free style to avoid the fixpoint?

  f = g . (+ transform) . (* scale)

-k

What is so bad about

  f x = g x''
    where x'' = x' + transform
          x'  = x * scale

(if you really hate inventing temporary names, that is).

ok wrote:

What is so bad about

  f x = g x''
    where x'' = x' + transform
          x'  = x * scale

(if you really hate inventing temporary names, that is).

There's nothing at all wrong with this, assuming it's what you meant to type :-), and it might even correspond perfectly to the mathematical notation used in some textbook. But I would argue that this example is pretty simple, and that if there were a lot of xs and x's and x''s then the chance of making a typing mistake is greater, I believe, than if you had used x, xscaled, and xtransformed. (On the other hand this is all pretty subjective... :-)

-Paul

Paul Hudak wrote:
ok wrote:

What is so bad about

  f x = g x''
    where x'' = x' + transform
          x'  = x * scale

(if you really hate inventing temporary names, that is).

There's nothing at all wrong with this, assuming it's what you meant to type :-), and it might even correspond perfectly to the mathematical notation used in some textbook. But I would argue that this example is pretty simple, and that if there were a lot of xs and x's and x''s then the chance of making a typing mistake is greater, I believe, than if you had used x, xscaled, and xtransformed. (On the other hand this is all pretty subjective... :-)
OMG! "Mathematical" and "subjective" in the same sentence! ;-) Personally, I find that if I've got more than 2 of the thing, I number them rather than attach multiple primes... but that's just me.

On this topic, I'm just now teaching myself Haskell and am running into a
whole range of stylistic questions like this. My big bad Java habits tend
towards long camelCase function names, which I'm trying to wean myself off.
But the variable conventions are the real issue. As far as I can tell, if
you're writing type descriptors you just use [a,b,c...]. But what are the
other general patterns for metasyntactic variable use?
In particular for a function -- n, m, etc. or x, y, etc.? What about for f' defined in a let block of f? If I use x, y at the top level I need to use another set below -- is that where x', y' are more appropriate, or x1, y1?
For tuples I tend to pattern match with (a,b), and for lists I tend to use
(h:r) for head and rest. Are there other, more universal standards for these
sorts of things? Another related question is whether using these short sweet
variable names makes sense, or whether I should try to use more descriptive
ones. Obviously these are pretty small points of style, but I'm just trying
to nail the Haskell idiom as closely as possible.
I've already found myself falling into namespace traps though, playing
around with math functions & etc. where an adequately descriptive name for
one context in maybe a let block steps on something which deserves the name
equally well in the global namespace. I try to keep the global namespace
pretty clean, but things keep popping up -- I'm thinking maybe another
convention would come to the rescue here, like camelCase or hyphen-ated for
global and all lowers for local?
--S.
On 8/31/07, Paul Hudak
ok wrote:

What is so bad about

  f x = g x''
    where x'' = x' + transform
          x'  = x * scale

(if you really hate inventing temporary names, that is).

There's nothing at all wrong with this, assuming it's what you meant to type :-), and it might even correspond perfectly to the mathematical notation used in some textbook. But I would argue that this example is pretty simple, and that if there were a lot of xs and x's and x''s then the chance of making a typing mistake is greater, I believe, than if you had used x, xscaled, and xtransformed. (On the other hand this is all pretty subjective... :-)
-Paul

On Aug 31, 2007, at 16:01 , Sterling Clover wrote:
In particular for a function -- n, m, etc or x, y, etc? What about for f' defined in a let block of f? If I use x y at the top level I need to use another set below -- is that where x' y' are more appropriate, or x1, y1?
Usual style is x',y'. For longer names, camelCase is the usual convention but some libraries which basically import everything from C via the FFI use C_style_names. Imported constants/macros which are uppercase with _ tend to be mapped to tHIS_KIND_OF_NAME (see for example the Win32 package). One thing to watch out for is that monads tend to carry their own metaconventions: a generic monad is "m", a reader monad is "r", a state monad is "s", functors are "f".
For tuples I tend to pattern match with (a,b), and for lists I tend to use (h:r) for head and rest. Are there
The common convention for lists is e.g. (x:xs) (the latter is "x-es").
other, more universal standards for these sorts of things? Another related question is whether using these short sweet variable names makes sense, or whether I should try to use more descriptive ones.
I generally use something short but descriptive when writing something specific, and single-character generic names when writing something that's generic and/or polymorphic.

--
brandon s. allbery    [solaris,freebsd,perl,pugs,haskell]    allbery@kf8nh.com
system administrator  [openafs,heimdal,too many hats]     allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university         KF8NH

On Fri, 31 Aug 2007, Brandon S. Allbery KF8NH wrote:
On Aug 31, 2007, at 16:01 , Sterling Clover wrote:
In particular for a function -- n, m, etc or x, y, etc? What about for f' defined in a let block of f? If I use x y at the top level I need to use another set below -- is that where x' y' are more appropriate, or x1, y1?
Usual style is x',y'.
This seems to be a matter of taste. x1, y1 are certainly ok, too.
For longer names, camelCase is the usual convention but some libraries which basically import everything from C via the FFI use C_style_names. Imported constants/macros which are uppercase with _ tend to be mapped to tHIS_KIND_OF_NAME (see for example the Win32 package).
Is this considered an accident or a feature? I'd prefer to convert these identifiers to thisKindOfName. There is also a collection of articles about style: http://www.haskell.org/haskellwiki/Category:Style

On Thu, 2007-08-30 at 18:17 +0200, Peter Hercek wrote:
Hi,
I find the feature that the construct "let x = f x in expr" assigns the fixed point of f to x annoying. The reason is that I cannot simply chain modifications of a variable like e.g. this:

  f x =
    let x = x * scale in
    let x = x + transform in
    g x
The common answer is that such code is considered ugly in most circumstances. Nevertheless, one solution would be to use the Identity monad and write it as

  f x = runIdentity $ do
    x <- return (x * scale)
    x <- return (x + transform)
    return (g x)
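(Spelled out as a compilable module -- with made-up values for scale and transform, since they aren't defined anywhere in the thread:

  import Control.Monad.Identity (runIdentity)

  scale, transform :: Double
  scale     = 2.0   -- placeholder
  transform = 1.0   -- placeholder

  g :: Double -> Double
  g = negate        -- placeholder

  f :: Double -> Double
  f x = runIdentity $ do
    x <- return (x * scale)      -- each <- binds a fresh x,
    x <- return (x + transform)  -- shadowing the previous one
    return (g x)

Each x <- ... introduces a new, non-recursive binding, which is exactly why the trick works.)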
If one is lucky it results in a compile error; in worse cases it results in a stack overflow at runtime. The annoying part is inventing new variable names again and again for essentially the same thing, just to avoid the search/evaluation of the fixed point.
I suppose Haskell was designed so that it makes sense. The only usage I can see is like this:
let fact = \x -> if x == 0 then 1 else x * fact (x-1) in
... but that is not any shorter than:
let fact x = if x == 0 then 1 else x * fact (x-1) in
So the question is: what am I missing? Any nice use cases where the fixed point search is so good that it is worth the trouble of inventing new variable names for essentially the same stuff?
Peter.
Haskell is lazy; we can have (mutually) recursive values. The canonical example:

  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

Slightly more interesting:

  karplusStrong = y
    where y = map (\x -> 1-2*x) (take 50 (randoms (mkStdGen 1)))
              ++ zipWith (\x y -> (x+y)/2) y (tail y)

However, the real point is that you shouldn't be naming and renaming the "same" thing. Going back to your original example, it would be nicer to most to write it as

  f = g . transform displacement . scale factor

or pointfully

  f x = g (transform displacement (scale factor x))

with the appropriate combinators.
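As a quick sanity check of the self-referential definition (a tiny runnable sketch):

  fibs :: [Integer]
  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

  main :: IO ()
  main = print (take 10 fibs)   -- prints [0,1,1,2,3,5,8,13,21,34]

Laziness only ever demands elements that are already determined, so the "fixed point" here is doing real work rather than looping.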

Derek Elkins wrote:
On Thu, 2007-08-30 at 18:17 +0200, Peter Hercek wrote:
Hi,
I find the feature that the construct "let x = f x in expr" assigns the fixed point of f to x annoying. The reason is that I cannot simply chain modifications of a variable like e.g. this:

  f x =
    let x = x * scale in
    let x = x + transform in
    g x

The common answer is that such code is considered ugly in most circumstances. Nevertheless, one solution would be to use the Identity monad and write it as

  f x = runIdentity $ do
    x <- return (x * scale)
    x <- return (x + transform)
    return (g x)
This is nice but more complicated. The goal should be to have it as simple as possible.
Haskell is lazy; we can have (mutually) recursive values. The canonical example:

  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

Slightly more interesting:

  karplusStrong = y
    where y = map (\x -> 1-2*x) (take 50 (randoms (mkStdGen 1)))
              ++ zipWith (\x y -> (x+y)/2) y (tail y)
This is a very nice argument! Thanks. I actually used it myself, but did not realize it when I was looking for the pro/contra arguments. Together with the fact that it is not good style to use the same name for intermediate results anyway, this might make it worth it.
However, the real point is that you shouldn't be naming and renaming the "same" thing. Going back to your original example, it would be nicer to most to write it as

  f = g . transform displacement . scale factor

or pointfully

  f x = g (transform displacement (scale factor x))

with the appropriate combinators.
Essentially the same idea as the one from Brent Yorgey. It works fine as long as the operations fit easily on one line. Beyond that it does not scale so well: when the expression needs to span more lines it interferes with the automatic insertion of curly braces and semicolons by the layout rules (which are influenced by the context). Of course, when there are more transformations it makes sense to name the intermediate results differently, but even a few transformations may not fit easily when identifier names are long.

Thanks, Peter.

Peter Hercek wrote:
  f = g . transform displacement . scale factor

or pointfully

  f x = g (transform displacement (scale factor x))

with the appropriate combinators.

Essentially the same idea as the one from Brent Yorgey. It works fine as long as the operations fit easily on one line. Beyond that it does not scale so well: when the expression needs to span more lines it interferes with the automatic insertion of curly braces and semicolons by the layout rules (which are influenced by the context). Of course, when there are more transformations it makes sense to name the intermediate results differently, but even a few transformations may not fit easily when identifier names are long.
I have no idea what you're talking about. It works fine on multiple lines:

  f x = g
      . transform displacement
      . scale factor
      $ x

is perfectly valid. Tastes may vary, you might prefer trailing .s to leading .s, but the idea is the same.

Jules

Jules Bean wrote:
I have no idea what you're talking about. It works fine on multiple lines:
  f x = g
      . transform displacement
      . scale factor
      $ x
is perfectly valid.
Yes, it is. It is not an issue if you prefer to indent based on the previous line instead of "always by the same small amount of spaces". The problem happens when the amount of spaces is less than 5. E.g. this does not work:

  h x = x
  f x =
    let g x = h .
      (+ 5) .
      (* 2) $ x
    in g x

I do not like to indent based on previous lines, since it has to be re-indented when the previous lines change (e.g. because of an identifier rename). So I indent based on previous lines only when it adds a LOT to readability. Of course a fixed indentation by a big enough number of spaces would do, but then your code gets too wide if more blocks are nested, and I like to be within 110 chars at worst (preferably 80). This is probably not a problem for the "where" keyword; the next longest keyword from the "let", "where", "do", "of" set is then "let", which would mean that 5 should be enough -- not that bad, but it looks like much to me. The other option is to leave the "let" keyword alone on its line and then indent "g x" by one and the function body by two indentation units from the "let" keyword. So from my point of view the point-free style is great if it fits on one line well ... otherwise it depends.

Thanks, Peter.

Peter Hercek wrote:
Jules Bean wrote:
I have no idea what you're talking about. It works fine on multiple lines:
  f x = g
      . transform displacement
      . scale factor
      $ x
is perfectly valid.
Yes, it is. It is not an issue if you prefer to indent based on the previous line instead of "always by the same small amount of spaces". The problem happens when the amount of spaces is less than 5. E.g. this does not work:

  h x = x
  f x =
    let g x = h .
      (+ 5) .
      (* 2) $ x
    in g x

Indeed, no, but this does:

  h x = x
  f x =
    let {g x = h .
      (+ 5) .
      (* 2) $ x}
    in g x

In other words, if you don't like the layout conventions of layout style, then don't use layout style.
I do not like to indent based on previous lines, since it has to be re-indented when the previous lines change (e.g. because of an identifier rename).
I can't say that's ever bothered me very much, but I agree it is a minor annoyance. It is made fairly painless by a decent editor.
So I indent based on previous lines only when it adds a LOT to readability. Of course a fixed indentation by a big enough number of spaces would do, but then your code gets too wide if more blocks are nested, and I like to be within 110 chars at worst (preferably 80).
I certainly try to keep my lines narrow, (normally under 80) but I've never found any trouble doing that. I don't tend to nest very deeply.
This is probably not a problem for the "where" keyword; the next longest keyword from the "let", "where", "do", "of" set is then "let", which would mean that 5 should be enough -- not that bad, but it looks like much to me. The other option is to leave the "let" keyword alone on its line and then indent "g x" by one and the function body by two indentation units from the "let" keyword. So from my point of view the point-free style is great if it fits on one line well ... otherwise it depends.
I think thinking of 'indentation units' is a bit harmful in layout code, for roughly the reasons you describe. I would suggest either use layout (and indent at the places that layout suggests you should), or don't use layout (use {} like C or Perl). Jules

OK, so it's only tangentially related, but... do you have *any idea* how many times I've written something like

  let x = (some complex function of x)
  in (some other complex function of x)

when in fact what I *meant* to do was type x' instead of x?!

It's really maddening to write 50,000 lines of code, eventually get it to compile, run it, and have the program lock up and start consuming so much virtual memory that the entire PC becomes unstable within seconds. (This isn't helped by the fact that Ctrl+C doesn't seem to make either GHCi or GHC-compiled programs halt...) Now you have 50,000 lines of otherwise untested code, and there's a bug in there *somewhere*... good luck.

Obviously you might very well have *meant* to write x = f x. But would it be possible to add some kind of optional compiler warning to find such assignments? It can be a nightmare trying to track down where you made the mistake...

It's really maddening to write 50,000 lines of code, eventually get it to compile, run it, and have the program lock up and start consuming so much virtual memory that the entire PC becomes unstable within seconds.
(This isn't helped by the fact that Ctrl+C doesn't seem to make either
GHCi or GHC-compiled programs halt...) Now you have 50,000 lines of otherwise untested code, and there's a bug within it *somewhere*... good luck.
Well, this is why you should test your program in bits and pieces before you get to that point. Writing 50,000 LOC before you even run your first test is a horrible idea in any programming language. -Brent

Brent Yorgey quotes:
It's really maddening to write 50,000 lines of code, eventually get it to compile, run it, and have the program lock up and start consuming so much virtual memory that the entire PC becomes unstable within seconds. ...
Well, this is why you should test your program in bits and pieces before you get to that point. Writing 50,000 LOC before you even run your first test is a horrible idea in any programming language.
I would rephrase this in a more brutal way. Writing 50000 lines of code in a language which seems to be badly mastered is a suicidal exercise.

The let x = f x construct touches the essence of Haskell, its laziness, and it is used as a co-recursive way to replace loops. If it appears as the effect of forgetting the prime in x', use variables with long, meaningful names. This will save you some frustration.

==

An anecdote. Hundreds of years ago, when I taught programming in Cracow, Poland, we had some students from Vietnam (North, of course). One of them wrote programs where *all* variable names were ... you guessed it, Vietnamese. It was easy for him to remember, no errors, no confusion. The only touchy point in this affair was that my group also counted three Vietnamese girls, who, whenever the boy with his poker face publicly produced his solution, turned red and began to giggle, or angrily shouted something I couldn't understand. Had I noted or memorized those programs, I would probably have learnt a good collection of particularly succulent Vietnamese swearwords.

==

Perhaps you should decorate your program a bit as well?

Jerzy Karczmarczuk

Another interesting example of the x = f x use:

  coins = [1,2,5,10,20,50,100,200]

  beautiful = foldl
      (\without p ->
          let (poor, rich) = splitAt p without
              with = poor ++ zipWith (++) (map (map (p:)) with) rich
          in with)
      ([[]] : repeat [])

I don't remember who wrote this code (I rewrote it from memory since it impressed me quite a bit), but it's a very fast and beautiful (in my eyes at least) solution to the "menu" problem: (beautiful coins !! 200) would give you all the sets of coins you could use to pay 200, in less than 40ms on my computer...

But, even more trivially... you use this all the time when you define recursive functions, you know? You would need to add a "rec" keyword to the language if you disallowed this.

Myself I'm a big fan of the point-free style (to a limit) and find that it scales very well indeed once you begin to name the combinations of functions you want to use.

-- Jedaï
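If you don't want to wade through all the solutions, counting them is a good check -- assuming I remembered the code correctly, this should print the well-known answer for this coin set:

  ghci> length (beautiful coins !! 200)
  73682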

Chaddaï Fouché wrote:
But, even more trivially... you use this all the time when you define recursive functions, you know? You would need to add a "rec" keyword to the language if you disallowed this.
A great and new reason too. Trying to make a distinction based on the presence of a formal argument would be bad, since point-free style is useful too. Well, in my opinion, only to a certain degree, since from some point on it is more understandable to name the intermediate results (points). Thanks, Peter.

Brent Yorgey wrote:
It's really maddening to write 50,000 lines of code, eventually get it to compile, run it, and have the program lock up and start consuming so much virtual memory that the entire PC becomes unstable within seconds.
(This isn't helped by the fact that Ctrl+C doesn't seem to make either GHCi or GHC-compiled programs halt...) Now you have 50,000 lines of otherwise untested code, and there's a bug within it *somewhere*... good luck.
Well, this is why you should test your program in bits and pieces before you get to that point. Writing 50,000 LOC before you even run your first test is a horrible idea in any programming language.
Horrible? Yes. Avoidable? Not always, sadly... (NB. 50,000 is an exaggeration. I've never written a program that large in my entire life in any programming language I've ever used.) The problem is that, depending on the program, sometimes you have to write quite a lot of infrastructure before you get to the point where there's anything finished enough to test. Obviously it's better to avoid that happening, but that's easier said than done!

On Thu, Aug 30, 2007 at 06:16:12PM +0100, Andrew Coppin wrote:
Obviously you might very well have *meant* to write x = f x. But would it be possible to add some kind of optional compiler warning to find such assignments? It can be a nightmare trying to track down where you made the mistake...
If you enable -Wall, ghc will warn you about this, provided that x was already bound in this context. -- David Roundy http://www.darcs.net
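For example (the exact wording of the warning varies between GHC versions):

  -- Shadow.hs; compile with: ghc -Wall Shadow.hs
  module Shadow where

  f :: Int -> Int
  f x = let x = x * 2 in x
  -- ghc -Wall reports something like:
  --   Warning: This binding for `x' shadows the existing binding
  -- (and this inner x is also defined in terms of itself, so
  --  evaluating f would loop -- precisely the bug in question)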

David Roundy wrote:
On Thu, Aug 30, 2007 at 06:16:12PM +0100, Andrew Coppin wrote:
Obviously you might very well have *meant* to write x = f x. But would it be possible to add some kind of optional compiler warning to find such assignments? It can be a nightmare trying to track down where you made the mistake...
If you enable -Wall, ghc will warn you about this, provided that x was already bound in this context.
Most excellent. GHC saves the day again...

On 8/30/07, Andrew Coppin
Obviously you might very well have *meant* to write x = f x. But would it be possible to add some kind of optional compiler warning to find such assignments?
The thing that convinced me to learn Haskell in the first place was the fact that you could write x = f x. Equations where you refer to the same variable on the left and right hand sides are the bread and butter of mathematics, and I was really pleased to find a programming language that let me do the same. So to me the idea of having a warning for this is a bit like putting a sign on bottled water saying "Warning: Contents may be wet". But that's just me. :-) Still, it might be useful for the compiler to warn when a newly introduced name shadows another one. -- Dan

Dan Piponi wrote:
On 8/30/07, Andrew Coppin
wrote:

Obviously you might very well have *meant* to write x = f x. But would it be possible to add some kind of optional compiler warning to find such assignments?
The thing that convinced me to learn Haskell in the first place was the fact that you could write x = f x. Equations where you refer to the same variable on the left and right hand sides are the bread and butter of mathematics, and I was really pleased to find a programming language that let me do the same.
Yeah, but... programs aren't like mathematics. I know people claim that they are, but they aren't. In mathematics, if you write "x = f y" you mean that these two expressions are equal. In Haskell, if you say "x = f y" you mean *make* them equal! (Let us not even go into the times when an expression like "z = f z" actually means "z[n+1] = f(z[n])"...)
So to me the idea of having a warning for this is a bit like putting a sign on bottled water saying "Warning: Contents may be wet". But that's just me. :-)
Well, it's definitely a valid thing to want to do, which is why I asked for a *warning* not an error. ;-) Still, this seems to be an extremely common way for me to hurt myself, so...
Still, it might be useful to for the compiler to warn when a newly introduced name shadows another one.
...or that...

On 8/30/07, Andrew Coppin
Yeah, but... programs aren't like mathematics. I know people claim that they are, but they aren't.
But the raison d'etre of Haskell is to make programming more like mathematics. That motivates everything from the fact that it's a declarative language, and the support for equational reasoning, to the fact that IO happens in a monad, and the option to use primes on variable names.
In mathematics, if you write "x = f y" you mean that these two expressions are equal. In Haskell, if you say "x = f y" you mean *make* them equal!
Haskell is a declarative language, not an imperative language. When you write "x = f x" in Haskell, you're declaring to the compiler that x equals f x. In an imperative language like Java, the line x = f(x) gives the compiler the imperative to emit instructions to store the value of f(x) in a 'box' called x. In Haskell, there is no box. (When you get down to the nuts and bolts, a Haskell compiler and a Java compiler may ultimately actually do the same thing here, but the way you think about a language is as important as what instructions the code generator emits.) -- Dan
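A one-liner that makes the "no box" point (a deliberately trivial sketch):

  xs :: [Int]
  xs = 1 : xs   -- a definition, not an update: xs is the infinite list of 1s

  -- take 5 xs  ==>  [1,1,1,1,1]

There is no moment at which xs gets "assigned"; the equation simply holds.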

Dan Piponi writes:
In mathematics, if you write "x = f y" you mean that these two expressions are equal. In Haskell, if you say "x = f y" you mean *make* them equal!
Haskell is a declarative language, not an imperative language. When you write "x = f x" in Haskell, you're declaring to the compiler that x equals f x. In an imperative language like Java, the line x = f(x) gives the compiler the imperative to emit instructions to store the value of f(x) in a 'box' called x. In Haskell, there is no box.
Well, there are boxes... But there are also thunks and latent, yet-unevaluated graphs...

Anyway, I believe strongly that ALL people who have problems with the Haskell protocol, and they are numerous (I teach a good sample of them), should be encouraged to learn Prolog. IN DEPTH, and I mean it, Andrew Coppin and Peter Hercek!

In Prolog A=B is unification, which is a bit more than equality, and something much more aggressive than an assignment. When you REALLY understand unification, it will be easier to see the lazy instantiation of the Haskell assignment, and, additionally, it becomes much easier to understand the automatic inference of types, which sooner or later must be harnessed by all Haskell programmers...

The best. Jerzy Karczmarczuk

On Thu, 2007-08-30 at 23:58 +0200, jerzy.karczmarczuk@info.unicaen.fr wrote:
Dan Piponi writes:
In mathematics, if you write "x = f y" you mean that these two expressions are equal. In Haskell, if you say "x = f y" you mean *make* them equal!
Haskell is a declarative language, not an imperative language. When you write "x = f x" in Haskell, you're declaring to the compiler that x equals f x. In an imperative language like Java, the line x = f(x) gives the compiler the imperative to emit instructions to store the value of f(x) in a 'box' called x. In Haskell, there is no box.
Well, there are boxes... But there are also thunks and latent, yet-unevaluated graphs...
Anyway, I believe strongly that ALL people who have problems with the Haskell protocol, and they are numerous (I teach a good sample of them), should be encouraged to learn Prolog. IN DEPTH, and I mean it, Andrew Coppin and Peter Hercek!

In Prolog A=B is unification, which is a bit more than equality, and something much more aggressive than an assignment. When you REALLY understand unification, it will be easier to see the lazy instantiation of the Haskell assignment, and, additionally, it becomes much easier to understand the automatic inference of types, which sooner or later must be harnessed by all Haskell programmers...
One should learn Prolog anyway.

jerzy.karczmarczuk@info.unicaen.fr skrev:
Anyway, I believe strongly that ALL people who have problems with the Haskell protocol, and they are numerous (I teach a good sample of them), should be encouraged to learn Prolog. IN DEPTH,
Do you have a recommendation on how to do this? (e.g., books, web-pages, (available) lecture notes, problem sets) / johan

Yes, I know, this is a Haskell list. So, I apologize, but not too much... Johan Grönqvist cites me:
Anyway, I believe strongly that ALL people who have problems... should be encouraged to learn Prolog. IN DEPTH,
Do you have a recommendation on how to do this? (e.g., books, web-pages, (available) lecture notes, problem sets)
First, install a decent Prolog on your machine. There are plenty:
http://kti.mff.cuni.cz/~bartak/prolog/implementations.html
http://www.cs.cmu.edu/Groups/AI/html/faqs/lang/prolog/prg/part2/faq-doc-2.html

You may wish to install the free version of Visual Prolog. My favourite is SWI Prolog: http://www.swi-prolog.org/ (University of Amsterdam). The documentation is complete, and the reference manual contains references to such standard books as Clocksin & Mellish, or Sterling & Shapiro. The last one is very, very instructive. Also, Prolog Programming for Artificial Intelligence, written by Bratko, is very useful. There are collections of examples from this book on the web, I don't remember where. Try here:
http://promethee.philo.ulg.ac.be/engdep1/download/prolog/bratko/

On-line there are thousands of examples, tutorials, etc. J. Fisher (there is an example of cut, especially for A. Coppin):
http://www.csupomona.edu/~jrfisher/www/prolog_tutorial/contents.html
others:
http://www.coli.uni-saarland.de/~kris/learn-prolog-now/lpnpage.php?pageid=online
http://computing.unn.ac.uk/staff/cgpb4/prologbook/book.html
http://www.cs.nuim.ie/~jpower/Courses/PROLOG/
Etc., etc. Really!

Examples. For example:
http://www.csse.monash.edu.au/~lloyd/tildeLogic/Prolog.toy/Examples/
http://kti.mff.cuni.cz/~bartak/prolog/learning.html
http://www.visual-prolog.com/vip/example/

and of course there are discussion lists, FAQs, etc. Ask Google...

===================

Andrew Coppin writes:
I did once try to learn Prolog. And failed. Miserably.
I just couldn't bend my head around how the Prolog interpreter manages to make seemingly "impossible" leaps of deduction. (It's a *machine*! How can it deduce arbitrarily complex conclusions from any arbitrary set of axioms? That requires *intelligence*!) And yet, in other, seemingly identical cases, it utterly fails to deduce patently *obvious* results... really weird!
One of the standard exercises in Prolog is the construction of a meta-interpreter of Prolog in Prolog. While this is cheating, I recommend it to you. It opens eyes. Actually, there are three basic items to learn while trying to master Prolog, after having dealt with the syntactic essentials, term construction, and the like:

1. The unification, which shows the "ultimate" instance of pattern matching, and is useful for recognizing some techniques for the automatic inference of types in functional languages.

2. The usage of the unbound "logical variable", which sometimes permits doing things which require laziness in Haskell.

3. The control backtracking, which is at the heart of logical non-determinism. Now, non-deterministic algorithms in Haskell are usually implemented using the *data backtracking*, or the List Monad. The control backtrack, via, say, success/failure continuations, is more difficult; they are rarely taught, and problematic because of strong typing.

Prolog strategies are straightforward, and I simply cannot understand the comments of Andrew Coppin. Which arbitrary set of conclusions?? Which patently obvious results not derivable?? Be kind, give some examples, otherwise people may suspect that you are issuing vacuous statements...

The best. Jerzy Karczmarczuk

Cool.... I had Prolog for my Spectrum, many years ago (83?), but I stopped using it when I realized it didn't have any input/output capabilities beyond print, and no way to "escape" from the Prolog "bubble", e.g. FFI (not sure what FFI stands for, but I think it is a way for Haskell to "escape" into other libraries?). Thanks for the links, it sounds fun.

One of the standard exercises in Prolog is the construction of a meta-interpreter of Prolog in Prolog. While this is cheating, I recommend it to you. It opens eyes.
Ever tried implementing Haskell in Haskell? ;-)
Prolog strategies are straightforward, and I simply cannot understand the comments of Andrew Coppin. Which arbitrary set of conclusions?? Which patently obvious results not derivable?? Be kind, give some examples, otherwise people may suspect that you are issuing vacuous statements...
Read my whole message. What I was saying (in essence) is that Prolog seemed to be performing "impossible" feats of logical deduction - until I saw a unification algorithm implemented in Haskell, and then it all made sense.

(They showed an example where you basically program in a list of who is related to who, and then the computer suddenly seems to be able to magically deduce arbitrary family relationships - without any code for doing this being defined. This seemed utterly far-out to me... I'm not used to computers being able to "think" for themselves. I'm more used to having them blindly follow whatever broken sequence of commands you feed to them... And yet, given a set of facts, this Prolog interpreter seemed to be able to magically derive arbitrarily complex conclusions from them. Double-impossible! Until I learned how it's implemented...)

Having said all that, I still don't "get" what the purpose of the "cut" operator is. I also failed to understand the Prolog syntax description. (What the heck is an "atom" when it's at home? I thought an atom is a unit composed of protons and electrons...)

I can certainly see why Prolog would be very useful for certain types of problems. As it happens, not the kind of problems that usually interest me. ;-)
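(For anyone as confused as I was: the core of it turns out to be startlingly small. Something like this - a from-memory sketch, first-order terms only, and with no occurs check, which I gather matches standard Prolog:

  import Control.Monad (foldM)
  import qualified Data.Map as M

  data Term = Var String | App String [Term] deriving (Eq, Show)
  type Subst = M.Map String Term

  -- Chase variable bindings until we reach an unbound variable
  -- or a function symbol.
  walk :: Subst -> Term -> Term
  walk s t@(Var v) = maybe t (walk s) (M.lookup v s)
  walk _ t         = t

  unify :: Subst -> Term -> Term -> Maybe Subst
  unify s a b = case (walk s a, walk s b) of
    (Var u, Var v) | u == v -> Just s
    (Var u, t)              -> Just (M.insert u t s)
    (t, Var v)              -> Just (M.insert v t s)
    (App f as, App g bs)
      | f == g && length as == length bs
                            -> foldM (\s' (x, y) -> unify s' x y) s (zip as bs)
    _                       -> Nothing

  -- e.g. unify M.empty (App "f" [Var "X"]) (App "f" [App "a" []])
  --        ==> Just (fromList [("X", App "a" [])])

All the family-tree "magic" is just this plus a database of facts and backtracking search.)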

Cut is a means of preventing backtracking beyond that point - it
prunes the potential search space saying the answer must be built on
the current set of bindings. (Lots of work went into how automatically
get "cut's" into programs to make them efficient but without the
programmer having to worry about them).
Atoms: think unique symbols that are elements in the set which is the
program's universe of discourse - one that is closed (no infinities
there please). All it's doing is unifying things to see if they fit the
rules. It is one large parser that gives you back the bindings it made
on the way.
I do remember the day when a Prolog researcher was fiercely
defending that Prolog could compute solutions to problems that were
not achievable with "ordinary Turing complete languages" - nothing as
ugly as a rampaging mob of Computer Scientists!
I've even written (late 80's) a program in Prolog that performed
real-time subtitling for the deaf, which I'm told is still being used
out there.....
Would I use it now? - never - it may give you an answer but rarely
does using it give you understanding and you can always code up the
searching algorithms if you have to go that brute force. And in the
end it is the understanding that is reusable, not the answer.
Neil
On 02/09/07, Andrew Coppin
One of the standard exercises in Prolog is the construction of a meta-interpreter of Prolog in Prolog. While this is cheating, I recommend it to you. It opens eyes.
Ever tried implementing Haskell in Haskell? ;-)
Prolog strategies are straightforward, and I simply cannot understand the comments of Andrew Coppin. Which arbitrary set of conclusions?? Which patently obvious results not derivable?? Be kind, give some examples, otherwise people may suspect that you are issuing vacuous statements...
Read my whole message. What I was saying (in essence) is that Prolog seemed to be performing "impossible" feats of logical deduction - until I saw a unification algorithm implemented in Haskell, and then it all made sense.
(They showed an example where you basically program in a list of who is related to who, and then the computer suddenly seems to be able to magically deduce arbitrary family relationships - without any code for doing this being defined. This seemed utterly far-out to me... I'm not used to computers being able to "think" for themselves. I'm more used to having them blindly follow whatever broken sequence of commands you feed to them... And yet, given a set of facts, this Prolog interpreter seemed to be able to magically derive arbitrarily complex conclusions from them. Double-impossible! Until I learned how it's implemented...)
Having said all that, I still don't "get" what the purpose of the "cut" operator is. I also failed to understand the Prolog syntax description. (What the heck is an "atom" when it's at home? I thought an atom is a unit composed of protons and electrons...)
I can certainly see why Prolog would be very useful for certain types of problems. As it happens, not the kind of problems that usually interest me. ;-)

Andrew Coppin writes:
Ever tried implementing Haskell in Haskell? ;-)
Seriously: Haskell is a *complicated* language, needing a parser, which by itself is a non-trivial exercise. Moreover, it has a type-inference engine, which may be simulated, sure, but Haskell in Haskell is a tough job. However, typing Haskell in Haskell is doable, see: http://citeseer.ist.psu.edu/424440.html The point is that Mark Jones is a well known Haskell guru, the creator of Gofer/Hugs, not an average student... Lisp/Scheme in Scheme is simpler, and is also a pretty standard exercise. Prolog is, also from this perspective, very simple. Terms and clauses have the same syntax; the "program", i.e., the database of assertions, can be dynamically extended, etc. Real fun.
Prolog strategies are straightforward, and I simply cannot understand the comments of Andrew Coppin. Which arbitrary set of conclusions?? Which patently obvious results not derivable?? Be kind, give some examples, otherwise people may suspect that you are issuing vacuous statements...
Read my whole message. What I was saying (in essence) is that Prolog seemed to be performing "impossible" feats of logical deduction - until I saw a unification algorithm implemented in Haskell, and then it all made sense.
(They showed an example where you basically program in a list of who is related to who, and then the computer suddenly seems to be able to magically deduce arbitrary family relationships - without any code for doing this being defined. This seemed utterly far-out to me... I'm not used to computers begin able to "think" for themselves. I'm more used to having them blindly follow whatever broken sequence of commands you feed to them... And yet, given a set of facts, this Prolog interpreter seemed to be able to magically derive arbitrarily complex conclusions from them. Double-impossible! Until I learned how it's implemented...)
My deepest congratulations. It seems that you are the first person in my pedagogical life who got acquainted with unification through its implementation in Haskell. Gosh, why did I give my students the assignment to construct a unifier in a functional language, when I was absolutely sure that they understood the unification itself well?... Actually, perhaps if you tried to do it otherwise, you *would* recognize that programs in Prolog do not "think", but just constitute a database of clauses/terms, which undergo a sufficiently powerful pattern matching. It is the only "intelligence" inside, helped by the logic protocol, different from the functional one, which ensures that a predicate in Prolog may treat *all* its parameters either as input or as output, while in Haskell this is fixed.
Having said all that, I still don't "get" what the purpose of the "cut" operator is.
Imagine that your life is a path through a tree. Depth first. At every branching you choose a branch, and you continue. When you are stuck, you remember that you have left a trail behind you, and you backtrack to the last choice point. Then you choose another path, the previous one being marked as 'seen'. Well, the cut is a commitment: you cut off your way back. Now you must succeed, i.e., find a terminal node which gives you what you want, or your failure is definitive; you jump out of the system. There are hundreds of easy examples of cuts, e.g., when descending the tree consumes resources which cannot be restored. The cut is sometimes criticized as something "not logical", a pragmatic device. OK.
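Since this is, after all, a Haskell list: the list monad gives a fair model of the search, and taking the first solution is a crude stand-in for the commitment. A sketch only - a real cut is more local and more subtle than this:

  -- All ways to split 6 into x + y with x, y drawn from 1..5.
  pairs :: [(Int, Int)]
  pairs = do
    x <- [1 .. 5]           -- a choice point
    y <- [1 .. 5]           -- another choice point
    True <- [x + y == 6]    -- a failed match here backtracks to the last choice
    return (x, y)

  -- "Committing" to the first success; thanks to laziness the rest of
  -- the tree is never explored, much as a cut would prune it.
  committed :: [(Int, Int)]
  committed = take 1 pairs   -- [(1,5)]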
I also failed to understand the Prolog syntax description. (What the heck is an "atom" when it's at home? I thought an atom is a unit composed of protons and electrons...)
Now you are pulling my leg. An "atom" means that you cannot split it into constituents; this name came from Lisp into other languages, but the meaning comes from Democritus, who didn't know about protons. I won't say more, just that if you want to be really funny, you should not forget the neutrons.
http://en.wikipedia.org/wiki/Atom
also: http://en.wikipedia.org/wiki/Atom_%28standard%29
http://en.wikipedia.org/wiki/Prolog
also: http://en.wikipedia.org/wiki/Democritus
I can certainly see why Prolog would be very useful for certain types of problems. As it happens, not the kind of problems that usually interest me. ;-)
Well, who am I or other readers to contradict it?... My point was that learning more than one language (and moreover, not so well) gives you a better perspective to learn the ESSENCE. Let us finish with two quotations from Democritus, who invented atoms:

# "Nature and instruction are similar; for instruction transforms the man." (DK 68 B 33)

# "If any man listens to my opinions, here recorded, with intelligence, he will achieve many things worthy of a good man, and avoid doing many unworthy things." (DK 68 B 35)

==============

Jerzy Karczmarczuk

As to whether Prolog is "dead" or not, it depends on your definition of "dead". Three years ago (not ten!) I made my living maintaining and developing a large application written in Prolog. That was actually an interesting experience, since one of the performance drivers was speed. As a result code was being perpetually tuned toward less non-determinism. You know what the limit is? Functional programming! At the time I did a little research looking for an FP language that was "ready for prime time" and for which the pain of the move for a large organization would have been acceptably low. Sadly, nothing came of it. I still think the application could have been profitably ported to a functional language.

Recently I have been experimenting with ECLiPSe, a Constraint Logic Programming system embedded within standard Prolog. I found several of the problems in the Euler Project were perfect candidates for attack by constraint programming. Yes, I could have written solutions that implemented some sort of search strategy, but there is a legitimate question of leverage. For example, Sudoku puzzles are very naturally viewed as constraint programming problems. But why should I write my own solution when the Sudoku solver provided as a demo could solve 50 problems in 0.10 seconds! By the way, I could get a fairly good idea of the "how and why" of the problem and its solution from the form of the demo code, and oh yes, I had already written my own Sudoku solver in OCaml a year or so ago.

To Jerzy's point -- I strongly believe that learning a language like Prolog is a good idea for two reasons -- first, it adds another tool to the programmer's toolkit, and second, it enlarges the programmer's view of ways to think about solving problems.

-- Bill Wood

On Sun, 2007-09-02 at 08:24 -0500, Bill Wood wrote:

As to whether Prolog is "dead" or not, it depends on your definition of "dead". Three years ago (not ten!) I made my living maintaining and developing a large application written in Prolog. That was actually an interesting experience, since one of the performance drivers was speed. As a result code was being perpetually tuned toward less non-determinism.

I've been following the discussion with interest, and I wonder what happened to Gödel, which promised to be a successor of Prolog. See the link for features, but http://www.cs.bris.ac.uk/~bowers/goedel.html was last updated in 1995. Does anybody know more?

Hans van Thiel

[snip]
To Jerzy's point -- I strongly believe that learning a language like Prolog is a good idea for two reasons -- first, it adds another tool to the programmer's toolkit, and second, it enlarges the programmer's view of ways to think about solving problems.
-- Bill Wood

G'day all.
Quoting Bill Wood
As to whether Prolog is "dead" or not, it depends on your definition of "dead". Three years ago (not ten!) I made my living maintaining and developing a large application written in Prolog.
Back when I was doing logic programming, 10 or so years ago, we used to chuckle at papers which referred to analyses which claimed to be fast "even on large 1000-line programs". I'm sure this isn't the case for you, but a typical Prolog programmer's idea of "large" is very different from a typical COBOL programmer's.
As a result code was being perpetually tuned toward less non-determinism. You know what the limit is? Functional programming!
Did you look at Mercury? Cheers, Andrew Bromage

On Mon, 2007-09-03 at 02:49 -0400, ajb@spamcop.net wrote: . . .
I'm sure this isn't the case for you, but a typical Prolog programmer's idea of "large" is very different from a typical COBOL programmer's.
Ever the diplomat? :-). Actually that is a fair observation. I don't think I ever heard a figure, but I would guess we were in the 20-50 KLOC range. The Prolog portion was solving an optimization problem. The problem was exacerbated by the fact that the code base was cloned onto around 20 different machines that were being fed by a 1-to-20 split of a stream of requests from the internet. At the back end was FFI access to a terabyte data store. Oh by the way, the operational goal was 30-second turn-around from the client end -- submit a request from a local office and have results 30 seconds later. . . .
Did you look at Mercury?
I looked seriously at Mercury. It was rejected for two managerial/political reasons. The first was that it did not appear that Mercury could support the scale of the application, at least partially because it appeared at the time that development was not very active. The second reason was that the development group had only made the transition from assembly language to Prolog within the past year or so, and the prospect of pulling the group through another paradigm shift made all the managers turn pale. I sent some email to the Mercury site asking their opinion as to whether it was up to the challenge; the response was a candid "probably not". I still sometimes think it might have worked, but the risks would have been horrendous. -- Bill Wood

jerzy.karczmarczuk@info.unicaen.fr wrote:
Andrew Coppin writes:
Ever tried implementing Haskell in Haskell? ;-)
Seriously: Haskell is a *complicated* language, needing a parser, which by itself is a non-trivial exercise.
It looks so simple on the surface... [Actually, so does cold fusion.]
Read my whole message. What I was saying (in essence) is that Prolog seemed to be performing "impossible" feats of logical deduction - until I saw a unification algorithm implemented in Haskell, and then it all made sense.
My deepest congratulations. It seems that you are the first person in my pedagogical life who got acquainted with unification through its implementation in Haskell.
Heh. Well. It took a *long* time, mind you... (BTW, that's a really rather good book, that.)
Having said all that, I still don't "get" what the purpose of the "cut" operator is.
Imagine that your life is a path through a tree. Depth first. At every branching you choose a branch, and you continue. When you are stuck, you remember that you have left a trail behind you, and you backtrack to the last choice point. Then you choose another path, the previous one being marked as 'seen'. Well, the cut is a commitment: you cut off your way back. Now you must succeed, i.e., find a terminal node which gives you what you want, or your failure is definitive; you jump out of the system. There are hundreds of easy examples of cuts, e.g., when descending the tree consumes resources which cannot be restored.
So... it's the opposite of Parsec's "try" combinator?
I also failed to understand the Prolog syntax description. (What the heck is an "atom" when it's at home? I thought an atom is a unit composed of protons and electrons...)
Now you are pulling my leg. An "atom" means that you cannot split it into constituents; this name came from Lisp into other languages, but the meaning comes from Democritus, who didn't know about protons. I won't say more, just that if you want to be really funny, you should not forget the neutrons.
Well, true. But they're so neutral! They don't even affect the chemical properties of the atom, only its physical ones. ;-) [Actually, I'm told this isn't *quite* true. For example, heavy water is mildly toxic due to the atom radius being slightly different or something which means that some enzyme somewhere doesn't quite work properly any more, or similar...] But anyway, I usually find that language syntax descriptions are written for maximum crypticness... This is presumably to enhance the idea that only extremely intelligent people can understand it or something.
I can certainly see why Prolog would be very useful for certain types of problems. As it happens, not the kind of problems that usually interest me. ;-)
Well, who am I or other readers to contradict it?... My point was that learning more than one language
Do you have Any Idea how many programming languages I've learned and used in my life so far? (Hint: lots.) ;-) [Hell, I was bored one day so I learned PostScript in my lunch break... Most normal people don't do that.]

Hugh Perkins writes:
Sooo.. what is the modern equivalent of Prolog?
Well, first, I wouldn't agree entirely that Prolog is "not modern". Anyway... If you want something with more bells and whistles, modularity, coroutining, more security (less power, e.g. no program auto-modification), etc. - see Mercury.
http://www.cs.mu.oz.au/research/mercury/information/features.html
http://en.wikipedia.org/wiki/Mercury_(programming_language)
http://www.cs.kuleuven.ac.be/~dtai/projects/ALP/newsletter/archive_93_96/net/systems/mercury1.html
and also: http://lambda-the-ultimate.org/node/890

Perhaps somebody can say more about constraint languages, which replaced Prolog in some contexts as well.

Have fun. Jerzy Karczmarczuk

Jerzy Karczmarczuk wrote
Perhaps somebody can say more about constraint languages which replaced
Yes please! For example, how correct is http://en.wikipedia.org/wiki/Constraint_programming?

It's fairly correct and up-to-date, although I note that the constraint example 'send more money' given is stated as 'Prolog' when it really uses ECLiPSe Prolog constraint syntax (alldifferent/1, labelling/1 and '#' integer constraints).

If you're really interested in constraint based languages then have a look at ECLiPSe (yes, shameless plug, and I'm biased ;-):

http://www.eclipse-clp.org

and in particular the language tutorial:

http://www.eclipse-clp.org/doc/tutorial/index.html

ECLiPSe is robust and mature enough for industrial application development, most notably by Cisco and CrossCore Optimization.

Incidentally, we've often seen a lot of traffic on here about Sudoku solvers and I've always wanted to post the ECLiPSe solution (neat when you consider the length of the sudoku/2 predicate ;-):

% ECLiPSe sample code - Sudoku problem
%
% This is a puzzle, originating from Japan, where you have a
% 9x9 grid, consisting of 9 3x3 sub-grids. The challenge is
% to fill the grid with numbers from 1 to 9 such that every row,
% every column, and every 3x3 sub-grid contains the digits 1 to 9.
% Some of these numbers are given, which is the way different
% instances of the problem are made. The solution is usually unique.
%
% Compile this file with ECLiPSe and call e.g.
% :- solve(1).
%
% Author: Joachim Schimpf, IC-Parc

:- lib(ic).
:- import alldifferent/1 from ic_global.

solve(ProblemName) :-
    problem(ProblemName, Board),
    print_board(Board),
    sudoku(3, Board),
    print_board(Board).

sudoku(N, Board) :-
    N2 is N*N,
    dim(Board, [N2,N2]),
    Board[1..N2,1..N2] :: 1..N2,
    ( for(I,1,N2), param(Board,N2) do
        Row is Board[I,1..N2],
        alldifferent(Row),
        Col is Board[1..N2,I],
        alldifferent(Col)
    ),
    ( multifor([I,J],1,N2,N), param(Board,N) do
        ( multifor([K,L],0,N-1), param(Board,I,J), foreach(X,SubSquare) do
            X is Board[I+K,J+L]
        ),
        alldifferent(SubSquare)
    ),
    term_variables(Board, Vars),
    labeling(Vars).

print_board(Board) :-
    dim(Board, [N,N]),
    ( for(I,1,N), param(Board,N) do
        ( for(J,1,N), param(Board,I) do
            X is Board[I,J],
            ( var(X) -> write(" _") ; printf(" %2d", [X]) )
        ),
        nl
    ),
    nl.

%----------------------------------------------------------------------
% Sample data
%----------------------------------------------------------------------

problem(1, [](
    [](_, _, 2, _, _, 5, _, 7, 9),
    [](1, _, 5, _, _, 3, _, _, _),
    [](_, _, _, _, _, _, 6, _, _),
    [](_, 1, _, 4, _, _, 9, _, _),
    [](_, 9, _, _, _, _, _, 8, _),
    [](_, _, 4, _, _, 9, _, 1, _),
    [](_, _, 9, _, _, _, _, _, _),
    [](_, _, _, 1, _, _, 3, _, 6),
    [](6, 8, _, 3, _, _, 4, _, _))).

Cheers
Andy

Peter Verswyvelen wrote:
Jerzy Karczmarczuk wrote
Perhaps somebody can say more about constraint languages which replaced Prolog in some contexts as well.
Yes please! For example, how correct is http://en.wikipedia.org/wiki/Constraint_programming?
--
Andrew Cheadle              email: a.cheadle@doc.ic.ac.uk
Department of Computing     http://www.doc.ic.ac.uk/~amc4/
Imperial College London

On Mon, 2007-09-03 at 07:46 +0100, Andrew Cheadle wrote: . . .
Incidentally, we've often seen a lot of traffic on here about Sudoku solvers and I've always wanted to post the ECLiPSe solution (neat when you consider the length of the sudoku/2 predicate ;-) :
Reasonably quick, too -- 50 sudokus in ca. 0.1 sec. (I did a Project Euler problem that required solving 50 sudokus with ECLiPSe). -- Bill Wood

On Mon, Sep 03, 2007 at 07:46:17AM +0100, Andrew Cheadle wrote:
% ECLiPSe sample code - Sudoku problem
%
% This is a puzzle, originating from Japan
I'm not sure about it. See the History section on http://en.wikipedia.org/wiki/Sudoku Apparently Japan was the place where Sudoku started to be popular and was given its current name, but it was invented elsewhere. Best regards Tomek

On Sun, 2007-09-02 at 22:52 +0800, Hugh Perkins wrote:
Sooo.. what is the modern equivalent of Prolog?
Because no one has said it quite this way: The modern equivalent of Prolog is Prolog. Most of the advancement in logic programming has either been folded back into Prolog or has been advanced within Prolog. There are, for example, constraint systems as add-ons to various Prolog implementations. The theory of logic programming, however, has massively advanced beyond Prolog producing languages much, much more beautiful than Prolog (not hard...). Unfortunately, they have not been made into practical languages, at least not successfully. I'm not aware of any real competitors to Prolog in the logic programming languages space.

On 9/3/07, Derek Elkins
Because no one has said it quite this way: The modern equivalent of Prolog is Prolog.
Ok, thanks. Just wanted to check that. (btw, just thought: when I was talking about FFI, I probably meant Forth, not Prolog. FFI for Prolog probably isn't that important.)

On Mon, 2007-09-03 at 07:43 +0800, Hugh Perkins wrote:
On 9/3/07, Derek Elkins
wrote: Because no one has said it quite this way: The modern equivalent of Prolog is Prolog.
I was just about to say the same thing :-); thanks, Derek. . . .
(btw, just thought: when I was talking about FFI, I probably meant Forth, not Prolog. FFI for Prolog probably isn't that important.)
No, Foreign Function Interfaces are as useful with Prolog as with any other high-level language. (BTW I thought the FFI for Forth was the Forth assembler; have things changed since FIG/F83?)

I just did a fast scan and found that XSB and SWI Prolog seem to be still quite active. If you have a few bucks (or euros), SICStus is also available. I was quite satisfied with XSB, though my experience is somewhat dated now. It is somewhat idiosyncratic (they're talking about getting closer to ISO Prolog with their latest release). I have also had good results with SWI. Both of them support some CLP libraries. GNU Prolog is also out there, but I don't know how active development is (please, I said I don't know, not that I thought it was becoming moribund). I've used it a little. It also comes with something of a CLP library.

It looks like you can get an individual license for SICStus for ca. 155 euros. I used it a lot about three years ago and it seemed to be quite stable, had good performance, and we received good support. Of course we were a big corporate customer. Prolog seems to be quite alive and kicking.

(BTW I thought the FFI for Forth was the Forth assembler; have things changed since FIG/F83?)
I didn't have a real PC, just a ZX Spectrum. It wasn't real Forth, just Spectrum Forth. It was kind of fun, but a little disappointing not to be able to do anything useful with it. Well, I wanted to write Space Invaders on it, but not being able to write characters to arbitrary points on the screen was a bit of a show-stopper for that ;-)

Hugh Perkins writes: ...
I didn't have a real PC, just a ZX Spectrum. It wasn't real Forth, just Spectrum Forth. It was kind of fun, but a little disappointing not to be able to do anything useful with it. ...
Oh, Forth on Sinclair was as decent Forth as any Forth. Indirect threaded language, with "paging" of programs, and most of the system written in Forth itself. Nothing to be ashamed of. The Z80 processor was less adapted to this sort of interpreter; machines based on the Motorola 6809 were better suited, since the Forth "inner interpreter" was there slightly more than 1 instruction... It was an excellent processor, much better than the 6502. God knows why the other one made such a career.

In general, the languages on Spectrum, then on Apple, etc., belong to the *proud* history of comp. sci.: we've got Lisp, and Prolog (the micro- ... stuff), APL, and some very exquisite Basics. Of course, also Pascal and C. And even a computer algebra program/language (mu-simp).

So, don't say that you hadn't a "real" PC. It is like saying: "I don't have a real car, only a bicycle". A bicycle is a usable device, sometimes much faster than a car. We won't rekindle the 8-bit machines, but I do not regret passing some time on them. For teaching they were much more useful than mainframes.

But I am afraid that we got very far not only from Haskell, but also from café.

Jerzy Karczmarczuk

Off off off topic: The Z80 DID make it! It was used in many, many game consoles (the best-selling Nintendo Game Boy!) and arcade machines, mostly as a secondary sound synthesiser or IO controller. See http://en.wikipedia.org/wiki/Zilog_Z80. Even when only counting the Nintendo Game Boy, the CPU got sold >100 million times... So now to get back on the real topic, we should port Haskell to the Z80 ;-) jerzy.karczmarczuk@info.unicaen.fr wrote:
Hugh Perkins writes: ...
I didn't have a real PC, just a ZX Spectrum. It wasn't real Forth, just Spectrum Forth. It was kind of fun, but a little disappointing not to be able to do anything useful with it. ...
Oh, Forth on Sinclair was as decent Forth as any Forth. Indirect threaded language, with "paging" of programs, and most of the system written in Forth itself. Nothing to be ashamed of. The Z80 processor was less adapted to this sort of interpreter; machines based on the Motorola 6809 were better suited, since the Forth "inner interpreter" was there slightly more than 1 instruction... It was an excellent processor, much better than the 6502. God knows why the other one made such a career. In general, the languages on Spectrum, then on Apple, etc., belong to the *proud* history of comp. sci.: we've got Lisp, and Prolog (the micro- ... stuff), APL, and some very exquisite Basics. Of course, also Pascal and C. And even a computer algebra program/language (mu-simp). So, don't say that you hadn't a "real" PC. It is like saying: "I don't have a real car, only a bicycle". A bicycle is a usable device, sometimes much faster than a car. We won't rekindle the 8-bit machines, but I do not regret passing some time on them. For teaching they were much more useful than mainframes. But I am afraid that we got very far not only from Haskell, but also from café. Jerzy Karczmarczuk

On 9/2/07, Andrew Coppin
One of the standard exercises in Prolog is the construction of the meta-interpreter of Prolog in Prolog. While this is cheating, I recommend it to you. It opens eyes.
Ever tried implementing Haskell in Haskell? ;-)
In many respects, Haskell is a much higher-level language than Prolog.
Before you all gasp and go, consider the following argument. In Prolog, you need to pay close attention to the exact order in which things are executed. Consider the standard definition of append/3:

append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

Read as a definition in predicate logic this is fine, but to interpret this as a *program* you have to consider how it will be executed. In particular, using SLD resolution, conjunction (/\, or ',' in Prolog notation) is not commutative as it is in predicate logic. From the clauses, it is clear that the first argument must satisfy

list([]).
list([X|Xs]) :- list(Xs).

but the same is not true of the second and third arguments:

?- append([1,2,3], 4, Zs).
Zs = [1|[2|[3|4]]]

Lee Naish has written in detail on this subject. Another argument in favour of Haskell being high-level is John Hughes' "glue" argument. If you don't know what I mean, go and read "Why Functional Programming Matters".

Hey, that was fun. I have barely written *any* Prolog since I finished my thesis. :-)

cheers, T.
--
Dr Thomas Conway
drtomc@gmail.com
Silence is the perfectest herald of joy: I were but little happy, if I could say how much.
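[For contrast, here is a minimal Haskell sketch of the same function (an editorial illustration, not part of Thomas's message): the direction of data flow is fixed by the type, so there is no mode analysis to think about, and laziness even lets the first argument be infinite.

    -- The standard list append, structurally identical to the Prolog
    -- clauses, but usable in only one "mode": arguments in, result out.
    append :: [a] -> [a] -> [a]
    append []     ys = ys
    append (x:xs) ys = x : append xs ys
]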

On Wed, 2007-09-05 at 13:21 +1000, Thomas Conway wrote:
On 9/2/07, Andrew Coppin
wrote: One of the standard exercises in Prolog is the construction of the meta-interpreter of Prolog in Prolog. While this is cheating, I recommend it to you. It opens eyes.
Ever tried implementing Haskell in Haskell? ;-)
In many respects, Haskell is a much higher-level language than Prolog. Before you all gasp and go, consider the following argument. In Prolog, you need to pay close attention to the exact order in which things are executed.
That's because Prolog is -ugly-. The only reason I recommend it is because it's archetypical and there aren't any other logic languages with anywhere near the mindshare/significance. For a thing of sheer beauty, see, e.g. LolliMon.

On 9/5/07, Derek Elkins
That's because Prolog is -ugly-. The only reason I recommend it is because it's archetypical and there aren't any other logic languages with anywhere near the mindshare/significance. For a thing of sheer beauty, see, e.g. LolliMon.
Oh, look, I quite like Prolog in some respects. Especially Nu-Prolog which has safe negation. I also highly recommend learning it. It will enrich the way you think about problems, especially if you get to know it well enough to understand how logic variables are implemented, etc. T. ps I feel obliged to put in a good word for Mercury which I worked on, along with a few other denizens in this forum. See www.mercury.cs.mu.oz.au. -- Dr Thomas Conway drtomc@gmail.com Silence is the perfectest herald of joy: I were but little happy, if I could say how much.

On Wed, Sep 05, 2007 at 01:21:52PM +1000, Thomas Conway wrote:
but to interpret this as a *program* you have to consider how it will be executed. In particular, using SLD resolution, conjunction (/\, or ',' in Prolog notation) is not commutative as it is in predicate logic.
I've always wondered why Prolog uses DFS, instead of some complete method like DFID or Eppstein's hybrid BFS... having to worry about clause order seems so out of place. Stefan

On 9/5/07, Stefan O'Rear
I've always wondered why Prolog uses DFS, instead of some complete method like DFID or Eppstein's hybrid BFS... having to worry about clause order seems so out of place.
Well, a couple of reasons are pretty well agreed in the Prolog community:

1. Order of side-effects.
2. Efficiency of implementation.

and arguably

3. Hysterical Raisins.

--
Dr Thomas Conway
drtomc@gmail.com
Silence is the perfectest herald of joy: I were but little happy, if I could say how much.
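[To connect this back to Haskell, a small sketch (not from the thread): the list monad's (>>=) searches depth-first exactly like Prolog's clause selection, so alternative order matters in the same way clause order does.

    -- Depth-first enumeration: the inner generator is exhausted before the
    -- outer one advances, so with infinite alternatives some answers are
    -- never produced -- the analogue of an unfortunate clause ordering.
    pairsDFS :: [(Integer, Integer)]
    pairsDFS = do
      x <- [0 ..]
      y <- [0 ..]
      return (x, y)        -- yields (0,0), (0,1), (0,2), ...; never (1,0)

    -- A fair interleaving of two answer streams, the basic move behind
    -- complete search strategies:
    interleave :: [a] -> [a] -> [a]
    interleave []     ys = ys
    interleave (x:xs) ys = x : interleave ys xs

Fair, complete search of this kind is what the LogicT monad transformer of Kiselyov et al. packages up behind a monadic interface.]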

Because you can play very clever tricks with DFS to make it efficient, in both time and space.
On 9/5/07, Stefan O'Rear
On Wed, Sep 05, 2007 at 01:21:52PM +1000, Thomas Conway wrote:
but to interpret this as a *program* you have to consider how it will be executed. In particular, using SLD resolution, conjunction (/\, or ',' in Prolog notation) is not commutative as it is in predicate logic.
I've always wondered why Prolog uses DFS, instead of some complete method like DFID or Eppstein's hybrid BFS... having to worry about clause order seems so out of place.
Stefan

jerzy.karczmarczuk@info.unicaen.fr wrote:
Anyway, I believe strongly that ALL people who have problems with the Haskell protocol, and they are numerous, I teach a good sample of them, should be encouraged to learn Prolog. IN DEPTH, and I mean it, Andrew Coppin and Peter Hercek! In Prolog A=B is the unification, which is a bit more than equality, and something much more aggressive than an assignment. When you REALLY understand unification, it will be easier to see the lazy instantiation of the Haskell assignment, and, additionally, it becomes much easier to understand the automatic inference of types, which sooner or later must be harnessed by all Haskell programmers...
I did once try to learn Prolog. And failed. Miserably. I just couldn't bend my head around how the Prolog interpreter manages to make seemingly "impossible" leaps of deduction. (It's a *machine*! How can it deduce arbitrarily complex conclusions from any arbitrary set of axioms? That requires *intelligence*!) And yet, in other, seemingly identical cases, it utterly fails to deduce patently *obvious* results... really weird! And then I read a book. A golden book. (No, seriously. The cover is gold-coloured.) It was called "The Fun of Programming". And it demonstrates how to write a Haskell program that performs exactly the same "impossible" feats. And now, finally, it makes sense. (I still have no idea what the hell all that business with the "cut" operator is though...)

Andrew Coppin wrote:
OK, so it's only tangentially related, but... do you have *any idea* how many times I've written something like
let x = (some complex function of x) in (some other complex function of x)
when in fact what I *meant* to do was type x' instead of x?!
I try not to use primes (x', x'', etc.) on variables for exactly this reason, and instead try to use more descriptive names, such as "newx", or "y", or whatever. Of course you can still make typing mistakes, but that's always the case... -Paul

Peter Hercek wrote:
So the question is what am I missing? Any nice use cases where fixed point search is so good that it is worth the trouble with figuring out new and new variable names for essentially the same stuff?
When I write functional code, I do find myself writing recursions much more often than writing imperative-wannabe assignments. I appreciate that Haskell's "let" defaults to recursion. I don't appreciate that OCaml makes a distinction between "let" and "let rec", since every time I change a non-recursive definition to a recursive one, I am prone to forget to change "let" to "let rec"; IOW it is a hidden hazard to maintenance and evolution.

When I write imperative code in Haskell, the notation is so different from functional code that "let" doesn't even come into the equation. When I write imperative code in imperative languages, my mental model treats "x:=x+1" as "x'=x+1 and y'=y and z'=z and ...", following several treatises on imperative semantics(*). Going back to functional programming, when I do write imperative-wannabe assignments, I totally like having names x, x', x'', etc., since they're in my head anyway. Underlying all this is probably the soberness of recognizing that "=" is not ":=".

(*) Such as:
Eric C. R. Hehner, "A Practical Theory of Programming". First edition, Springer, 1993. Current edition at http://www.cs.toronto.edu/~hehner/aPToP/
C. A. R. Hoare and He Jifeng, "Unifying Theories of Programming". Prentice Hall, 1998.
The Z specification language.
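[As a concrete illustration of what the recursive default buys (a sketch of my own, not Albert's): cyclic and self-referential data can be defined directly, with no fix and no separate let rec keyword.

    -- An infinite list of ones, tied into a knot by the recursive let:
    ones :: [Int]
    ones = let xs = 1 : xs in xs

    -- The Fibonacci stream, defined in terms of itself:
    fibs :: [Integer]
    fibs = let fs = 0 : 1 : zipWith (+) fs (tail fs) in fs

take 5 ones gives [1,1,1,1,1], and take 6 fibs gives [0,1,1,2,3,5]; neither definition is even expressible without the recursive reading of let.]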

F# and Concurrent Clean introduced special syntax for doing this. Basically they just invent new names for you. In Haskell (warning: I'm a newbie, so take this with a grain of salt), I guess you just use monads if you want to pass a value from one function to another under some context, or you could just make your own much simpler little combinator like:

infixl 0 \>  -- I just took the first weird symbol combination that came to mind, this does not mean anything (I hope ;-)
x \> fx = fx x

f x = x * scale     \> \x ->
      x + transform \> \x ->
      g x

Like this you don't have to invent new names, and you don't have to type much more. I'm sure this silly sequencing operator must already exist in the library somewhere?

-----Original Message-----
From: haskell-cafe-bounces@haskell.org On Behalf Of Peter Hercek
Sent: Thursday, August 30, 2007 6:18 PM
To: haskell-cafe@haskell.org
Subject: [Haskell-cafe] let and fixed point operator
. . .

On Thu, 30 Aug 2007, Peter Verswyvelen wrote:
infixl 0 \> -- I just took the first weird symbol combination that came to mind, this does not mean anything (I hope ;-)
x \> fx = fx x
f x = x * scale \> \x -> x + transform \> \x -> g x
like this you don't have to invent new names, and you don't have to type much more.
I'm sure this silly sequencing operator must already exist in the library somewhere?
Sure, its name is (>>=). Use it with the Identity monad, as mentioned by Derek Elkins earlier in this thread.
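[To spell that out, here is a small sketch (an illustration with made-up scale, transform and g, not code from the thread) of the earlier pipeline written with (>>=) in the Identity monad:

    import Control.Monad.Identity (Identity, runIdentity)

    scale, transform :: Double
    scale = 2          -- placeholder values, just to make this compile
    transform = 1

    g :: Double -> Double
    g = negate         -- placeholder for the final transformation

    f :: Double -> Double
    f x0 = runIdentity $
           return (x0 * scale)    >>= \x ->
           return (x + transform) >>= \x ->
           return (g x)

Each \x -> rebinds the name, shadowing the previous binding, which is the effect the original chain of lets was after. Much later, the \> combinator itself entered the standard library as (&) in Data.Function.]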
participants (30)
- ajb@spamcop.net
- Albert Y. C. Lai
- Andrew Cheadle
- Andrew Coppin
- Benjamin Franksen
- Bill Wood
- Brandon S. Allbery KF8NH
- Brent Yorgey
- Chaddaï Fouché
- Dan Piponi
- David Roundy
- Derek Elkins
- Hans van Thiel
- Henning Thielemann
- Hugh Perkins
- jerzy.karczmarczuk@info.unicaen.fr
- Johan Grönqvist
- Jules Bean
- Ketil Malde
- Lennart Augustsson
- Mitar
- Neil Davies
- ok
- Paul Hudak
- Peter Hercek
- Peter Verswyvelen
- Stefan O'Rear
- Sterling Clover
- Thomas Conway
- Tomasz Zielonka