Re: [Haskell] What's up with this Haskell runtime error message:

Thanks so much for your help. I should have made clear that I was aware that
the definitions were mutually dependent. What I was hoping was that Haskell
could solve this for me without my having to resort to effectively finessing
any sequencing considerations.
Perhaps I am really asking it to do too much.
This I thought might be reasonable since one is supposed to be achieving a
sequence-less style of programming. But this would seem to be a counter
example where I will be essentially forced to implement a sequential
processing semantic in a language environment which ostensibly would deny me
such (for my own good I understand).
Thoughts?
On 4/6/06, Garrett Mitchener wrote:
I came up with a system of coloring -- you'll have to view this message as html to see it. You start with the input parameters -- those are green. Anything defined in terms of green is blue. Anything defined in terms of green & blue is purple. Anything defined in terms of green, blue, and purple is orange, etc. Then you can see the problem. So in foo, everything checks out: you get to orange, there are just a couple of expressions left, and you can see that there's no loop.
-----------------------------------------------------------------------------------------------------------
foo (step, r0, mu0) = bar (step, r1, r0, mu1, mu0)
  where
    r1  = r0 - step*rd
    mu1 = mu0 - step*mud
    rd  = c*c*mu0
    mud = c*c/r0 - (foobar_r z)/c
    c   = baz z
    z   = 6.378388e6 - r0

baz z | z < 125   = -0.25*z + 1537.5
      | otherwise = 0.0169*z + 1504.1

foobar_r z | z < 125   = 0.25
           | otherwise = -0.0169
But when you try coloring bar, you get this far and then you're stuck. The problem is clear: r depends on rdc, which depends on rd0, which depends on c0, which depends on z0, which depends on r. So your definition for r includes a loop.
bar (step, r2, r1, mu2, mu1) = (r, z0) : bar (step, r1, r, mu1, m)
  where
    r    = r2 + 2*step*rdc
    m    = mu2 + 2*step*mudc
    rdc  = (rd2 + rd1 + rd0)/6
    mudc = (mud2 + mud1 + mud0)/6
    rd2  = c2*c2*mu2
    mud2 = c2*c2/r2 - (foobar_r z2)/c2
    rd1  = c1*c1*mu1
    mud1 = c1*c1/r1 - (foobar_r z1)/c1
    rd0  = c0*c0*m
    mud0 = c0*c0/r - (foobar_r z0)/c0
    c2   = baz z2
    c1   = baz z1
    c0   = baz z0
    z2   = 6.378388e6 - r2
    z1   = 6.378388e6 - r1
    z0   = 6.378388e6 - r

main :: IO ()
main = do print $ take 100 (foo (0.1, 6.378388e6, 0))

On 2006-04-06 at 11:25EDT "Michael Goodrich" wrote:
Thanks so much for your help. I should have made clear that I was aware that the definitions were mutually dependent. What I was hoping was that Haskell could solve this for me without my having to resort to effectively finessing any sequencing considerations.
Perhaps I am really asking it to do too much.
I think so. Haskell doesn't do anything magic; it's a programming language rather than a mathematical amanuensis, so you have to say what the algorithm is. In the kind of thing you were attempting, ⊥ is a valid solution to the equations, as in

    f a = g a + 1
    g b = f b - 1

where, while f x = x+x+1, g x = (x+x+1) - 1 is a valid solution, so are many others, and in particular so is f x = ⊥, g x = ⊥, which also has the merit of being simpler.

If you want an iteration over values to happen, you have to say where to start and how to get from one approximation to the next (if you don't, how would the compiler choose for you?), and Haskell is very good at this, as described in the paper by John Hughes that someone posted earlier.

Jón

--
Jón Fairbairn
Jon.Fairbairn at cl.cam.ac.uk
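Jón's point can be seen directly in GHC. The following is a minimal sketch of my own (not code from the thread): two mutually dependent numeric bindings that any pair of values (v, v) with v = v satisfies, yet whose least fixed point is ⊥.

```haskell
-- Assumed example: x and y are defined only in terms of each other,
-- so demanding either one forces an endless cycle of thunks.
x, y :: Double
x = y + 1
y = x - 1

main :: IO ()
main = print x  -- GHC's runtime typically detects the black hole and
                -- aborts with the "<<loop>>" error rather than a value
```

This is the same <<loop>> error reported at the top of the thread: the runtime notices a thunk being forced while it is already under evaluation.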

Thanks so much for your help. I should have made clear that I was aware that the definitions were mutually dependent. What I was hoping was that Haskell could solve this for me without my having to resort to effectively finessing any sequencing considerations.
Haskell is a functional language. The problem you are trying to solve sounds like a logic problem, if I understand it correctly. There is a functional logic programming language called Curry [1] that has similar syntax to Haskell but a different evaluation approach. (Note: I don't know anything about Curry other than what I read on the front page of their website, and the fact that someone wrote a Sudoku solver by writing the constraints and having the compiler generate a program that solves the puzzle.)
Perhaps I am really asking it to do too much.
What you want is a program that looks for a set of values that satisfy a certain set of mathematical relations (you mentioned scientific computing). As I understand it, this is where logic programming shines. (If you can turn this into a system of equations, Mathematica might be able to solve it, btw.)
This I thought might be reasonable since one is supposed to be achieving a sequence-less style of programming.
I never really heard Haskell described this way. I've heard Haskell described as a declarative programming language, where you shouldn't worry about the order of execution of your functions, but I never thought it meant anything about complete sequence-less-ness; it seems very rooted in deterministic evaluation.
But this would seem to be a counter example where I will be essentially forced to implement a sequential processing semantic in a language environment which ostensibly would deny me such (for my own good I understand).
In fact, Haskell is big on sequencing. One of the key features of Haskell is the monad [2]; you might call it sequencing done right. I'm not sure how monads would help you (if you can express your code imperatively you are good to go, but it sounds like you want to keep things declarative).
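To make the "sequencing done right" remark concrete, here is a tiny sketch of my own (not from the thread): in the IO monad, do-notation pins down an explicit order of effects, even though pure Haskell expressions carry no evaluation order you can rely on.

```haskell
-- Assumed example: each line of the do-block runs strictly after the
-- previous one, because (>>=) threads the effects in order.
main :: IO ()
main = do
  putStrLn "step 1"
  putStrLn "step 2"  -- guaranteed to print after "step 1"
```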
Thoughts?
Someone who knows about logic programming might point you to good resources about this perspective, if it applies (other than Curry [1]), which looks pretty fun.

Hope that helps,
Jared.

[1] http://www.informatik.uni-kiel.de/~curry/
[2] http://www.nomaware.com/monads/html/

--
http://www.updike.org/~jared/
reverse ")-:"

On Apr 6, 2006, at 11:25 AM, Michael Goodrich wrote:
Thanks so much for your help. I should have made clear that I was aware that the definitions were mutually dependent. What I was hoping was that Haskell could solve this for me without my having to resort to effectively finessing any sequencing considerations.
Perhaps I am really asking it to do too much.
This I thought might be reasonable since one is supposed to be achieving a sequence-less style of programming. But this would seem to be a counter example where I will be essentially forced to implement a sequential processing semantic in a language environment which ostensibly would deny me such (for my own good I understand).
Thoughts?
[snip a bunch of code]

I'm not an expert, but I'll take a crack at this. What you seem to want is a numeric fixpoint, wherein each variable in the mutually recursive set of equations is assigned a value that causes all equations to hold. This is called a fixpoint because it represents a point where the function in n-space generated by the n equations maps to itself.

The notion of a fixpoint usually found in functional programming languages is a little different -- it is specifically the "least fixed point". Now, I'm getting a little out of my depth here, but I think that the Kleene fixpoint theorem tells us that the least fixpoint of the kind of function in the previous paragraph must be bottom -- i.e., non-termination (which GHC can sometimes detect, in which case it prints the <<loop>> error you saw). You can't use the least-fixpoint mechanism that the programming language gives you to calculate those kinds of numeric fixpoints, because the least fixpoint is completely uninteresting and not what you want.

In Haskell, the standard technique for this kind of thing is to calculate an "infinite" list of successive approximations to the answer and to choose an element from the list according to some criterion (usually absolute or relative convergence of successive elements). You can easily build such a list with 'Data.List.iterate'. This should work for all attractive fixpoints of the function when given an appropriate initial approximation (modulo floating-point rounding errors, as always).

Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
  -- TMBG
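The successive-approximation technique Rob describes can be sketched like this (an example of my own, solving x = cos x rather than the original system; 'converge' and 'fixpoint' are names I've made up):

```haskell
-- Walk a list of successive approximations and stop when two
-- consecutive elements agree to within a tolerance eps.
converge :: Double -> [Double] -> Double
converge eps (a:b:rest)
  | abs (b - a) < eps = b
  | otherwise         = converge eps (b:rest)
converge _ _ = error "converge: list too short"

-- Build the infinite approximation list with 'iterate' from the
-- Prelude, starting at an initial guess x0.
fixpoint :: (Double -> Double) -> Double -> Double
fixpoint f x0 = converge 1e-12 (iterate f x0)

main :: IO ()
main = print (fixpoint cos 1.0)  -- converges to roughly 0.7390851332
```

For the original problem, f would be one sweep of the mutually recursive equations, updating all the unknowns from the previous approximation -- exactly the "one step late" trick, but made explicit and iterated to convergence.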

Thank you all for your excellent comments. I was aware of Hughes's technique and the 'whyfp' paper. I had often thought that some kind of iteration would be necessary to resolve this, and initially, to overcome it, I was cheating and using one of the values one step late in order to break the closed cycle of references, in my C and Prolog prototype versions. This cheat bothered me to the extent that I decided to fix it, and that is when I decided that I might be missing the boat, and that Haskell might have a built-in solution to closed sets of references. But I see now in retrospect that this was a misguided (could I say silly? ;-) notion and that indeed some kind of numerical algorithm will have to be effected to get the 'pure' solution.

Again, many thanks to all; I learned something valuable. My faith in Haskell and FP has been restored (assuming it ever did actually waver).

cheers,
-Mike
participants (4)
- Jared Updike
- Jon Fairbairn
- Michael Goodrich
- Robert Dockins