
What material benefit does Haskell derive from being a "pure" functional language as opposed to an impure one? Here is my list of the benefits of purity (some of them are enhanced by other features, like the type system).
Purity means referential transparency: the programmer has no way to modify pure data. That prunes the tree of possible programmer errors. In particular, since there are no mutable variables, every state change must take the form of a call to a function with new parameters. No function changes state (except when needed, and then the type system labels the stateful code as such). This enforcement goes in the right direction for clarity, readability, maintainability, and modularity.
Purity also eases parallel processing: since no data is modified, each process is better isolated, and there are fewer opportunities for errors.
Purity permits laziness: because the execution tree of an expression contains only pure data, it can be evaluated later, since it will not have changed in the meantime.
Lazy evaluation eases mathematical reasoning, because mathematics has no notion of eager evaluation; it simply makes use of mathematical equations when a calculation is needed.
Mathematical reasoning permits the full use of a long tradition of mathematical knowledge. This makes code simpler, more understandable, more general, proof-guaranteed, and elegant (for those who know the mathematical domain). It also permits high-level optimization of the code, both by the programmer and by the compiler.
For sure there are a few more. We are superstitious and guided by "nonrational" ideas such as beauty, but: beauty -> (simplicity -> (less effort to master, fewer errors), utility -> (solve more problems, solve greater problems)).
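To make the point about the type system labelling stateful code concrete, here is a minimal sketch (the function names `double` and `bump` are made up for illustration):

```haskell
import Data.IORef

-- A pure function: same input, same output, and no way to modify
-- its argument. Its type promises the absence of side effects.
double :: Int -> Int
double x = x * 2

-- Stateful code is labelled as such by its type: the IO in the
-- signature marks this as code that touches mutable state.
bump :: IORef Int -> IO Int
bump ref = do
  modifyIORef ref (+ 1)
  readIORef ref

main :: IO ()
main = do
  print (double 21)   -- always 42: referential transparency
  ref <- newIORef 0
  n   <- bump ref
  print n             -- 1
```

The compiler rejects any attempt to call `bump` from pure code, which is exactly the enforcement described above.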

One more advantage that is not frequently cited:
Purity permits every parameter of a procedure to be passed by reference (giving
the pointer) rather than by value (giving a copy), while still being sure that
the data has not been modified. Besides the safety, this is a great language
optimization in itself.
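A small illustration of that sharing, assuming GHC's usual behaviour of reusing values rather than copying them (the names `original` and `extended` are made up for the example):

```haskell
-- Purity lets the runtime pass data by reference and share it.
-- Consing onto a list reuses the whole old list as the new tail;
-- nothing is copied, because nothing can ever mutate `original`.
original :: [Int]
original = [1, 2, 3]

extended :: [Int]
extended = 0 : original   -- shares `original` as its tail

main :: IO ()
main = do
  print extended   -- [0,1,2,3]
  print original   -- [1,2,3], guaranteed untouched
```

In a language with mutable lists, handing out the same list twice like this would be unsafe; purity is what makes the sharing free.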
2009/12/10 Alberto G. Corona

And that would be true if everything were strict and not partially evaluated
sometimes :-)
My understanding is the following... (and I could be way off)
Remember that a function of arity N is really N functions of arity 1, with
their arguments bound one at a time to create a new function along the way.
At least you *can* write your code that way if you really want to, but
laziness might mean that not all the parameters are bound, because they are
not "needed" at that moment. Instead a thunk, or what some other languages
might call a "future", is put in the argument's place, to be evaluated when
needed.
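Both halves of that can be seen in a few lines (a sketch; `addFive` and `constTen` are made-up names):

```haskell
-- Arity 2 is really arity 1 twice: arguments bind one at a time.
add :: Int -> Int -> Int
add x y = x + y

addFive :: Int -> Int
addFive = add 5          -- only the first argument is bound

-- Laziness: the argument arrives as an unevaluated thunk.
-- `undefined` would crash if forced, but it never is.
constTen :: a -> Int
constTen _ = 10

main :: IO ()
main = do
  print (addFive 7)            -- 12
  print (constTen undefined)   -- 10: the thunk is never evaluated
```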
If Haskell were strict by default, I think your claim of passing references
around could be true, but my experience with Haskell has been that sometimes
it's too lazy for me to write the code that I first thought would be
efficient, without a lot of study and refactoring of that code.
I'm sure this gets easier with practice, but it's not something I was
expecting to be as difficult as it all was.
Dave
On Thu, Dec 10, 2009 at 5:57 AM, Alberto G. Corona
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

David, think of the machine as being the earth, and laziness as being in the clouds. Strict evaluation is closer to the machine. The relationship between a lazy algorithm and the earth is abstract; hence, it makes creating algorithms, especially efficient ones, difficult. All of this is still a work in progress. The Haskell creed appears to be, "This is the way, so stick to it!" The idea appears to be that by sticking to the program, the problems will be overcome in time and we will be left with all the glorious goodness. At one time it was not possible to achieve the sort of efficiency that Haskell achieves as a matter of course.

I understand that this is very much a work in progress. But we also have to
come to the realization that there are people forming "industrial groups" and
such around Haskell, trying very earnestly to show that it's worth
looking into for serious practical applications.
I do believe it's important to point out where work still needs to be done
to meet all the goals that Haskell wants to achieve, so that the
selling points aren't construed by the audience as disingenuous claims.
Now having said that, this is *not* meant to be a slap in the face of those
who want Haskell to be used in a practical way *now*. I am in fact one of
them, having created code that has shipped and will continue to ship (at
least until I'm forced to rewrite it... let's hope not) in Haskell in
management systems that may, if our plans work out, be deployed potentially
all over the world.
Dave
On Thu, Dec 10, 2009 at 6:50 AM, John D. Earle

On Thu, Dec 10, 2009 at 9:50 AM, John D. Earle
I think this is a bit of a misnomer. My perception of a "Haskell creed", if there really is one, is something more along the lines of: "Purity (also laziness, amongst other things) gives us some desirable properties. Let's see how far we can go with these ideas and if we can use them to solve practical problems."
The Haskell community is not trying to shove these ideas down your throat. We're just interested in exploring them. Before monadic I/O was introduced, the absence of side effects made practical applications clumsy if not impossible [1]. But the Haskell researchers persisted. It just so happens that now the industry seems to be taking note of what the Haskell community has accomplished with careful adherence to these ideas.
[1] http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-hask...

On Dec 11, 2009, at 3:50 AM, John D. Earle wrote:
David, think of the machine as being the earth and laziness is in the clouds.
It reads so much better as "laziness is in the heavens".
Strict evaluation is closer to the machine.
It doesn't have to be. Graph reduction hardware has been built in the past and could be again.
The relationship between a lazy algorithm and the earth is abstract; hence, it will make creating algorithms especially efficient ones difficult.
All programming (except possibly assembly coding, and given the existence of optimising assemblers, even that) is abstract. In fact it had better be abstract, because I don't want to write different programs for SPARCs, PowerPCs, x86s, &c. I'd really rather not even write different programs for 32-bit and 64-bit environments. Given the massive amounts of stuff happening in modern processors (like the dynamic register renaming &c in x86s, deep write buffers, out-of-order execution, short SIMD vector instructions), even simple C code is a *long* way from what is really happening. To use your analogy, if the machine is the earth, C programming is floating around in a balloon _just under_ the clouds.
Paradoxically, working at an abstract level can make creating efficient algorithms MUCH EASIER. Recently, I was looking at some Java code from a new book about numerical algorithms for scientists and engineers in Java. Because I have lots of uses for the Singular Value Decomposition, I chose to look at the SVD code.
    Version                  speed   reason
    Java from book             1
    rewritten in C             2     a[i][j] doesn't indirect twice
    transpose the matrices     4     goes with the grain of memory
    use LAPACK                 7     algorithm, blocking for cache?
Working at an abstract level ("I want (u,s,v) = svd a") instead of rolling my own low-level code means that I can get _faster_ code. (There's no reason why Java couldn't call LAPACK through JNI, and if I had much numeric code to do in Java, that's what I'd do.)
With the SVD (and similar things) as basic building blocks, it's much easier to develop interesting algorithms. For example, if I wanted to write my own code for Correspondence Analysis, it would be far more sensible to develop the algorithm in R (the free S) or Matlab or Octave, and _not_ write my own array loops, than to write in C. I'd probably get something much faster.
Oh, and there's a new SVD algorithm being worked on for LAPACK, which is available for real numbers only in the current LAPACK release. Supposedly it's not only more accurate but faster. Change one line in my code to call that, and I get a very nice benefit from abstraction!
All of this is still a work in progress. The Haskell creed appears to be, "This is the way, so stick to it!"
There is no Haskell "creed". People use Haskell for different reasons. Many Haskell programmers are totally pragmatic about it. Remember, PROGRAMMING IS HARD. If Haskell lets people _write_ correct programs faster, that's an enormous advantage, even if they don't _run_ faster. Once you have something actually working, you can worry about speed. The Java code I mentioned above came from a *recently* published book. Here's someone willing to put up with a factor of 7 loss of performance in order to get the other benefits of Java and persuading a publisher that a lot of other people will be happy with it too.
The idea appears to be that by sticking to the program the problems will be overcome in time and we will be left with all the glorious goodness. At one time it was not possible to achieve the sort of efficiency that Haskell achieves as a matter of course.
Abstraction makes it comparatively easy to switch over from String to ByteString. Purity means that taking advantage of multicore machines is already much easier than in C (with pthreads), C++ (with TBB), or Java (with itself). To be honest, I don't expect getting high performance out of Haskell to ever be easy. But then, getting high performance out of C or Fortran isn't easy either.
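A tiny sketch of that last point, using only threads from base (`sumTo` is a made-up example function; with GHC's -threaded runtime the two evaluations can actually run on separate cores):

```haskell
import Control.Concurrent

-- A pure function: evaluating it needs no locks, because it
-- cannot touch any shared mutable state.
sumTo :: Int -> Int
sumTo n = sum [1 .. n]

main :: IO ()
main = do
  a <- newEmptyMVar
  b <- newEmptyMVar
  -- Two threads evaluate pure expressions in complete isolation;
  -- the MVars only carry the finished results back.
  _ <- forkIO (putMVar a $! sumTo 1000000)
  _ <- forkIO (putMVar b $! sumTo 2000000)
  x <- takeMVar a
  y <- takeMVar b
  print (x + y)
```

Because `sumTo` is pure, no interleaving of the two threads can change the answer; that determinism is exactly what makes parallelism easier here than in C with pthreads.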
participants (5)
- Alberto G. Corona
- David Leimbach
- John D. Earle
- MightyByte
- Richard O'Keefe