Clarity and complexity of Haskell code?

I always enjoy and tout the clarity and simplicity of the declarative style of functional programming, and with that also Haskell. But it seems that although wonderfully short and clear examples dominate early learning and usage, forums like this one and Haskell-café are dominated by examples more like this:

    import Data.Enumerator (run_, ($$), (=$))
    import Data.Enumerator.Binary (enumHandle, iterHandle)
    import qualified Data.Enumerator.List as EL (map)
    import qualified Data.ByteString as B (map)
    import Data.Bits (complement)
    import System.IO (withFile, IOMode(..))

    main = withFile "infile" ReadMode $ \inh ->
           withFile "outfile" WriteMode $ \outh ->
             run_ (enumHandle 4096 inh $$
                   EL.map (B.map complement) =$ iterHandle outh)

And many more like it; to me it looks more like APL or Perl. :) Certainly not the Python-ish model of "anyone can read this, and it is clear what it does".

It seems the assertion that FP is easier and clearer holds at introductory levels, but that there are some big gradients further along the learning and usage curve. I use Haskell only in very simple and small programs, so I wondered what more experienced experts have observed about this.

I wonder if any studies have been done to quantify and measure the complexity of Haskell programs, as a way to assess this property.

On Sun, Sep 25, 2011 at 10:25:25AM -0500, Gregory Guthrie wrote:

> I always enjoy and tout the clarity and simplicity of the declarative style of functional programming, and with that also Haskell.
> But it seems that although wonderfully short and clear examples dominate early learning and usage, forums like this one and Haskell-café are dominated by examples more like this:
>
>     import Data.Enumerator (run_, ($$), (=$))
>     import Data.Enumerator.Binary (enumHandle, iterHandle)
>     import qualified Data.Enumerator.List as EL (map)
>     import qualified Data.ByteString as B (map)
>     import Data.Bits (complement)
>     import System.IO (withFile, IOMode(..))
>
>     main = withFile "infile" ReadMode $ \inh ->
>            withFile "outfile" WriteMode $ \outh ->
>              run_ (enumHandle 4096 inh $$
>                    EL.map (B.map complement) =$ iterHandle outh)
>
> And many more like it; to me it looks more like APL or Perl. :) Certainly not the Python-ish model of "anyone can read this, and it is clear what it does".
The promise of FP is certainly not that "anyone can read this and it is clear what it does"; that is not really possible in any paradigm. However, the above code does indeed seem wonderfully short and clear to me: it opens an input file and an output file, then streams the contents of the input file to the output file while complementing every byte. You must at least agree that it is short. Clarity depends a lot on your background; if you tried writing this program in most mainstream languages, I think you would find the result longer and more complex.
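For a concrete sense of what the chunked loop looks like without the enumerator machinery, here is a minimal sketch in plain imperative-style Haskell IO (the helper name copyLoop is illustrative, not from the thread):

    import qualified Data.ByteString as B
    import Data.Bits (complement)
    import System.IO (withFile, IOMode(..), Handle)

    -- Read 4096-byte chunks, complement every byte, and write the
    -- result, until hGet returns an empty chunk at end-of-file.
    copyLoop :: Handle -> Handle -> IO ()
    copyLoop inh outh = do
      chunk <- B.hGet inh 4096
      if B.null chunk
        then return ()
        else do B.hPut outh (B.map complement chunk)
                copyLoop inh outh

    main :: IO ()
    main = withFile "infile" ReadMode $ \inh ->
           withFile "outfile" WriteMode $ \outh ->
             copyLoop inh outh

This is essentially the loop that enumHandle/iterHandle package up for reuse; the enumerator version factors the reading, transforming, and writing into independently composable pieces.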
> It seems the assertion that FP is easier and clearer holds at introductory levels, but that there are some big gradients further along the learning and usage curve. I use Haskell only in very simple and small programs, so I wondered what more experienced experts have observed about this.
I think the perception of a big gradient in the learning and usage curve is often due to the challenge of learning an entirely new paradigm. The learning and usage curve was quite steep for the first programming language each of us learned, but we have mostly forgotten about it by now.
> I wonder if any studies have been done to quantify and measure the complexity of Haskell programs, as a way to assess this property.
Not that I am aware of. -Brent

Hi.
On 25 September 2011 18:10, Brent Yorgey wrote:

> You must at least agree that it is short.
Not trying to start language wars here, but it is not terribly short for what it does. The following C# code does the same thing and isn't far longer. It has a more or less one-to-one correspondence to the given Haskell code: open a file for reading, open a file for writing, read some number of bytes, apply the transformation, and write the result to the output file. Flushing the input/output buffers and closing the files are handled by the using construct, similar to withFile in the Haskell example.

    int chunksize = 4096;
    using (var r = new BinaryReader(File.OpenRead("infile")))
    using (var w = new BinaryWriter(File.OpenWrite("outfile")))
        for (var buffer = r.ReadBytes(chunksize);
             buffer.Length > 0;
             buffer = r.ReadBytes(chunksize))
            w.Write(Array.ConvertAll(buffer, p => (byte) ~p));

I think the habit of using quite a few operators in Haskell does make the learning curve steeper. I am not trying to say that the C# code is *better*, just that the Haskell code is not terribly short in this case, and it can be a bit cryptic for a newbie.

Best,
Ozgur
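On the operator-density point, much of the cryptic feel disappears if the operator chain is broken into named pieces. A minimal sketch of the same enumerator pipeline (the names complementBytes and copyComplemented are illustrative, not from the thread; the types follow the enumerator package's documentation):

    import Data.Enumerator (run_, ($$), (=$), Enumeratee)
    import Data.Enumerator.Binary (enumHandle, iterHandle)
    import qualified Data.Enumerator.List as EL
    import qualified Data.ByteString as B
    import Data.ByteString (ByteString)
    import Data.Bits (complement)
    import System.IO (withFile, IOMode(..), Handle)

    -- Transform the stream chunk by chunk, complementing every byte.
    complementBytes :: Monad m => Enumeratee ByteString ByteString m b
    complementBytes = EL.map (B.map complement)

    -- Feed the input handle through the transformer into the output
    -- handle: source $$ (transformer =$ sink).
    copyComplemented :: Handle -> Handle -> IO ()
    copyComplemented inh outh =
      run_ (enumHandle 4096 inh $$ complementBytes =$ iterHandle outh)

    main :: IO ()
    main = withFile "infile" ReadMode $ \inh ->
           withFile "outfile" WriteMode $ \outh ->
             copyComplemented inh outh

The operators then read as plumbing: ($$) connects a source to a consumer, and (=$) fuses a transformer onto a sink.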

Haskell is designed for heavy computational lifting, and your example does not involve any major computational work.

So, of course, Haskell code will not be short in ALL cases. :)

As the guy who wrote the original snippet: I could have done this in an imperative style using lazy IO, and it would have been clearer. But this way is simply superior in terms of resource usage and extensibility. Someone wanted to know the best way to do this in Haskell, and I think this is it. It is no different from using a library in any other language.

I will admit it took me a while to learn how to use enumerator. But once I did, I found it an absolutely amazing way to think about many of the problems I commonly work on. What I like about it is that in this case, I only needed to do one thing to the data. This took me about two minutes to write, and it worked on the first try. If there were a database system that used the enumerator library, I would be able to do much of my daily workload in Haskell.

Do I wish the enumerator library were a little easier to use? Yes I do. Is that even possible? I don't know.
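For reference, the lazy-IO version mentioned above could be as short as this minimal sketch (it gives up the enumerator version's control over resource usage, which is exactly the trade-off being discussed):

    import qualified Data.ByteString.Lazy as BL
    import Data.Bits (complement)

    -- Lazily stream the input file to the output file,
    -- complementing every byte; chunking happens behind the scenes.
    main :: IO ()
    main = BL.readFile "infile" >>= BL.writeFile "outfile" . BL.map complement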

On Sun, Sep 25, 2011 at 10:05 PM, Ozgur Akgun wrote:
> [...]
Note that Ozgur's C# code is already pretty close to FP: the using construct is declarative, and the transformation is a closure passed to a higher-order function (Array.ConvertAll). The only genuinely imperative part is the for loop, and that loop is precisely what the iteratees/enumerator replace, which is where the main difficulty of the Haskell equivalent lies.

-- Jedaï
participants (6):

- Brent Yorgey
- caseyh@istar.ca
- Chaddaï Fouché
- David McBride
- Gregory Guthrie
- Ozgur Akgun