Tutorial: Haskell for the Evil Genius

I've gotten mixed feedback from Reddit for my tutorial. It provides an overview of how functional and declarative programming in Haskell empower baddies, increasing the accuracy and efficiency of their atomic superweapons. What do you guys think of my tutorial, Haskell for the Evil Genius (http://www.yellosoft.us/evilgenius/)?

Cheers,
Andrew Pennebaker
www.yellosoft.us

On Fri, Sep 14, 2012 at 12:13:15PM -0400, Andrew Pennebaker wrote:
I've gotten mixed feedback from Reddit for my tutorial. It provides an overview of how functional and declarative programming in Haskell empower baddies, increasing the accuracy and efficiency of their atomic superweapons. What do you guys think of my tutorial, Haskell for the Evil Genius http://www.yellosoft.us/evilgenius/?
Hopefully you'll get more feedback here, although my recent post here soliciting feedback netted me nothing. FWIW, my feedback is below.

Alex

Under Declarative, you aren't creating a "named expression, 2 + 2", really. You are redefining (+).

Under Lazy, your example of binding fib 30 is not a good example of memoization. With memoization you typically call the underlying computation the first time, and memoize it for repeated retrieval later, not hardwire in values at compile time. Here you never ever call the real fib at all. On top of everything else, it'd be too easy to introduce a typo into one of your hardwired constant values.

Under Infinite, you should use "sieve (n:ns)" pattern matching instead of calling head.

Under Type-Safe: Subtle distinction, but returning () is not the same as returning nothing at all.

s/ommitted/omitted/

You've got an unusual indentation scheme. Consider adopting a more standard one for your tutorial.

In general, monotonically decreasing is not sufficient to prove you will hit a base case. For example, decreasing by 5 would still be monotonically decreasing, and could jump right over your base cases. (Not arguing that the code is incorrect, but rather that your explanation is a little misleading/incomplete.)

Again, further confusion about what memoization really is.

Under Records: "Functions are already defined by their data structures; they are already semantically bundled..." doesn't seem to make sense. "... acts on the specific constructor, blasting fools, murdering crowds..." makes it sound like fireOn actually has side effects.

A summary of the changes I've included so far:
Under Declarative, you aren't creating a "named expression, 2 + 2", really. You are redefining (+).
Noted and reflected in the new version.
Under Lazy, your example of binding fib 30 is not a good example of memoization. With memoization you typically call the underlying computation the first time, and memoize it for repeated retrieval later, not hardwire in values at compile time. Here you never ever call the real fib at all. On top of everything else, it'd be too easy to introduce a typo into one of your hardwired constant values.
Noted and reflected in the new version. After several comments to this effect, I do not want to misrepresent memoization in the tutorial. Sometimes it is useful to be slightly inaccurate in a tutorial in order to help bridge the gap between someone with no experience in something versus the wealth of knowledge and complexity in the thing itself. This is not one of those times, and fortunately, fixing the misrepresentation in my tutorial is as easy as removing the hardcoded call.

One thing I want to double check is that Haskell does, in fact, automatically memoize all pure function calls. Is this true? I would still like to provide a performance comparison of the Fibonacci code with and without memoization, for readers who enjoy numerical evidence, using the Unix "time" command, but I don't know how to turn memoization off. I guess I would have to reimplement the algorithm in a way that can't make use of memoization. Any suggestions?

Under Infinite, you should use "sieve (n:ns)" pattern matching instead of calling head.
Thank you! I try to make use of Haskell pattern matching wherever I can. Since I needed to refer to the whole list, as well as the head and tail, I originally used "head" instead of pattern matching. Noted and reflected in the new version.
Under Type-Safe: Subtle distinction, but returning () is not the same as returning nothing at all.
True. Noted and reflected in the new version. What's the Haskell name for () again? I fear explaining the exact type information of IO () may be too much for a brief introduction to Haskell to cover.
s/ommitted/omitted/
Noted and reflected in the new version.
You've got an unusual indentation scheme. Consider adopting a more standard one for your tutorial.
I'm in the camp of hard tabs rather than soft tabs, and that preference is probably responsible for much of the difference in indentation scheme. Unfortunately, HTML is terrible at representing hard tabs in <PRE> code with a custom width preference; they all come out looking like some idiot pressed space bar eight times. I could use JavaScript to remedy this, but for now, I like to directly copy and paste my working code from .HS files into <PRE> tags just in case. If tabs are *not* the issue, then maybe I'm not indenting far enough to the right for some tastes? Or maybe it's my tendency to put "where" on its own line, something a Perl obfuscator would detest. I dunno. If someone would suggest a more idiomatic indentation scheme for my code so that I know exactly what is different, I can take a look.
In general, monotonically decreasing is not sufficient to prove you will hit a base case. For example, decreasing by 5 would still be monotonically decreasing, and could jump right over your base cases. (Not arguing that the code is incorrect, but rather that your explanation is a little misleading/incomplete.)
Understood. Noted and reflected in the new version. Incidentally, when will Nat be available in Haskell? The Fibonacci algorithm is designed to work only over natural numbers, and the best way to express this in the type system is with Nat. But this type doesn't seem available out-of-the-box for Haskell users. Unless my Haskell Platform (GHC 7.0.3) is slightly outdated. Eh?
Again, further confusion about what memoization really is.
Under Records
"Functions are already defined by their data structures; they are already semantically bundled..." doesn't seem to make sense.
Noted and reflected... I'm trying to convey to an audience largely composed of Java and C++ fanatics how Haskell records are much better than OOP, how GADTs are more intuitive, robust, ... OOP doesn't even compare! That's what I'm trying to get across in that bit. And it's hard to do this in just a few sentences, especially when the reader isn't even familiar with GADTs in the first place.

"... acts on the specific constructor, blasting fools, murdering crowds..."
makes it sound like fireOn actually has side effects.
A truly fair point. Would you like to contribute a monad for blowing up the moon?

Another comment:
"As a declarative language, Haskell manipulates expressions, eventually reducing expressions to values."
Huh? In what sense do declarative languages manipulate expressions? Sounds like a classic syntax/semantics confusion, especially when interpreters and/or lazy evaluation (implementation issues, not language properties) are in the mix.
Noted and reflected in the new version. I'm trying to introduce the concept of declarative programming as opposed to imperative programming. Declarative programming, according to Wikipedia (http://en.wikipedia.org/wiki/Declarative_programming), "is a programming paradigm that expresses the logic of a computation without describing its control flow."

I believe this is done in Haskell and other declarative languages by treating code as manipulable, reducible expressions, where imperative languages would use machine instructions.

---

Now, besides the code and the description of the code, I know we're not English majors, but what do you think of the overall tone, pacing, etc.? Is it entertaining? Is it humorous? Is it boring? Is it offensive? Is it too brief or too laborious?

Thanks all for your suggestions.

On 12-09-14 05:18 PM, Andrew Pennebaker wrote:
One thing I want to double check is that Haskell does, in fact, automatically memoize all pure function calls. Is this true?
A simple back-of-envelope calculation that immediately raises doubts: 2 seconds on a 2 GHz computer is 4x10^9 clock cycles, or something between 10^9 and 4x10^9 steps. This sounds more like 2^30 recursive calls to fib, not 30. Even with interpreter inefficiency.

A simple experiment to nail the coffin:

Experiment #1:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = mapM_ (print . fib) (replicate 10 30)

Null hypothesis: fibs are memoized, therefore the first printing takes 2 seconds, the remaining 9 printings are free.

Alternative hypothesis: fibs are not memoized, therefore every printing takes 2 seconds.

Observation: ______________________

Which hypothesis to reject: _______

A few advanced experiments to mess with your mind:

Experiment #2:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = let y = fib 30 in mapM_ print (replicate 10 y)

Question: Is anything memoized here? Which thing?

Experiment #3:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = mapM_ print (replicate 10 (fib 30))

Question: is this like #1? or is this like #2?

Experiment #4:

fib :: Int -> Int
fib n = mem !! n

mem :: [Int]
mem = map aux [0..]
-- remark: even [Int] is not a very efficient data structure for this

aux 0 = 0
aux 1 = 1
aux n = mem!!(n-1) + mem!!(n-2)

main :: IO ()
main = mapM_ (print . fib) (replicate 10 30)

Question: How fast is this compared to #1? Why?

Final remark: Evil geniuses should use the scientific method, not the opinionative method.

Experiment #4:
fib :: Int -> Int
fib n = mem !! n

mem :: [Int]
mem = map aux [0..]
-- remark: even [Int] is not a very efficient data structure for this

aux 0 = 0
aux 1 = 1
aux n = mem!!(n-1) + mem!!(n-2)

main :: IO ()
main = mapM_ (print . fib) (replicate 10 30)
Question: How fast is this compared to #1? Why?
Final remark: Evil geniuses should use the scientific method, not the
opinionative method.
Noted and reflected in the new version. Thank you for taking the time to write out a detailed, practical analysis of the question. Yes, I should assume less and test more. I tried these out (https://github.com/mcandre/mcandre/tree/master/haskell/memfib) as requested, and came up with results demonstrating that Haskell does not, as I mistakenly believed, automatically memoize all function calls. Which makes sense; otherwise memory would quickly run out.

I found some useful reference code on the Haskell Wiki and constructed my own memoized Fibonacci function using the MemoTrie library, which, fortunately, is bundled with the Haskell Platform and therefore does not require the tutorial reader to install additional code.

The new version of the tutorial includes an ordinary recursive Fibonacci function (fib1.hs), and the same code renamed to fib', memoized using the memo function from the MemoTrie library, and exposed as fib (fib2.hs). Unix time information provides real numbers for comparison: the memoized version is clearly much faster.

I rewrote the section, deleting the part that stated memoization was automatic, and added text describing how Haskell makes memoization easy.

I found some useful reference code on the Haskell Wiki and constructed my own memoized Fibonacci function using the MemoTrie library, which, fortunately, is bundled with the Haskell Platform and therefore does not require the tutorial reader to install additional code.
The new version of the tutorial includes an ordinary recursive Fibonacci function (fib1.hs), and the same code renamed to fib', memoized using the memo function from the MemoTrie library, and exposed as fib (fib2.hs). Unix time information provides real numbers for comparison: the memoized version is clearly much faster.
I rewrote the section, deleting the part that stated memoization was automatic, and added text describing how Haskell makes memoization easy.
Which version of the Haskell Platform are you using? I tried running your memoized Fibonacci function that uses the MemoTrie library on Windows XP with Service Pack 3, and received the following error message:
$>./runhaskell fib2
fib2.hs:3:7:
    Could not find module `Data.Memotrie':
      Use -v to see a list of the files searched for.
My version is as follows:
$>./ghci -v
GHCi, version 6.12.3: http://www.haskell.org/ghc/ :? for help
Glasgow Haskell Compiler, Version 6.12.3, for Haskell 98, stage 2 booted by GHC version 6.10.4
Using binary package database: C:\PROGRA~1\HASKEL~1\201020~1.0\lib\package.conf.d\package.cache
hiding package base-3.0.3.2 to avoid conflict with later version base-4.2.0.2
wired-in package ghc-prim mapped to ghc-prim-0.2.0.0-2feb0cb38f65a4827135ada88c34f3ef
wired-in package integer-gmp mapped to integer-gmp-0.2.0.1-72436e28c79d056c87cc0d2d2f9f3773
wired-in package base mapped to base-4.2.0.2-0d1804f62045e52b2e806996d84f5318
wired-in package rts mapped to builtin_rts
wired-in package haskell98 mapped to haskell98-1.0.1.1-b5196101fd7a8c42a8d53bd8033d6765
wired-in package template-haskell mapped to template-haskell-2.4.0.1-401621dedd4a5f07bfd8630247358bf5
wired-in package dph-seq mapped to dph-seq-0.4.0-be069f0bb710922a6ddd4ed2b91e3a6c
wired-in package dph-par mapped to dph-par-0.4.0-b31a0ce10b7c92126978fcc929077ad6
Hsc static flags: -static
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
For reference, my code is as follows:
#!"c:/Program Files/Haskell Platform/2010.2.0.0/bin/" runhaskell
import Data.Memotrie (memo)
fib :: Int -> Int
fib = memo fib'
  where
    fib' :: Int -> Int
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib' (n - 1) + fib' (n - 2)
main :: IO ()
main = do putStrLn (show (fib 30))
Thank you.

-- Benjamin L. Russell
DekuDekuplex at Yahoo dot com
http://dekudekuplex.wordpress.com/
Translator/Interpreter / Mobile: +011 81 90-6526-1406
"Furuike ya, kawazu tobikomu mizu no oto." -- Matsuo Bashō

On Fri, Sep 14, 2012 at 05:18:24PM -0400, Andrew Pennebaker wrote:
A summary of the changes I've included so far:
Under Declarative, you aren't creating a "named expression, 2 + 2", really. You are redefining (+).

Noted and reflected in the new version.
It may not be obvious to readers that when you put (+) or fib in the let clause, you are shadowing the function in the outer scope, so for example, a reader might be surprised at the results of trying

let 2 + 2 = 5 in 1 + 1

(or similar experiments with your "memoized" fib). It's not clear whether you believe you are only overriding behavior for the specific case of 2 + 2, or if you realize that you've completely redefined (+).
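For concreteness, a quick GHCi sketch of that shadowing (the exact exception wording varies by GHC version, so treat the output as illustrative):

Prelude> let 2 + 2 = 5 in 2 + 2
5
Prelude> let 2 + 2 = 5 in 1 + 1
*** Exception: Non-exhaustive patterns in function +

The local binding defines a brand-new (+) whose only equation matches the literal patterns 2 and 2, so 1 + 1 falls through every pattern and crashes, rather than returning 2.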
Noted and reflected in the new version. After several comments to this effect, I do not want to misrepresent memoization in the tutorial. Sometimes it is useful to be slightly inaccurate in a tutorial in order to help bridge the gap between someone with no experience in something versus the wealth of knowledge and complexity in the thing itself. This is not one of those times, and fortunately, fixing the misrepresentation in my tutorial is as easy as removing the hardcoded call.
One thing I want to double check is that Haskell does, in fact, automatically memoize all pure function calls. Is this true?
Generally you store the memoized results in some structure like a list or array defined at the top level. You don't get "automatic" memoization, nor would it be desirable, since it would quickly fill memory with the memoized results of every function call, unless it could intelligently gauge which ones to keep around. Check out the memoization section of the Haskell wiki for more info.
I would still like to provide a performance comparison of the Fibonacci code with and without memoization, for readers who enjoy numerical evidence, using the Unix "time" command, but I don't know how to turn memoization off. I guess I would have to reimplement the algorithm in a way that can't make use of memoization. Any suggestions?
See above. Off is the default -- you need to add an array to get the memoization. The array can be initialized with the lazy list of fibs, which in turn relies upon the array itself to retrieve memoized values. The downside to this approach is that you need to define your array bounds upfront. Alternately, you could memoize more simply (and without bounds) by putting the values into a top-level list; however, the values get more expensive to access the deeper they are in the list.
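For concreteness, here is a minimal sketch of the array approach just described (the names bound, fibs, and fib are illustrative, and the bound of 90 is an arbitrary assumption):

import Data.Array (Array, listArray, (!))

-- the array approach requires fixed bounds upfront
bound :: Int
bound = 90

-- the array is initialized from the lazy list of fibs; the recursive
-- cases read previously computed values back out of the array itself
fibs :: Array Int Integer
fibs = listArray (0, bound) (map aux [0 .. bound])
  where
    aux 0 = 0
    aux 1 = 1
    aux n = fibs ! (n - 1) + fibs ! (n - 2)

fib :: Int -> Integer
fib n = fibs ! n

main :: IO ()
main = mapM_ (print . fib) (replicate 10 30)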
Under Infinite, you should use "sieve (n:ns)" pattern matching instead of calling head.

Thank you! I try to make use of Haskell pattern matching wherever I can. Since I needed to refer to the whole list, as well as the head and tail, I originally used "head" instead of pattern matching. Noted and reflected in the new version.
Are you aware you can use an "as pattern" (e.g., nl@(n:ns)) so you can refer to the whole list as nl when you like, but still have destructured as well?
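For example, a minimal sketch of the sieve with an as-pattern (the names primes and sieve are illustrative):

-- nl names the whole list, while n and ns destructure it; nl is unused
-- here, but it is in scope whenever the whole list is needed
primes :: [Int]
primes = sieve [2 ..]
  where
    sieve nl@(n:ns) = n : sieve [m | m <- ns, m `mod` n /= 0]

main :: IO ()
main = print (take 10 primes)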
Under Type-Safe: Subtle distinction, but returning () is not the same as returning nothing at all.

True. Noted and reflected in the new version. What's the Haskell name for () again? I fear explaining the exact type information of IO () may be too much for a brief introduction to Haskell to cover.
() is called unit.
You've got an unusual indentation scheme. Consider adopting a more standard one for your tutorial.

I'm in the camp of hard tabs rather than soft tabs, and that preference is probably responsible for much of the difference in indentation scheme. Unfortunately, HTML is terrible at representing hard tabs in <PRE> code with a custom width preference; they all come out looking like some idiot pressed space bar eight times. I could use JavaScript to remedy this, but for now, I like to directly copy and paste my working code from .HS files into <PRE> tags just in case.

If tabs are *not* the issue, then maybe I'm not indenting far enough to the right for some tastes? Or maybe it's my tendency to put "where" on its own line, something a Perl obfuscator would detest. I dunno. If someone would suggest a more idiomatic indentation scheme for my code so that I know exactly what is different, I can take a look.
Unless the lines get too long, you'd usually see something like:

do args <- getArgs
   fcontents <- readFile (head args)
   ...
   putStrLn results

By the way, your fib.hs (extended) is still in poor form, showing up to fib 30 hardwired. Generally you would just hardwire your first one or two values to "seed" your computation, like the first two Fibonacci numbers, or first prime, etc., and then any memoization is accomplished as described above, using some array or other structure to hold previously computed values.
Incidentally, when will Nat be available in Haskell?
Don't know, sorry.
The Fibonacci algorithm is designed to work only over natural numbers, and the best way to express this in the type system is with Nat. But this type doesn't seem available out-of-the-box for Haskell users.
It's not unusual for a function to be partially defined this way. You could equally well express the Fibonacci (or any) sequence as an infinite list. You can only use non-negative values for your list lookup, so if you view this as a deficiency, then the ubiquitous list shares the same deficiency.
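For instance, the ubiquitous infinite-list formulation of the Fibonacci sequence (a standard idiom, sketched here with illustrative names):

-- each element is the sum of the two before it; laziness does the rest
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- only non-negative indices are valid, just as with any list lookup
fib :: Int -> Integer
fib n = fibs !! n

main :: IO ()
main = print (fib 30)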
Noted and reflected... I'm trying to convey to an audience largely composed of Java and C++ fanatics how Haskell records are much better than OOP, how GADTs are more intuitive, robust, ... OOP doesn't even compare! That's what I'm trying to get across in that bit. And it's hard to do this in just a few sentences, especially when the reader isn't even familiar with GADTs in the first place.
I meant to also mention in my previous email, your section on Records doesn't actually deal with what the Haskell community considers records (e.g., algebraic datatype with labelled fields.) You're just using a simple algebraic datatype, so it'd be better not to call it a record, as that's bound to lead to confusion.
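To make the distinction concrete, a sketch with hypothetical types (not taken from the tutorial):

-- a plain algebraic datatype: positional fields, no labels
data Weapon = Laser Int
            | Nuke Int Int

-- a record in the usual Haskell sense: labelled fields, which also
-- generate accessor functions (power, yield, range)
data WeaponRecord = LaserR { power :: Int }
                  | NukeR  { yield :: Int, range :: Int }

main :: IO ()
main = print (power (LaserR 9000))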
Now, besides the code and the description of the code, I know we're not English majors, but what do you think of the overall tone, pacing, etc.? Is it entertaining? Is it humorous? Is it boring? Is it offensive? Is it too brief or too laborious?
Very subjective, but the "evil genius" thing doesn't work for me. I like the "launch missiles" analogy when discussing side-effects, but I don't care for it as the entire basis to wind a tutorial around. It is humorous, not boring. I find the pacing OK, but it's hard for me to put myself into the mindset of somebody encountering the material for the first time. Putting an appropriate quote under headings is a nice touch.

Alex

Well, to make it short, I liked it! As suggestions, little things like first-class functions and partial application can be easily introduced. For example, the line

map (+1) [1..10]

contains these concepts and is very short and expressive.

On the story side, why not introduce a character? This would help personalize the story. It would be nice if a real story were told alongside the Haskell story, like a character building something or making an (evil) journey while learning Haskell.

Cheers,
Corentin

On Fri, Sep 14, 2012 at 2:18 PM, Andrew Pennebaker < andrew.pennebaker@gmail.com> wrote:
A summary of the changes I've included so far:
Noted and reflected... I'm trying to convey to an audience largely composed of Java and C++ fanatics how Haskell records are much better than OOP, how GADTs are more intuitive, robust, ... OOP doesn't even compare! That's what I'm trying to get across in that bit. And it's hard to do this in just a few sentences, especially when the reader isn't even familiar with GADTs in the first place.
But Haskell records aren't better than OOP. I am not trying to be controversial here -- Haskell records would "naturally" implement prototype-based OOP, like JavaScript uses, if they (Haskell records) weren't so useless. This is basically why "lenses" were designed (i.e., to make a consistent syntax for getters and setters).
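To sketch the idea behind lenses (a hand-rolled minimal version with hypothetical names, not the API of any particular lens library):

-- a lens packages a getter and a setter for one field of s
data Lens s a = Lens
  { view :: s -> a
  , set  :: a -> s -> s
  }

data Villain = Villain { name :: String, volume :: Int }

-- one lens per field, built from the accessor and record-update syntax
volumeL :: Lens Villain Int
volumeL = Lens volume (\v s -> s { volume = v })

-- get and set compose through the same consistent interface
louder :: Villain -> Villain
louder s = set volumeL (view volumeL s + 1) s

main :: IO ()
main = print (volume (louder (Villain "Zorg" 10)))  -- prints 11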

But Haskell records aren't better than OOP.
I am not trying to be controversial here -- Haskell records would "naturally" implement prototype-based OOP, like JavaScript uses, if they (Haskell records) weren't so useless. This is basically why "lenses" were designed (i.e., to make a consistent syntax for getters and setters).
Hey, I don't want the tutorial to be controversial either, especially since my choice of words like "better" is highly subjective. I find it extraordinarily empowering that Haskell's type system allows programmers to use a wide variety of programming paradigms: functional, declarative, imperative, lazy, eager, parallel, quantum, matrix, CUDA, multithreaded, distributed, logical, object oriented...

When I describe Haskell's type system as better than OOP, what I mean is this: you can code OOP in Haskell, because the type system can adapt to that. But it's much harder to go the other way, trying to code GADTs in Java/C++. In order to get the effect, you'd have to code something as complex as Scala, at which point you might as well just use Haskell (unless you really, really, really need the JVM for compatibility). It's the same with Lisp or JavaScript or Smalltalk or Ruby: allowing the programmer to roll his own paradigm, such as OOP, is more powerful than offering only that paradigm.

More to the point, the monad system enables all of this, and I'm not sure how to reword the tutorial to reflect that; monads themselves are generally treated as an advanced lesson, and this one tries to hit the ground running. Does anyone know of a brief introductory Haskell tutorial that engages monads? LYAH covers monads, but it does so after a few chapters of simpler, purely functional Haskell coding. I know of some brief tutorials for monads that explain them in a variety of creative ways, but they all assume the reader is at least somewhat familiar with Haskell.

On Fri, Sep 14, 2012 at 8:23 PM, Andrew Pennebaker
[snip..] Does anyone know of a brief introductory Haskell tutorial that engages monads? LYAH covers monads, but it does so after a few chapters of simpler, purely functional Haskell coding. I know of some brief tutorials for monads that explain them in a variety of creative ways, but they all assume the reader is at least somewhat familiar with Haskell.
My opinion: you are walking down the potentially disastrous road of trying to introduce monads in as small a tutorial as yours. If the person reading the tutorial is *indeed* an evil genius, I'd at least start by showing an example of a monad that isn't a disingenuous presentation of their full capabilities (I'm taking a long cold stare at the people who write tutorials that say "hey, Maybe is a monad, and Maybe is simple, so monads are pretty simple too!"), like the Cont monad.

But, you say, "hell, Cont has quite a complex set of things going on!" And I say, yes, it's sufficiently complicated to force people to work out the underlying details. (As a side note, I've had an itching feeling to introduce monads to students via having them write an interpreter and presenting it by proxy of state-based semantics in something of the spirit of Imp.)

Monads are a complex subject, though not overly complex. I would say "they let you chain things together and hide behavior under the rug," but even this isn't helpful to someone who hasn't seen them before. So my argument is that if you introduce monads, you do so with a great example that demonstrates a nontrivial monad (State or Cont, *not* Maybe or IO), as sketched below. This is a case where forcing people to work out the math is imperative (hehe, get it, it's a pun..); programmers' natural tendency when they see IO examples is to say "oh hey, I'll just copy that pattern, and that's how you do IO," where in reality this doesn't showcase everything monads really do.

If you want to *link* to some good tutorials, I think that reading the "All about monads" tutorial is a great way to learn Haskell, period.

Everyone in the Haskell cafe probably has a secret dream to give the best "five minute monad talk." Challenge: get someone to have a competition at one of the conferences where students all give their best "five minute monad talk" and try to find the most comprehensible one!

Perhaps that was a bit of a rant, apologies if so,

kris
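For what it's worth, here is a minimal sketch of Cont in action, assuming the Control.Monad.Cont module from mtl (the pythagoras example is a common illustration, not anything from the tutorial):

import Control.Monad.Cont (Cont, runCont)

-- ordinary computations lifted into continuation-passing style
add :: Int -> Int -> Cont r Int
add x y = return (x + y)

square :: Int -> Cont r Int
square x = return (x * x)

-- the do-block chains the continuations invisibly
pythagoras :: Int -> Int -> Cont r Int
pythagoras x y = do
  x2 <- square x
  y2 <- square y
  add x2 y2

main :: IO ()
main = print (runCont (pythagoras 3 4) id)  -- prints 25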

Everyone in the Haskell cafe probably has a secret dream to give the best "five minute monad talk." Challenge: get someone to have a competition at one of the conferences where students all give their best "five minute monad talk" and try to find the most comprehensible one!
Haha, maybe that's why I'm writing.

Agree on all points, not just this quotation.

Yeah, IO and Maybe are the first monads most new Haskell programmers encounter. Perhaps a tour of RVars or the accelerate library would give a better impression. I bet a lot of students get the concept of pure functional programming, and if you shock them with: "So how would you implement a PRNG?", they would understand the role monads play.

Given that Maybe and Either don't modify state, nor do they communicate with outside interfaces, nor do they specify computation ordering, I don't understand why they're implemented as monads. Why not a primitive typeclass or even datatype declaration?
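On the PRNG question, a minimal sketch of threading generator state through the State monad (assuming the random and mtl packages; the name randomInt and the seed 42 are arbitrary):

import Control.Monad (replicateM)
import Control.Monad.State (State, evalState, get, put)
import System.Random (StdGen, mkStdGen, next)

-- each draw reads the current generator and stores the updated one,
-- so the caller never threads the state by hand
randomInt :: State StdGen Int
randomInt = do
  g <- get
  let (x, g') = next g
  put g'
  return x

main :: IO ()
main = print (evalState (replicateM 5 randomInt) (mkStdGen 42))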

On Fri, Sep 14, 2012 at 9:08 PM, Andrew Pennebaker < andrew.pennebaker@gmail.com> wrote:
Given that Maybe and Either don't modify state, nor do they communicate
with outside interfaces, nor do they specify computation ordering, I don't understand why they're implemented as monads. Why not a primitive typeclass or even datatype declaration?
They're not used in their monadic guise as often as they should be, IMO. Either String has for a while been an "error monad" (more commonly, EitherT String as ErrorT) but has annoying shortcomings --- but they're an obvious mechanism for reporting and tracking / short-circuiting failure in computations (carrying a failure reason in the case of Either), as in the sketch below.

--
brandon s allbery
allbery.b@gmail.com
wandering unix systems administrator (available)
(412) 475-9364 vm/sms
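A minimal hypothetical sketch of that mechanism (not from any library):

-- division that reports failure instead of crashing
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- the first Left short-circuits the rest of the do-block,
-- carrying the failure reason along
calc :: Either String Int
calc = do
  a <- safeDiv 10 2
  b <- safeDiv a 0
  return (a + b)

main :: IO ()
main = print calc  -- Left "division by zero"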

I find the Monad instance for Maybe and Either very useful. You can do things like the following (which technically only uses the Applicative instance):

Prelude Control.Applicative> (*3) <$> (+2) <$> Just 1
Just 9
Prelude Control.Applicative> (*3) <$> (+2) <$> Nothing
Nothing
Prelude Control.Applicative> (*3) <$> (+2) <$> Left "error" :: Either String Int
Left "error"
Prelude Control.Applicative> (*3) <$> (+2) <$> Right 1 :: Either String Int
Right 9

Also, Maybe and Either are not "implemented as monads". They are defined using `data` like you suggest:

data Maybe a = Nothing | Just a
data Either a b = Left a | Right b

You can even look up their definitions in ghci using `:i Either` or `:i Maybe`. Monad comes in because they're both instances of the Monad type class. Take a look at the list monad for another one that doesn't modify state or communicate with an outside interface (sketched below).

- Joel
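For instance, a minimal sketch of the list monad, which models nondeterminism with no state and no outside interface (hypothetical example):

-- every combination of n and c is produced; the list monad
-- handles the "try each alternative" plumbing
pairs :: [(Int, Char)]
pairs = do
  n <- [1, 2]
  c <- "ab"
  return (n, c)

main :: IO ()
main = print pairs  -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]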

Joel Burget, Fri 2012-09-14 @ 19:08:29-0700:
I find the Monad instance for Maybe and Either very useful. You can do things like the following (which technically only uses the Applicative instance):
Prelude Control.Applicative> (*3) <$> (+2) <$> Just 1
Just 9
Prelude Control.Applicative> (*3) <$> (+2) <$> Nothing
Nothing
Prelude Control.Applicative> (*3) <$> (+2) <$> Left "error" :: Either String Int
Left "error"
Prelude Control.Applicative> (*3) <$> (+2) <$> Right 1 :: Either String Int
Right 9
Nitpick: You are using the Functor instances of Maybe and Either here, not Applicative. (<$>) == fmap; it just happens to be defined in the Control.Applicative module.
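In other words, plain fmap gives the same results, no Applicative needed (illustrative):

Prelude> fmap (*3) (fmap (+2) (Just 1))
Just 9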

On Fri, Sep 14, 2012 at 10:08 PM, Joel Burget
[snip]
Also, Maybe and Either are not "implemented as monads". They are defined using `data` like you suggest:
data Maybe a = Nothing | Just a
data Either a b = Left a | Right b
That's not my point, or my objection. My objection is to people who present monads showing examples that begin with Maybe or Either, or these 'trivial monads', types onto which you can strap monadic behavior fairly simply (a sketch of that step follows below). I'm not saying they're bad as monads, or useless, but I think the step from Maybe as a datatype to using it as a monad is great enough that explaining monads by way of introducing them with Maybe as an example is sort of confusing, because it trivializes what's actually going on.

I'm honestly not sure what you mean by Maybe or Either being "implemented as monads," versus others. Monad is just a type class; there's always an underlying type. Perhaps you mean that people actually *care* about things in Maybe outside of it being used as a monad, versus other things where you don't touch the underlying type.

This isn't intended to start an argument, however, and I'd prefer not to argue over methodology. I just wanted to throw out there that if you say "monads, think about Maybe, and add some stuff, then that's all it is, what's all the fuss about!?", I think the hard part for people understanding monads isn't the definition of monads, but rather that you are forced to really tackle higher-order behavior in a very direct way. (For example, say you're new to Haskell, you probably don't know about CPS, and you read about Cont in a monad tutorial. Is it the monad that makes it hard? No, it's probably the concept of CPS.)

kris
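To make that datatype-to-monad step concrete, a minimal sketch with a hypothetical Maybe clone (the Functor and Applicative instances are required by modern GHC, though they weren't at the time of this thread):

data Perhaps a = Nope | Have a
  deriving Show

instance Functor Perhaps where
  fmap _ Nope     = Nope
  fmap f (Have x) = Have (f x)

instance Applicative Perhaps where
  pure = Have
  Nope   <*> _ = Nope
  Have f <*> x = fmap f x

-- the actual "monadic behavior": Nope short-circuits the chain
instance Monad Perhaps where
  return = pure
  Nope   >>= _ = Nope
  Have x >>= f = f x

main :: IO ()
main = print (Have 1 >>= \x -> Have (x + 1))  -- Have 2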

Kris, sorry for the confusion, I wasn't directly addressing your post. I was trying to correct what I perceived as a misconception in the last paragraph of Andrew's message, beginning with "Given that…".

- Joel

Challenge: get someone to have a competition at one of the conferences where students all give their best "five minute monad talk" and try to find the most comprehensible one!
Challenge accepted (http://www.yellosoft.us/5min/).

On Fri, Sep 14, 2012 at 10:13 PM, Andrew Pennebaker
Challenge: get someone to have a competition at one of the conferences where students all give their best "five minute monad talk" and try to find the most comprehensible one!
Challenge accepted.
Great! Maybe I'll try to write up a post on this challenge and put it on my blog, and try to solicit responses from others too, perhaps putting up a wiki page somewhere where people can post their attempts! (By the way, I consider this hard, I'm not touting myself as having a good solution for this, I'm genuinely interested in others' responses!)

kris

On Fri, Sep 14, 2012 at 7:13 PM, Andrew Pennebaker < andrew.pennebaker@gmail.com> wrote:
Challenge: get someone to have a competition at one of the conferences
where students all give their best "five minute monad talk" and try to find the most comprehensible one!
Challenge accepted (http://www.yellosoft.us/5min/).
What did any of that have to do with monads?

Hi, Kristopher Micinski wrote:
Everyone in the Haskell cafe probably has a secret dream to give the best "five minute monad talk."
(1) Most programming languages support side effects. There are different kinds of side effects, such as accessing mutable variables, reading files, running in parallel, raising exceptions, nondeterministically returning more than one answer, and many more. Most languages have some of these effects built into their semantics, and do not support the others at all.

(2) Haskell is pure, so it doesn't support any side effects. Instead, when Haskell programmers want to perform a side effect, they explicitly construct a description of the side-effecting computation as a value. For every group of related side effects, there is a Haskell type that describes computations that can have that group of side effects.

(3) Some of these types are built in, such as IO for accessing the world outside the processor and ST for accessing local mutable variables. Other such types are defined in Haskell libraries, such as for computations that can fail and for computations that can return multiple answers. Application programmers often define their own types for the side effects they need to describe, tailoring the language to their needs.

(4) All computation types have a common interface for operations that are independent of the exact side effects performed. Some functions work with arbitrary computations, just using this interface. For example, we can compose a computation with itself in order to run it twice. Such generic operations are highly reusable.

(5) The common interface for constructing computations is called "Monad". It is inspired by the mathematical theory that some computer scientists use when they describe what exactly the semantics of a programming language with side effects is. So most other languages support some monad natively without the programmer ever noticing, whereas Haskell programmers can choose (and even implement) exactly the monads they want. This makes Haskell a very good language for side-effecting computation.

Tillmann
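Point (4) in code, as a small sketch (the name twice is illustrative):

-- works in any monad whatsoever: IO, lists, Maybe, State, ...
twice :: Monad m => m a -> m (a, a)
twice c = do
  x <- c
  y <- c
  return (x, y)

main :: IO ()
main = do
  _ <- twice (putStrLn "effect!")   -- runs the IO action twice
  print (twice [1, 2])              -- [(1,1),(1,2),(2,1),(2,2)]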

Hi Tillmann. Wow. Lovely and spot on! And I almost never hear monad explanations without wincing. Thanks for sharing.

-- Conal

Agreed. Great. I still contend that it would be cool to get this to
be a real thing at something like the Haskell workshop, I think
hearing the different perspectives would be an interesting insight
into the many different ways to explain monads. But I suppose the way
to start would be to put up a webpage for collecting them..
kris


Mind if I join you in praising this?

On Sun, Sep 16, 2012 at 8:18 PM, Tillmann Rendel < rendel@informatik.uni-marburg.de> wrote:
Very nice! Except that perhaps I'd rewrite the last line as "...makes Haskell a good language for playing with alternative paradigms of side-effecting computation", though mine is not as pithy as yours! Rusi http://blog.languager.org

I'm afraid this kind of 5-minute talk makes sense only if you already know a lot about monads or are a computer scientist; not if you're a programmer who wants to learn a new language. For instance, this statement starts making sense only if you've seen a lot of examples of monads (maybe even read Moggi and Wadler) and want to understand the big picture:
"... when Haskell programmers want to perform a side effect, they explicitly construct a description of the side-effecting computation as a value."
And even then, what does it mean to "construct a description of the side-effecting computation as a value" for the Maybe monad? An IO action or a State monad action is indeed a value that describes a computation, but what computation does (Just 1) describe? It's the simple monads that are tricky to explain (I've seen a discussion to that effect in this forum, and I wholeheartedly agree).
--Bartosz
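To make the question concrete, here is a minimal sketch (not from the thread; safeDiv is a hypothetical helper) of the two readings side by side: an IO value that plainly describes a computation, and a Maybe chain in which (Just 1) is read as the trivial computation that succeeds with 1:

    -- An IO value describes a computation; binding it to a name runs nothing.
    greeting :: IO ()
    greeting = putStrLn "hello"

    -- A step that can fail.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- (Just 1) is the trivial Maybe "computation" that succeeds with 1;
    -- bind supplies the failure propagation between steps.
    pipeline :: Maybe Int
    pipeline = Just 1 >>= safeDiv 10 >>= safeDiv 100   -- Just 10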

On Sun, Oct 14, 2012 at 1:37 PM, Bartosz Milewski wrote:
I'm afraid this kind of 5-minute talk makes sense only if you already know a lot about monads or are a computer scientist; not if you're a programmer who wants to learn a new language. For instance, this statement starts making sense only if you've seen a lot of examples of monads (maybe even read Moggi and Wadler) and want to understand the big picture.
FWIW, I never intended it to be a discussion for people who want to learn Haskell; my target audience is people who know Haskell basics and have some computer science and math background (typical of ML converts, for example).
"... when Haskell programmers want to perform a side effect, they explicitly construct a description of the side effecting computation as a value."
I really disagree that reading Moggi's paper is a prerequisite to understanding that statement. I would have understood it (although without examples) long before reading Moggi's paper, and I still don't understand all of that paper!
And even then, what does it mean to "construct a description of the side-effecting computation as a value" for the Maybe monad? An IO action or a State monad action is indeed a value that describes a computation, but what computation does (Just 1) describe? It's the simple monads that are tricky to explain (I've seen a discussion to that effect in this forum and I wholeheartedly agree).
Agreed; I think the problem here is using Maybe as an example. Monads (like all things in category theory) are very general, and explaining them in their full generality is fundamentally at odds with how most people (certainly *I*) learn: from individual examples, generalizing afterward. While monads are typically used to structure computation in this way (building a description of a side effect as a value, which is elided in the bind), in reality they are simply monoids in the right category. The Maybe description just doesn't jibe as well with the previous explanation, which motivates the need for monads more. (The fact that you can do this with Maybe is certainly helpful, but people don't see writing chains of destructors as nearly as large a barrier to using Haskell as not being able to use IO, for example.) kris
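The "chains of destructors" Kris mentions are easy to see in code. A minimal sketch (hypothetical names, not code from the thread) of the same Maybe computation written both ways:

    -- Explicit destructor chain: each step pattern-matches the previous result.
    addLookups :: [(String, Int)] -> Maybe Int
    addLookups env =
      case lookup "x" env of
        Nothing -> Nothing
        Just x  -> case lookup "y" env of
          Nothing -> Nothing
          Just y  -> Just (x + y)

    -- The same computation with Maybe's Monad instance doing the plumbing.
    addLookups' :: [(String, Int)] -> Maybe Int
    addLookups' env = do
      x <- lookup "x" env
      y <- lookup "y" env
      return (x + y)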

On Fri, Sep 14, 2012 at 2:18 PM, Andrew Pennebaker wrote:
A summary of the changes I've included so far: [...]
Another comment:
"As a declarative language, Haskell manipulates expressions, eventually reducing expressions to values."
Huh? In what sense do declarative languages manipulate expressions? Sounds like a classic syntax/semantics confusion, especially when interpreters and/or lazy evaluation (implementation issues, not language properties) are in the mix.
Noted and reflected in the new version.
I'm trying to introduce the concept of declarative programming as opposed to imperative programming. Declarative programming, according to Wikipedia (http://en.wikipedia.org/wiki/Declarative_programming),
"is a programming paradigm that expresses the logic of a computation without describing its control flow."
I believe this is done in Haskell and other declarative languages by treating code as manipulable, reducible expressions, whereas imperative languages would use machine instructions.
I'm struggling to find anything in this belief/opinion that I can relate to. How did you come by it? What experiments can you perform to check whether it's true or false? I second Albert Lai's recommendation to use the scientific method. Along these lines, I still see "Haskell manipulates reducible expressions, eventually reducing them to values" in your tutorial, which again I suspect comes from confusion between syntax & semantics and/or between meaning and possible execution strategy. Regards, - Conal

Hi Andrew,
To save others the search, here's the/a reddit URL: http://www.reddit.com/r/programming/related/nhnyd/nfa_in_a_single_line_of_ha....
The terribly misleading/mistaken remarks on fib & memoization are still in your post. As hammar commented on reddit, you're not memoizing in that example, just shadowing one fib definition with another (very partial) one. (BTW, I highly recommend compiling with -Wall in all cases and then addressing all reported issues. Doing so would have issued a shadowing warning.)
Another comment:
"As a declarative language, Haskell manipulates expressions, eventually reducing expressions to values."
Huh? In what sense do declarative languages manipulate expressions? Sounds like a classic syntax/semantics confusion, especially when interpreters and/or lazy evaluation (implementation issues, not language properties) are in the mix.
Regards, - Conal
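On the memoization point above: for contrast with the shadowing mistake, one standard self-memoizing fib in Haskell looks like the following sketch. The sharing comes from the single lazily built top-level list, not from any automatic memoization of function calls:

    -- memoFibs is built lazily; each element is computed at most once
    -- and then shared by every later lookup.
    memoFibs :: [Integer]
    memoFibs = map fib' [0 ..]
      where
        fib' :: Int -> Integer
        fib' 0 = 0
        fib' 1 = 1
        fib' n = memoFibs !! (n - 1) + memoFibs !! (n - 2)

    fib :: Int -> Integer
    fib = (memoFibs !!)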
participants (16)
- Albert Y. C. Lai
- Alex Stangl
- Alexander Solla
- Andrew Pennebaker
- Bartosz Milewski
- Brandon Allbery
- Conal Elliott
- Corentin Dupont
- DekuDekuplex@Yahoo.com
- Joel Burget
- KC
- Kristopher Micinski
- MigMit
- Rustom Mody
- Taylor Hedberg
- Tillmann Rendel