Looking for feedback on my beginner's Haskell resource

If you follow /r/haskell you may have seen it posted there as well, but I've written a short guide called Wise Man's Haskell https://andre.tips/wmh/ which I hope people will find useful. However, I'm not extremely knowledgeable about Haskell and I wouldn't say I'm the best teacher, so if anyone is willing to skim it or provide feedback that would be much appreciated!

On 03.11.18 at 02:46, André Popovitch wrote:
However, I'm not extremely knowledgeable about Haskell and I wouldn't say I'm the best teacher, so if anyone is willing to skim it or provide feedback that would be much appreciated!
One kind of typo that's common enough to become annoying: comma/fullstop and the subsequent space were interchanged.

Well-written overall. I'm pretty sure different people will have different ideas about what's important about Haskell, but I think your take is valid. Besides, the knowledgeable people won't know what a newbie will find most interesting or enlightening about Haskell, so you'll have to get feedback from non-Haskellers to judge how successful that site is.

Some details aren't quite right (as is to be expected with anything that goes beyond a dozen pages). E.g. mutability doesn't just increase the number of variables you have to keep track of, it multiplies the amount of information you have to keep track of for each variable (namely the set of locations where it is changed).

Stating that Haskell does not have side effects will cause cognitive dissonance. Technically, Haskell does not have them, but there's that technique of putting state into a function that you return, hiding the state not in a transparent data object but in a pretty opaque function object. This is being systematically (ab?)used in many monads, and in practice it has exactly the same benefit as mutable global state (you don't have to thread it through every function call, it's globally available) and the same problems (you don't know where it might be changed). And then there's IO, which is mutability in everything but name. (I have never been able to find out what the concept behind IO is. My best guess is that it's a framework for setting up descriptions of IO interactions, which tend to be infinite, which isn't a problem since Haskell is lazy, but this may well be totally wrong. SPJ seemingly takes this for granted, and all the docs I could find just described the mechanics of using it, often with an implicit assumption that IO is a magical mutability enclave in Haskell, which I'm pretty sure is not actually the case.) I don't know enough to give good advice on how to be neither wrong enough to confuse newbies with cognitive dissonance nor correct enough to confuse them with the full truth.

You should mention that `rem` needs to be typed including the backquotes. With some fonts they might look similar enough to normal quotes to be overlooked. (That point in the presentation might be a good place for a side remark explaining how Haskell allows using operators as functions, and how it allows using functions as operators.)

A sidebar note might help to explain that Haskell's function call syntax is nearer to mathematical conventions than to programming-language ones: mathematicians write "sin x", not "sin(x)"; they use parentheses only when precedences get in the way, e.g. they'll write "(sin x) + 1" if needed, or maybe "sin (x + 1)", but the "(x + 1)" isn't function-call syntax, it's precedence-altering syntax. (As conventions go in mathematics, it's just a common one, not a universal one. Mathematicians are horribly sloppy about their conventions. In fact they are sloppy about anything except the topic they're currently interested in. Well, from their perspective programmers are obsessed with irrelevant detail because compilers force them into that habit, so both sides are right in a sense ;-)

Okay, enough for now.

Regards, Jo
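PS: a tiny illustration of those two side remarks (all of it standard Haskell; the binding names are arbitrary):

    -- rem is an ordinary function; backquotes (not normal quotes) make it infix
    r1 = rem 10 3          -- prefix call, evaluates to 1
    r2 = 10 `rem` 3        -- the same computation, written as an operator

    -- the reverse also works: parentheses turn an operator into a prefix function
    s1 = (+) 1 2           -- 3

    -- application is written as in mathematics: "sin x", not "sin(x)";
    -- parentheses are only there to override precedence
    y1 = sin 1.5 + 1
    y2 = sin (1.5 + 1)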

Conceptualization of IO is difficult. One way to think about it is that the result of (main :: IO a) is a program sent to an impure runtime to execute, with IO actions being compositions of instructions for the runtime… but this breaks down as soon as you discover unsafePerformIO.

The closest that you'll get to the reality for GHC is that it is pretty much a haven for impurity: it forces all impure functions to declare that in their types. (Not necessarily for mutability as such; ST gives you that without impurity.)
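As a small illustration of that last point (a sketch; the function name sumST is just for this example): the mutation inside is real, but runST guarantees none of it can leak, so the function as a whole stays pure.

    import Control.Monad.ST
    import Data.STRef

    -- sums a list using a genuinely mutable accumulator, yet sumST is a pure function
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      mapM_ (\x -> modifySTRef' ref (+ x)) xs
      readSTRef ref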
-- brandon s allbery kf8nh allbery.b@gmail.com

On 03.11.18 at 09:31, Brandon Allbery wrote:
Conceptualization of IO is difficult. One way to think about it is the result of (main :: IO a) is a program sent to an impure runtime to execute, with IO actions being compositions of instructions for the runtime… but this breaks down as soon as you discover unsafePerformIO.
I have been thinking that that's just a conceptual accident: pure functions are enough to get all the useful effects (and most of the downsides) of global variables and mutable state, but pure functions cannot do IO. So unsafePerformIO is the one unsafe thing that was kept; other unsafe operations were either dropped or never made it into Haskell (remember that Haskell was designed by people who had been doing pure nonstrict languages for a decade or more).
The closest that you'll get to the reality for GHC is that it is pretty much a haven for impurity: it forces all impure functions to declare that in their types.
If Haskell is truly pure, then IO must be pure as well. That's why I think that IO functions are just describing impure activity, not doing it. I have not been able to verify whether this is actually true. Maybe IO is really a wart on Haskell's purity. I'd hate it if it were, and I think the Haskell design group would have hated that as well. OTOH IO is one of three approaches, and it happened to be the one that became usable first, so it's not part of the initial design process. Then again I like to think that SPJ wouldn't even contemplate something impure - but I don't really know.
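One way to make that "describing, not doing" reading concrete (a small sketch of my own, nothing authoritative): an IO value you merely name never runs; only what the runtime reaches through main happens.

    greeting :: IO ()
    greeting = putStrLn "hello"

    main :: IO ()
    main = do
      let _unused = putStrLn "never printed"  -- a description we never hand to the runtime
      greeting                                -- executed, because it is part of main's description
      greeting                                -- and executed a second time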

Go look at accursedUnutterablePerformIO (aka inlinePerformIO) sometime. IO's just a barrier for impurity, and if you make the barrier leaky then you can expect weird behavior at best.
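A sketch of the classic way such a hole gets poked (the name counter is made up for the example; the NOINLINE pragma is needed precisely because ghc would otherwise treat the "pure" value as freely duplicable):

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    -- looks like an ordinary pure top-level value, but smuggles mutable state past IO
    counter :: IORef Int
    counter = unsafePerformIO (newIORef 0)
    {-# NOINLINE counter #-}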
-- brandon s allbery kf8nh allbery.b@gmail.com

On 3 Nov 2018, at 08:59, Joachim Durchholz wrote:
If Haskell is truly pure, then IO must be pure as well. That's why I think that IO functions are just describing impure activity, not doing it.
I think that is exactly the best way to think about it (thanks!). Right now I am teaching "Introduction to Functional Programming" here, and have just introduced IO last week, so this is all in my head right now.

A Haskell IO program is just a description of a sequence of IO actions (IO a), which *when evaluated* will produce side-effects. A function evaluation that produces side-effects when evaluated is a dangerous thing if used in an arbitrary fashion, but the IO abstraction(*) prevents danger by (i) having a fixed sequence of such actions, and (ii) never allowing a Haskell program to have a direct reference to the part of I/O state that gets modified.

Haskell I/O is referentially transparent in that if you can show that two expressions of type IO a have the same I/O side-effecting behaviour (using the monad laws plus some IO-action semantics), then one can replace the other in any Haskell context without altering the IO behaviour of that context. Caveat: provided you don't use "unsafeXXXX" anywhere...

(*) the IO abstraction happens to be an instance of a class called "Monad" that captures an interesting and useful pattern of sequential behaviour, but this is really a bit of a red herring when it comes to understanding how Haskell has both side-effecting IO and "purity"

PS - "purity" and "referential transparency" are slippy concepts, quite hard to pin down, so it is unwise to put too much value into those terms...
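PPS - a tiny example of that monad-law reasoning (the names p1/p2 are arbitrary): by the left-identity law, these denote the same IO action, so one can replace the other in any context without changing the program's IO behaviour.

    p1 :: IO ()
    p1 = return "hello" >>= putStrLn   -- return a >>= k
    
    p2 :: IO ()
    p2 = putStrLn "hello"              -- k a, by left identity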
--------------------------------------------------------------------
Andrew Butterfield   Tel: +353-1-896-2517   Fax: +353-1-677-2204
Lero@TCD, Head of Foundations & Methods Research Group
School of Computer Science and Statistics,
Room G.39, O'Reilly Institute, Trinity College, University of Dublin
http://www.scss.tcd.ie/Andrew.Butterfield/
--------------------------------------------------------------------

On 05.11.18 at 11:40, Andrew Butterfield wrote:
A Haskell IO program is just a description of a sequence of IO actions (IO a), which *when evaluated* will produce side-effects. A function evaluation that produces side-effects when evaluated is a dangerous thing if used in an arbitrary fashion, but the IO abstraction(*) prevents danger by (i) having a fixed sequence of such actions, and (ii) never allowing a Haskell program to have a direct reference to the part of I/O state that gets modified.
I'm not sure how this model explains the sequencing that happens in IO. Haskell's evaluation model for function calls is lazy, i.e. it doesn't impose an order (and it does not even trigger evaluation). AFAIK the one strict thing in Haskell is pattern matching, so I'd look at how pattern matching drives IO's sequencing - but I don't see it.
Caveat: provided you don't use "unsafeXXXX" anywhere...
Sure, that's just the loophole. Another red herring I think.
(*) the IO abstraction happens to be an instance of a class called "Monad" that captures an interesting and useful pattern of sequential behaviour, but this is really a bit of a red herring when it comes to understanding how Haskell has both side-effecting IO and "purity"
I like to say that "'monadic IO' is akin to saying 'associative arithmetic'." I.e. associativity is an important aspect of arithmetic just like monadicity is for IO, but it's not what it was made for. I'm not sure how well this analogy holds water, though.
PS - "purity" and "referential transparency" are slippy concepts, quite hard to pin down, so it is unwise to put too much value into those terms...
The definition I've been using is that an expression and its value are interchangeable without changing the semantics. I never ran into trouble with this - either because of my ignorance, or because that definition has exactly the right kind of vagueness, neither implying too much nor too little.

Just my 2c.

Regards, Jo

No state is modified, at least in ghc's implementation of IO. IO does carry "state" around, but never modifies it; it exists solely to establish a data dependency (passed to and returned from all IO actions; think s -> (a, s), but IO uses unboxed state) that thereby enforces sequencing. Once it reaches code generation, the compiler discovers that the runtime representation of the "state" is nonexistent (size 0) as well as unboxed, and eliminates it and all code related to it.
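A toy model of that, as a sketch (the names Toy and World are invented for illustration; GHC's real token is the unboxed State# RealWorld, which disappears during code generation):

    -- a state-threading "IO": every action takes the world token and hands it back
    newtype Toy a = Toy (World -> (a, World))

    data World = World   -- carries no data; it exists only to be depended on

    bindToy :: Toy a -> (a -> Toy b) -> Toy b
    bindToy (Toy m) k = Toy $ \w ->
      let (a, w') = m w    -- the second action needs w', so it cannot be run first
          Toy n   = k a
      in n w'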
-- brandon s allbery kf8nh allbery.b@gmail.com

On 05.11.18 at 23:27, Brandon Allbery wrote:
No state is modified, at least in ghc's implementation of IO.
That's what I'd expect.
IO does carry "state" around, but never modifies it; it exists solely to establish a data dependency (passed to and returned from all IO actions; think s -> (a, s), In Haskell, a data dependency can impose constraints on evaluation order, but it isn't always linear: which subexpression is evaluated first depends on what a pattern match requests (at least in Haskell: Haskell's strict operation is the pattern match).
The ordering constraint becomes linear if each function calls just a single other function. I'm not sure that that's what happens with IO; input operations must allow choices and loops, making me wonder how linearity is established. It also makes me wonder what an IO expression would look like if fully evaluated: is it an infinite data structure, made useful only through Haskell's laziness, or is it something that happens in the runtime?

The other thing that's confusing me is that I don't see anything that starts the IO processing. There's no pattern match that triggers an evaluation. Not that this would explain much: if IO were constructed in a way that a pattern match starts IO execution, there'd still be the question of what starts that first pattern match.

Then there's the open question of what happens if a program has two IO expressions. How does the runtime know which one to execute?

Forgive me for my basic questions; I have tried to understand Haskell, but I never got the opportunity to really use it, so I cannot easily test my hypotheses.

Regards, Jo

Conceptually, the runtime does (runIO# Main.main RealWorld#). Practically, ghc's implementation makes the sequencing stuff go away during code generation, so the runtime just sticks Main.main on the pattern stack and jumps into the STG to reduce it; there's your initial pattern match.

I guess I wasn't clear enough with respect to the state. Every IO action is passed the "current state" and produces a "new state" (except that in reality there is no state to pass or update, since it has no runtime representation). A loop would be a sort of fold, where each iteration gets the "current state" and produces (thisResult, "new state"); then the "new state" is passed into the next loop iteration, and the final result is the collection of thisResult-s together with the final "new state". Again, this is conceptual, since the state vanishes during code generation, having served its purpose in ensuring everything happens in order.

This is a bit hacky, since it relies on ghc never getting to see that nothing ever actually uses or updates the state, so it's forced to assume the state is updated and must be preserved. This is where bytestring's inlinePerformIO (better known as accursedUnutterable…) went wrong: it inlined the whole thing, so ghc could spot that the injected state (it being inlined unsafePerformIO) was fake and never used, and started lifting stuff out of loops, etc. — basically optimizing it as if it were pure code internally instead of IO, because it could see through IO's "purity mask".
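For reference, GHC's actual definition (in ghc-prim's GHC.Types) is essentially newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #)). The loop-as-a-fold idea can be sketched with an ordinary boxed state (loopN is a made-up name, not anything from GHC):

    -- each iteration gets the "current state" and produces (thisResult, "new state");
    -- the final result is the collection of results plus the final state
    loopN :: Int -> (s -> (r, s)) -> s -> ([r], s)
    loopN 0 _    s = ([], s)
    loopN n step s =
      let (r,  s')  = step s
          (rs, s'') = loopN (n - 1) step s'
      in (r : rs, s'')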
-- brandon s allbery kf8nh allbery.b@gmail.com

I prefer to think about IO as an abstract data type of atomic "actions", i.e. the IO primitives (which we can extend via the FFI). The "run time system" to me is a black box that "executes" these actions.

A Haskell program combines abstract IO primitives into a larger and more complex action using IO's bind and return, and it does so in a purely functional way. Evaluation order is completely irrelevant to this, because what matters is the result, not how we arrive at it. The bind operator instructs the run-time system to execute its left hand side, resulting in a value to be passed to the right hand side, which is then evaluated (in a purely functional way) to yield the next action, etc.

There is nothing mysterious about this IMO. If you have a working model for each of the IO primitives, this gives you a working model of what a complete Haskell program does.

Cheers
Ben
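PS: a sketch of that mental model (the names Action, bindA, greet and run are all made up for the example). The program is an ordinary data structure of primitive actions, built purely; only a separate interpreter - the stand-in for the run-time system - performs any effects.

    -- primitive actions, combined without executing anything
    data Action a
      = Done a
      | PutLine String (Action a)
      | GetLine (String -> Action a)

    -- "bind": graft a continuation onto every leaf of the description
    bindA :: Action a -> (a -> Action b) -> Action b
    bindA (Done a)      k = k a
    bindA (PutLine s m) k = PutLine s (bindA m k)
    bindA (GetLine f)   k = GetLine (\s -> bindA (f s) k)

    -- a complete program is just a value of this type
    greet :: Action ()
    greet = PutLine "Your name?"
              (GetLine (\n -> PutLine ("Hello, " ++ n) (Done ())))

    -- the black-box "runtime": the only place real effects happen
    run :: Action a -> IO a
    run (Done a)      = return a
    run (PutLine s m) = putStrLn s >> run m
    run (GetLine f)   = getLine >>= run . f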

I don't really enjoy being "that person," but I read the title as meaning "Haskell for wise men" (as opposed to "wise people"). I don't know if you want to workshop names ("Haskell for the Wise"?), but as you've asked for feedback, that's a glaring thing I'd note.

Cheers,
Tom
participants (6)
- amindfv@gmail.com
- Andrew Butterfield
- André Popovitch
- Ben Franksen
- Brandon Allbery
- Joachim Durchholz