
It’s been about 15 years on/off since I first looked at Monads. This weekend I finally sat down and really learned what they are, how they work. I found what looks like the seminal paper on them by Phil Wadler: https://page.mi.fu-berlin.de/scravy/realworldhaskell/materialien/the-essence...

I’m a pretty heavy Common Lisp guy, going on 30 years with it. I also did tons of SML and OCaml programming. But I only dipped my toe into Haskell a few times.

What I was looking for was a more in-depth understanding of Monads and how they work. I remember reading that Wadler paper many years ago, and I was intrigued by the conciseness of changing the interpreter to do different instrumentation. I was hoping to find a magic bullet like that for my Lisp code. And I noticed that Lisp almost never makes any mention of Monads. Surely there is a benefit that could be had…

Anyone else have Lisp experience using Monads? Did it offer some major enhancements for you?

- DM

On Sat, 15 Apr 2017, David McClain wrote:
Anyone else have Lisp experience using Monads? Did it offer some major enhancements for you?
Hi David,

My lisp experience comes mostly from Scheme, but the GNU Guix build tool/package manager has a monad abstraction: https://www.gnu.org/software/guix/manual/html_node/The-Store-Monad.html. They've even borrowed the >>= notation for bind.

Best,
Jack

Using monads without static typing sounds hard. When I do anything monadic,
I'm constantly using the :t directive to check type signatures, to make
sure I'm plugging the right thing into the right thing.
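(For readers following along, the kind of check being described looks like this in GHCi; the short session below is only an illustration.)

ghci> :t (>>=)
(>>=) :: Monad m => m a -> (a -> m b) -> m b
ghci> :t mapM_ print
mapM_ print :: (Foldable t, Show a) => t a -> IO ()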
-- Jeff Brown | Jeffrey Benjamin Brown Website https://msu.edu/~brown202/ | Facebook https://www.facebook.com/mejeff.younotjeff | LinkedIn https://www.linkedin.com/in/jeffreybenjaminbrown (spammy, so I often miss messages here) | Github https://github.com/jeffreybenjaminbrown

I don’t think it matters much about typing in this case. I am free to name the Monads anything I like, probably a name indicative of their purpose. I do see that if you insist on living by the conventional >> and >>= naming, you would need type checking to help select the correct thing. What’s more important is what a Monad does and enables. Everything else is window dressing.

- DM

"DM" == David McClain
writes:
DM> Anyone else have Lisp experience using Monads? Did it offer some major
DM> enhancements for you?

In Lisp, I don't think the abstraction buys you much, because all the functionality you need (state, exceptions, runtime, etc.) is always available to all expressions. They become an excellent way of representing "extensible composition" in a pure language that otherwise could not allow those things, but I don't see that you need it for an untyped language like Lisp.

-- John Wiegley
GPG fingerprint = 4710 CF98 AF9B 327B B80F 60E1 46C4 BD1A 7AC1 4BA2
http://newartisans.com
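(As a concrete illustration of that point, a minimal Haskell sketch with invented function names: the Maybe monad gives pure code the early-exit behaviour that a Lisp program gets from its runtime for free.)

import Text.Read (readMaybe)

-- Illustrative sketch (names invented): each step may fail, and the (>>=)
-- hidden behind the do-notation short-circuits the rest of the chain as soon
-- as one step returns Nothing.
parseAndDivide :: String -> String -> Maybe Int
parseAndDivide xs ys = do
  x <- readMaybe xs
  y <- readMaybe ys
  if y == 0 then Nothing else Just (x `div` y)

main :: IO ()
main = do
  print (parseAndDivide "10" "2")    -- Just 5
  print (parseAndDivide "10" "zero") -- Nothing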

On 2017-04-16 03:13, John Wiegley wrote:
DM> Anyone else have Lisp experience using Monads? Did it offer some major enhancements for you?
In Lisp, I don't think the abstraction buys you much, because all the functionality you need (state, exceptions, runtime, etc.) is always available to all expressions. They become an excellent way of representing "extensible composition" in a pure language that otherwise could not allow those things, but I don't see that you need it for an untyped language like Lisp.
May I present to you the lecture "Monads and Gonads" by Douglas Crockford (https://www.youtube.com/watch?v=b0EF0VTs9Dc). He shows that monads can make even a language like JavaScript useful (not his words). And the talk is one of the better monad tutorials out there to boot, not least because it's different from all the others.

My own conclusion from it is that I would maybe even turn your argument around and claim that especially a language that has as few tools to keep programs sensible as JS can benefit greatly from the structure a monad provides. The benefits will be different for Lisp, but I imagine there might be some nice use cases as well. After all, structures are a great tool even if you're not forced to use them. ;)

On another note: the more I work with monads and their brethren, the more I find myself thinking in terms of (a -> f b) functions instead of things like bind. Not only is it closer to the mathematical basis, but there's also the close relationship to lenses. I mention this because my feeling is that this type of function is a more usable puzzle piece in languages with limited syntax support. Especially if you also implement functors. But that's just unsubstantiated gut feeling.

Cheers,
MarLinn
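(A small Haskell sketch of that (a -> f b) style, with invented example functions: (>=>) composes such "Kleisli" functions the way (.) composes plain ones.)

import Control.Monad ((>=>))

-- Two partial steps, written directly as (a -> Maybe b) functions ...
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- ... composed with (>=>), never mentioning bind explicitly.
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt

main :: IO ()
main = mapM_ (print . recipThenSqrt) [4, 0, -4]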

On 15.04.2017 at 18:56, David McClain wrote:
It’s been about 15 years on/off since I first looked at Monads. This weekend I finally sat down and really learned what they are, how they work. I found what looks like the seminal paper on them by Phil Wadler:
https://page.mi.fu-berlin.de/scravy/realworldhaskell/materialien/the-essence...
What I think he's doing is:

- Define a standard language core.
- Instead of hardcoding function application, defer that to a function passed in as an externally-supplied parameter.
- Various cleanup definitions so that the construction properly interoperates with primitive values and primitive functions.
- Leave the type parameter for the externally-supplied function out of the signatures, for Haskell's type inference to determine.

So all the signatures look much simpler than they really are; one could call this either "excellent abstraction" or "muddling the water so nobody sees what's going on", depending on how well one can read this kind of idiom. I didn't see this as "particularly wow"; I have seen similar things being done in Java EE (all those "request/response interceptor chains").

What would "wow" me would be a language where this kind of technique were automatically present even if the author of the expression evaluator didn't prepare for it. I.e. if the language provided a way to take such an expression evaluator as in the paper, and gave people a way to add a monad in a post-hoc fashion (because expression evaluator designers typically forget to add this kind of feature, unless they have written a dozen of these things already).

This kind of thing is also relatively easy to do in Java or any other language with parametric polymorphism, though adding the function parameter all over the place would be a pain in the @ss. Yeah, I know that Phil highlights this as a plus, "just add these three lines and everything is automatically parametrized" - having worked with sub-average programmers and having done a lot of legacy maintenance, I'd say it's a strong minus, because a single definition at the innermost level will substantially change the signature of the whole thing; suddenly you have a monad parameter in all the types. (I'm not 100% sure whether this is such a big problem, because it seems that monads are about the only thing you need to worry about in practice - everybody is talking about monad transformers and such, I don't see arrows and all the other higher-order-typing constructions given much attention in practice.)

(Full disclosure: Java programmer with interest in better ways to construct software, long-time Haskell lurker with several attempts at wrapping my mind around various concepts, and I've found roughly one-third of them firmly in the "nice but grossly overhyped" category... even after 20 years I'm still not decided about whether monads are in that category or not (pun unintended, honest!).)
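(For readers who haven't read the paper, a much-reduced sketch of the move being described: the evaluator is written once against an abstract monad, and the choice of monad decides what extra behaviour it has. The toy term language below is invented for illustration; Wadler's example is a small lambda calculus with several different monads plugged in.)

-- Toy term language, invented for illustration.
data Term = Lit Int
          | Add Term Term
          | Div Term Term

-- The evaluator mentions no concrete monad; "fail" is its only effect.
eval :: MonadFail m => Term -> m Int
eval (Lit n)   = pure n
eval (Add a b) = (+) <$> eval a <*> eval b
eval (Div a b) = do
  x <- eval a
  y <- eval b
  if y == 0 then fail "division by zero" else pure (x `div` y)

main :: IO ()
main = do
  print (eval (Div (Lit 10) (Lit 2)) :: Maybe Int)  -- Just 5
  print (eval (Div (Lit 10) (Lit 0)) :: Maybe Int)  -- Nothing: failure became Nothing
  print =<< eval (Add (Lit 1) (Lit 2))              -- 3, this time run in IO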

Heh!! My title “Wow” was not an expression of wonderment. Rather it was “Holy Cow! Monads, how can you guys be so obtuse?!” - DM

On 16.04.2017 at 23:07, David McClain wrote:
Heh!! My title “Wow” was not an expression of wonderment. Rather it was “Holy Cow! Monads, how can you guys be so obtuse?!”
LOL okay :-D

I think they suffer from being awesome even if not well-understood, so half-true statements about them get widely distributed. What I still don't know is if their face value is overhyped or not.

The paper essentially displays how to do the visitor pattern in a functional way. Nothing to see here, move on... There are a few interesting aside points though:

One, that this kind of injection can be done in a type-safe way (not a mean feat given the complexities of type parameters, but it essentially follows from the choice of type system and, interestingly, the absence of side effects). This point is unrelated to monads; it's entirely conceivable that some other structure could have been used instead.

Two, that the set of changes needed to turn a straightforward expression tree into an injectable one is so small. It's a mixed blessing, because the actual complexity of the definition is much larger than what you see at a glance. However, it's unrelated to monads as well; it's a consequence of having type inference and currying.

Three, that the injected structure does not need to do more than merely conform to Monad, which is a pretty small core API (by "core" I mean the set of functions that cannot be expressed as compositions of other API functions). Now this is indeed related to monads, but to me it's unclear how important that finding is. I see monads being found all over the place, but is that a consequence of the Monad structure being so simple and permissive that it happens to match so many places in so many programs? Is it because there are useful monad libraries that people actively make their code conform to the structure? Is it perception bias (because monads are well-discussed, so people can identify a Monad in their code but not an Arrow)? Or is Monad indeed tied to fundamental properties of coding? Frankly, I don't even know how to test the hypotheses here.

What I do know is that we had a quite overhyped, simple but ubiquitous data structure earlier: lists. In the 80s, there was that meme that lists are the universal answer to data structures, mostly from the Lisp camp where lists were indeed fundamental, ubiquitous, and so simple they could be applied easily to all tasks. I haven't seen anybody seriously entertaining that thought in years; it will be interesting to see whether monads share that fate or prove to be something more fundamental.

Just my 2 cents :-)
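(For reference, the "small core" in question really is just two operations; everything else, such as (>>), join, (>=>), mapM, can be derived from them. A renamed sketch, so it doesn't clash with the Prelude class:)

-- A renamed copy of the core, to avoid clashing with the Prelude's Monad.
-- (In today's GHC the real class sits on top of Functor and Applicative,
-- but the part an instance must supply itself is only this.)
class Applicative m => Monad' m where
  unit :: a -> m a                  -- return / pure
  bind :: m a -> (a -> m b) -> m b  -- (>>=)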

What I have been able to grok by translating into Lisp is that Monadic programming allows you to hide implicit accumulators, filters, and control flow (the Maybe Monad). Haskell absolutely needs Monads in order to force evaluation order in side-effecting places. Lisp doesn’t absolutely need Monads. I won’t really know if programming in Monadic style actually makes better-looking code or not - cleaner, easier to grok, performant, organized? - until I try to apply it in anger on a larger project. But I am happy now that I finally understand this often obfuscated beast. The monster is actually a tiny mouse.

I’m old enough now to have lived through at least two major programming fad cycles: Structured Programming followed by Object Oriented Programming. There are always nuggets to be found in any fad cycle, but unfortunately fad cycles always seem to swing to extremes. Psychologists would recognize that as borderline behavioral traits.

I totally ignored the Programming Patterns cycle, so whenever I hear references to such patterns I first have to Google them. I can appreciate that practitioners would want a taxonomy, but for myself, I see programming patterns in an entirely different light. I have been designing and programming computers for longer than most readers on this site have been alive. But programming has always been only a tool for me, a sharp pencil, a side channel, in the pursuit of physics. Lisp has somehow managed to retain relevance to me throughout this entire period.

I don’t choose languages for myself based on popularity. I want to get things done. And I want to do them quickly and move on to the next project. So, being in Lisp, I am not overly concerned with type safety. I don’t have to live in a community of programmers all contributing to the project code base. I am in complete control of the projects in their entirety. But if this were not the case, then certainly a strong case could be made in favor of type strictness. I did tons of SML and OCaml code about 10-15 years ago. I used to design languages and write commercial compilers for C and numerical analysis languages. But for my own uses, I find Lisp, on balance, to be my ultimate modeling clay.

Monads were always a hole in my knowledge. I read Phil Wadler’s paper more than a decade ago, and I remember being impressed at the terseness of his little interpreter. But Monads kept being offered with all the extraneous Category Theory stuff, and all the black-box mysticism. Now that I have seen Crockford's video and translated his JavaScript into my Lisp, I thoroughly get what they are all about in languages like JavaScript and now Lisp. They might be useful to me.

- DM
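(A minimal Haskell sketch of the "hidden accumulator" idea mentioned above, hand-rolled here with invented names rather than taken from a library, so the plumbing stays visible: bind threads the log, and the code that uses it never mentions it.)

-- Hand-rolled sketch: a value plus an accumulated log.
newtype Logged a = Logged (a, [String])

instance Functor Logged where
  fmap f (Logged (a, w)) = Logged (f a, w)

instance Applicative Logged where
  pure a = Logged (a, [])
  Logged (f, w1) <*> Logged (a, w2) = Logged (f a, w1 ++ w2)

instance Monad Logged where
  -- bind concatenates the logs behind the scenes
  Logged (a, w) >>= f = let Logged (b, w') = f a in Logged (b, w ++ w')

say :: String -> Logged ()
say msg = Logged ((), [msg])

double :: Int -> Logged Int
double x = do
  say ("doubling " ++ show x)
  pure (2 * x)

main :: IO ()
main =
  let Logged (result, logLines) = double 3 >>= double
  in mapM_ putStrLn (("result: " ++ show result) : logLines)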

On 17.04.2017 at 16:56, David McClain wrote:
I totally ignored the Programming Patterns cycle, so whenever I hear references to such patterns I first have to Google them. I can appreciate that practitioners would want a taxonomy, but for myself, I see programming patterns in an entirely different light. I have been designing and programming computers for longer than most readers on this site have been alive. But programming has always been only a tool for me, a sharp pencil, a side channel, in the pursuit of physics.
Programming is complicated enough to require its own terminology, so it's more than a mere sharp pencil. Design Patterns were a fad, but a pretty atypical one, because the seminal work (the famous "Gang-of-Four Book") made it pretty clear what a Design Pattern was good for and what it wouldn't do. So a lot of the hyperbole was cut short. :-)

On 2017-04-17 16:56, David McClain wrote:
Monads were always a hole in my knowledge. I read Phil Wadler’s paper more than a decade ago, and I remember being impressed at the terseness of his little interpreter. But Monads kept being offered with all the extraneous Category Theory stuff, and all the black-box mysticism. Now that I have seen Crockford's video, translated his JavaScript into my Lisp, I thoroughly get what they are all about in languages like Javascript and now Lisp.
Beware that Crockford's understanding of monads has itself been questioned (based on this talk). It's been ages since I saw his video and I don't feel inclined to watch it again, but I fancy myself as someone who _does_ understand monads[1] and can remember that his presentation seemed *incredibly* fuzzy and imprecise. This might give one the impression that they do understand, when they really don't. I'm not saying that's the case here, but it's something that anyone watching his video should be aware of. The only way to be sure is to try to implement some monads and monad transformers for yourself.

(I'd be absolutely *terrified*, personally, of doing it in a unityped language because there are *so* many ways to get it subtly wrong and not know it until you hit exactly the "right" edge case. Heck, I even find it somewhat scary in Scala because it's rather easy to accidentally do something impure -- though if you're abstract about your types, you can usually avoid such accidents.)

Btw, from my perspective the thing that makes monads work for arbitrary side effects is really *data dependencies* + the fact that IO is a sort of "fake" State World monad where you pretend that you always fully evaluate the World argument to the "next step". For anything non-IO it's really just a way to do arbitrary *control flow* based on runtime values -- whereas e.g. Applicative doesn't let you do that[2]. A more direct approach would be Algebraic Effects.

Regards,

[1] At least at an "intermediate" level. If you just go by the type signatures and desugaring it doesn't seem all that complicated to me, but whatever. I've always been inclined towards algebra/symbol manipulation, so maybe it's just me.

[2] You can kind of simulate it, but you basically end up evaluating everything and "wasting computation" by redundant evaluation. You can think of it as always having to evaluate both branches of all ifs and then choosing the result afterwards. Obviously, that doesn't work if you have *actual* side effects.
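(A toy rendering of that "State World" picture, for concreteness. GHC's real IO is built on compiler primitives, so this is only the mental model being described, not the actual implementation; the type and function names below are invented.)

-- Toy model: an action is a function from the world to a result and the next world.
newtype World = World Int
newtype MyIO a = MyIO (World -> (a, World))

-- "unit"/return: produce a value without touching the world.
unitIO :: a -> MyIO a
unitIO a = MyIO (\w -> (a, w))

-- "bind": run the first action, then choose the next action from its result.
-- The chain on the World value forces the evaluation order, and the fact that
-- (f a) is computed from the runtime value a is the value-dependent control
-- flow that Applicative alone cannot express.
bindIO :: MyIO a -> (a -> MyIO b) -> MyIO b
bindIO (MyIO ma) f =
  MyIO (\w -> let (a, w1) = ma w; MyIO mb = f a in mb w1)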

who _does_ understand monads[1] and can remember that his presentation seemed *incredibly* fuzzy and imprecise.
Interesting comment… I was just reading up on the history of Pure Lisp, and the earliest attempts at reconciling mathematics with the early Lisp interpreters. In there they pointed out the initial disparity between denotational semantics and operational semantics.

What I gained from Crockford’s presentation was a (my own, in Lisp) clean implementation of operators called Unit and Bind, and I find those to be quite operationally simple, clever, and potentially useful. However, I have not attempted the verification with the 3 Monadic Lemmas on my own. Two of them should be quite obvious by inspection. The third, relating to functional composition, awaits scrutiny. But beyond that, I find the general framework potentially useful. As I mentioned, they should be capable of accumulation (lists, binding trees, environments, etc.), filtering out elements during accumulation, and control flow (the Maybe Monad). All of these are clear in my mind, and require only minor modifications to the specific implementations of the generic Bind operator.

Now, I am hugely experienced in not only Lisp, but going all the way back down to pre-C languages and Assembly. So I have a pretty solid understanding of operational semantics. I am not personally frightened, for my own projects, of typing issues. I would be more scared if I had to exist in a group of programmers, all of whom contribute to the project code base. My own projects only rarely suffer from type issues. A bigger category of mistakes comes from simple keyboard typos and incorrect choice of algorithms. Apart from choice of algorithms, I find typically 1 bug per 50-100 LOC (operational lines). Compared to C, that is a factor of 10s less bug-prone. Add to that the increased semantic value of 1 LOC of Lisp compared to C, and you get an efficiency increase of 1,000s. But even in C, the typing is not the main issue, at least it hasn’t been for me.

When I initially embarked on SML and OCaml, back in the late 1990’s, I was enthralled by the power of strong typing with inference. I was able to solve a nonlinear optimization over 150+ DOF using OCaml, whereas we had been stuck and bombing out after only 5 DOF using the strictly imperative Fortran-like language we had been using. I was so impressed that word got out, and Phil Wadler invited me to write up my findings for his ACM Letters, which I did. And BTW… the breakthrough had absolutely nothing to do with typing and type inference. The success arose because of clean versus unclean control-flow. So you could say that OCaml adhered to “Structured Programming” in a better manner, which corresponds entirely to a fad cycle 2 layers back in time.

But then after several years of pushing forward with OCaml, in writing math analysis compilers and image recognition systems, I began to find that, despite proper typing and clean compiles, system-level issues would arise and cause my programs to fail. The holy grail of provably correct code came tumbling down in the face of practical reality. That’s when I began migrating back over to my old standby Lisp system.

I live inside of my Lisp all day long, for days on end. It is a whole ecosystem. There is no crisp boundary of edit / compile / debug. It is all incremental and extensional. I think that kind of environment, regardless of language, is the holy grail of computing. I even had that kind of experience back in the late 1970’s when we were writing large telescope control systems in Forth.

- DM
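(For reference, the three "Monadic Lemmas" mentioned above, written in Haskell terms and spot-checked on Maybe; a sanity check on a few values, not a proof.)

-- left identity:   return a >>= f   ==  f a
-- right identity:  m >>= return     ==  m
-- associativity:   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)

leftIdentity :: Int -> (Int -> Maybe Int) -> Bool
leftIdentity a f = (return a >>= f) == f a

rightIdentity :: Maybe Int -> Bool
rightIdentity m = (m >>= return) == m

associativity :: Maybe Int -> (Int -> Maybe Int) -> (Int -> Maybe Int) -> Bool
associativity m f g = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))

main :: IO ()
main = print
  ( leftIdentity 3 (Just . (+ 1))
  , rightIdentity (Just 42)
  , associativity (Just 2) (Just . (* 2)) (\x -> if x > 0 then Just x else Nothing)
  )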

On 17.04.2017 at 21:26, David McClain wrote:
And BTW… the breakthrough had absolutely nothing to do with typing and type inference. The success arose because of clean versus unclean control-flow. So you could say that OCaml adhered to “Structured Programming” in a better manner, which corresponds entirely to a fad cycle 2 layers back in time.
That's not the first time I've heard such statements. One other instance is Erlang users writing that it's not the functional aspect that makes it effective (Erlang isn't particularly strong in that area anyway), it's garbage collection (i.e. no invalid pointers to dereference).
But then after several years of pushing forward with OCaml, in writing math analysis compilers and image recognition systems, I began to find that, despite proper typing and clean compiles, system level issues would arise and cause my programs to fail. The holy grail of provably correct code came tumbling down in the face of practical reality.
What happened with these OCaml programs? I do see quite a few deficits in OCaml's practices, but I don't know how relevant they are or whether they are even related to your experience, so that would be interesting to me.

Hi Joachim,

Not sure what you are asking here? Do you mean, are they still extant? Or do you want to know the failure mechanisms?

The biggest program of the lot still remains, although I have not used it in nearly 10 years. It was a compiler for a math analysis language called NML (Numerical Modeling Language, Not ML, …). It borrowed the syntax of OCaml, was coded in OCaml, but was a vectorized math language with dynamic binding. For me, at the time, the biggest gain was treating all arrays as toroidal, so that I could easily play with FFT’s in multiple dimensions, and reach things like the tail of the array with negative indices. That, and the automatic vectorizing of overloaded math operators.

That particular program began to fail as a result of the continual evolution of OCaml. It became a chore to constantly adapt to ever-changing syntax patterns, on-again / off-again preprocessing of AST trees, etc. But it did serve me well for about 5 years. About 20 KLOC of OCaml and 3 KLOC of supporting C glue code for access to BLAS, FFT’s, plotting graphics, etc.

That NML effort was launched shortly after my initial success in the nonlinear optimization that I mentioned. This was also a huge success for me, because I had tried repeatedly over the previous 5 years to construct an NML in C, then C++, and I always crapped out when the line count reached around 100 KLOC. Being able to produce a fully working system in 20 KLOC was a huge win.

The other programs compiled properly, and yet failed in execution, but my recall is fuzzy. One of them had the job of trying to watch automobile traffic through several cameras, day and night, and try to give consistent identity to the images seen through separate cameras, their motion patterns, and so forth. The cameras did not have overlapping fields of view. Not an easy problem, especially on rainy nights with street reflections from headlights. I don’t recall the exact failures, but things like data structure Maps started to fail with Address Faults. That should not ever happen, according to theory, but it really does.

On and off, I did have quite a bit of success with OCaml. But getting clients to accept OCaml in the code base, ca. 2000-2005, was a major problem. They all had their in-house experts telling them that C++ and Java were the way to go… (I managed to avoid Java in my career.)

Another major project was an Embedded Scheme compiler aimed at DSP’s, ca. 2008. That worked quite well for me, no failures to speak of. But I did learn during that effort that pattern matching has its limits. You often want to match subtrees and stubs, and just writing them out becomes an exercise in write-only code. A fallback effort done later in Lisp succeeded even better, but that’s probably because of the lower impedance mismatch between Lisp / Scheme, compared to OCaml / Scheme.

I never learned nor made use of the OOP in OCaml. Never seemed necessary. But then they started implementing syntax sugar to represent optional args, etc., and quite frankly the effort looks really cheesy and difficult to read. You might as well just use Common Lisp and be done with it.

Then around 2008 I got invited to speak at the European Common Lisp Meeting in Hamburg, and they made their preference for Lisp strongly known. Despite whatever I had in OCaml, that would be of no interest to them. So I began reentry into Lisp in a serious way, and I have been there since. Lisp for me had been on/off since about 1985. There were side diversions into Forth, C, Objective-C-like compilers, Smalltalk, and lots of DSP coding in bare-metal DSP Assembly. Transputers and Occam for a while. Massively parallel computing on a 4096-processor SIMD DAP machine, 46-node MIMD machines based on DSP / Transputers for airborne LIDAR underwater mine detection systems. Lots of history…

- DM

Well, as I stated, I think failures come in several categories:

1. Machine Fault failures on successfully compiled code. That’s the big nasty one. Provably correct code can’t fail, right? But it sometimes does. And I think you can trace that to the need to interface with the real world. The underlying OS isn’t written in OCaml. Neither are many of the C-libs on which OCaml depends. In an ideal world, where everything in sight has gone through type checking, you might do better. But that will never be the case, even when the current FPL fad reaches its zenith. About that time, new architectures will appear underneath you, and new fad cycles will be in their infancy and chomping at the bit to become the next wave… Reality is a constantly moving target. We always gear up to fight the last war, not the unseen unknowns headed our way.

2. Experimental languages are constantly moving tools with ever-changing syntax and semantics. It becomes a huge chore to keep your code base up to date, and sooner or later you will stop trying. I have been through that cycle so many times before. Not just in OCaml. There is also RSI/IDL, C++ compilers ever since 1985, even C compilers. The one constant, believe it or not, has been my Common Lisp. I’m still running code today, as part of my system environment, that I wrote back in 1990 and have never touched since then. It just continues to work. I don’t have one other example in another language where I can state that.

3. Even if the underlying language were fixed, the OS never changing, all libraries fully debugged and cast in concrete, the language that you use will likely have soft edges somewhere. For pattern-based languages with full type decorations (e.g., row-type fields), attempting to match composite patterns over several tree layers becomes an exercise in write-only coding. The lack of a good macro facility in current FPLs is hindering. Yes, you can do some of it functionally, but that implies a performance hit. Sometimes the FPL compilers will allow you to see the initial AST parse trees and you might be able to implement a macro facility / syntax bending at that point. But then some wise guy back at language HQ decides that the AST tool is not really needed by anyone, and then you get stung for having depended on it. The manual effort to recode what had been machine generated becomes too much to bear.

4. I will fault any language system for programming that doesn’t give you an ecosystem to live inside of, to allow for incremental extensions, test, recoding, etc. Edit / compile / debug cycles are awful. FPL allows you to generally minimize the debug cycle, by having you live longer at the edit / compile stage. But see some of the more recent work of Alan Kay and his now defunct project. They had entire GUI systems programmed in a meta-language that compiles on the fly using JIT upon JIT. They make the claim that compilers were tools from another era, which they are, and that we should not be dealing with such things today. Heck, even the 1976 Forth system, crummy as it was, offered a live-in ecosystem for programming.

So, that’s my short take on problems to be found in nearly all languages. There is no single perfect language, only languages best suited to some piece of your problem space. For me, Lisp offers a full toolbox and allows me to decide its shape in the moment. It doesn’t treat me like an idiot, and it doesn’t hold me to rigid world views. Is it perfect? Not by a long shot. But I haven’t found anything better yet…

- DM
On Apr 18, 2017, at 08:18, Joachim Durchholz wrote:
What failed for you when you were using OCaml?

… step back about 40 years, and realize that people then understood the frailty of real machines. When I first interviewed with IBM back in 1982, before going with them on another project, one project group explained to me that they were working on error correction codes for large disk drive cache memories. Back in those days, the ceramic IC packages had trace amounts of radioactive isotopes, and every once in a while an alpha particle would smash into a DRAM cell, splattering the charge away and dropping or setting bits. Now on a small memory system (a few KB in those days) the probability was low. But IBM was working on an 8 MB backing cache memory to speed up the disk I/O, and the likelihood of an errant bit worked out to about 1 per hour. That would be unacceptable. So IBM, like nearly all other memory manufacturers at the time, built memory systems with ECC.

Fast forward to today, and we have cheap Chinese production machines. No memory ECC at all. And memory density is even higher than before. Oh yes, they ultimately found improved manufacturing processes so that the alpha particles aren’t anywhere near the problem today that they were in 1982. But higher bit density today, larger memories, solar flares, cosmic rays, and plenty of code bloat that depends on perfectly held memory… Who knows why an FPL module gives a machine fault? But they sometimes really do.

- DM

On 18.04.2017 at 17:39, David McClain wrote:
Well, as I stated, I think failures come in several categories:
1. Machine Fault failures on successfully compiled code. That’s the big nasty one. Provably correct code can’t fail, right? But it sometimes does. And I think you can trace that to the need to interface with the real world. The underlying OS isn’t written in OCaml. Neither are many of the C-libs on which OCaml depends.
I think that any language will depend on C libs, and so all languages are afflicted by that. Of course, some languages have their own software stack and need fewer external libs, so this depends.
2. Experimental languages are constantly moving tools with ever changing syntax and semantics. It becomes a huge chore to keep your code base up to date, and sooner or later you will stop trying. I have been through that cycle so many times before. Not just in OCaml. There is also RSI/IDL, C++ compilers ever since 1985, even C compilers.
The one constant, believe it or not, has been my Common Lisp. I’m still running code today, as part of my system environment, that I wrote back in 1990 and have never touched since then. It just continues to work. I don’t have one other example in another language where I can state that.
Java :-) Breakage does happen, but it's rare, though the incident rate can go up if you don't know what you're doing (some people routinely use APIs wrongly, read Internet recipes rather than the docs that come with the Java libs, and are caught flat-footed when internals change - I guess you don't have that problem because you know your system well enough to avoid this kind of pitfall).
3. Even if the underlying language were fixed, the OS never changing, all libraries fully debugged and cast in concrete, the language that you use will likely have soft edges somewhere. For Pattern-based languages with full type decorations (e.g., row-type fields), attempting to match composite patterns over several tree layers becomes an exercise in write-only coding.
The lack of a good macro facility in current FPL is hindering. Yes, you can do some of it functionally, but that implies a performance hit. Sometimes the FPL compilers will allow you to see the initial AST parse trees and you might be able to implement a macro facility / syntax bending at that point. But then some wise guy back at language HQ decides that the AST tool is not really needed by anyone, and then you get stung for having depended on it. The manual effort to recode what had been machine generated becomes too much to bear.
I think Lisp-style macros are too powerful. You can get excellent results as long as everybody involved knows all the relevant macros and their semantics perfectly well, and for one-man teams like yourself this can work very well. If you need to work with average programmers, this fails. In such an environment, you need to be able to read other people's code, and if they can define macros, you don't really know anymore what's happening.
4. I will fault any language system for programming that doesn’t give you an ecosystem to live inside of, to allow for incremental extensions, test, recoding, etc. Edit / compile / debug cycles are awful. FPL allows you to generally minimize the debug cycle, by having you live longer at the edit / compile stage.
But see some of the more recent work of Alan Kay and his now defunct project. They had entire GUI systems programmed in a meta-language that compiles on the fly using JIT upon JIT. They make the claim that compilers were tools from another era, which they are, and that we should not be dealing with such things today.
Well... the Java compiler doesn't really "exist" in a modern IDE; stuff is being compiled on the fly in the background, even with live reload inside a running program (with some restrictions, so this isn't perfect - but then Java is anything but perfect). So I do not think that compilers per se are a problem, though the way they often need to be used can be.
For me, Lisp offers a full toolbox and allows me to decide its shape in the moment. It doesn’t treat me like an idiot, and it doesn’t hold me to rigid world views.
Problem with that is that empowering the programmer makes it harder to be 100% sure what a given piece of code does. There might be macros involved, there might be metaprogramming involved, or some kind of multiple dispatch with nonobvious defaulting rules, or a gazillion of other things that you have to be aware of. It's not a problem if you know your system from the inside out, but it doesn't scale well to tasks that need to be done by a team.

Java :-) Breakage does happen, but it's rare, though the incident rate can go up if you don't know what you're doing (some people routinely use APIs wrongly, read Internet recipes rather than the docs that come with the Java libs, and are caught flat-footed when internals change - I guess you don't have that problem because you know your system well enough to avoid this kind of pitfall).
I have never programmed in Java. I have watched others do so, and I interfaced some C++ code as a library to a Java app once. But I don’t live much in the typical commercial software world. I am a physicist (astrophysicist) and the closest I came to commercial software on / off were early versions of extended C compilers that I wrote, image analysis systems for MacDonald’s (!!?), and mostly the international war machine. My area has mostly been signal and image processing.

I have never used a database in anger either, even though I have authored OODBMS systems. I wouldn’t really know how to wrap an app around a database, or even why you would do such a thing. That probably sounds really odd to anyone younger than about 40-50 years old.

I have never written a web page. I don’t understand the fascination with HTML or XML or SGML or whatever you call it. Elaborated S-expressions that don’t seem to follow their own syntax rules.
I think Lisp-style macros are too powerful. You can get excellent results as long as everybody involved knows all the relevant macros and their semantics perfectly well, and for one-man teams like yourself this can work very well. If you need to work with average programmers, this fails. In such an environment, you need to be able to read other peoples' code, and if they can define macros, you don't really know anymore what's happening.
Too powerful for whom? Again, I’m lucky to be master of my own domain. I don’t have to share code with anyone. For me, the biggest errors come from incorrect algorithm choice, not coding, not typing, not library API incompatibilities. Coding I can do with my eyes closed, except for all the typos… but then even with eyes open those still happen.

My very first year in computing, 1969, was the most frustrating year in my entire life. Everything I did was called wrong by the computer. Almost drove me to the point of tears. But then one day I woke up and I was one with the machine. Been that way now for 47 years.
Problem with that is that empowering the programmer makes it harder to be 100% sure what a given piece of code does. There might be macros involved, there might be metaprogramming involved, or some kind of multiple dispatch with nonobvious defaulting rules, or a gazillion of other things that you have to be aware of. It's not a problem if you know your system from the inside out, but it doesn't scale well to tasks that need to be done by a team.
Isn’t that always the case, and always will be the case? Nobody can be certain how some library or application was written. You might have some pretty strong hints and hunches, but how can you be sure? Seems like the best bet is to treat everything as untrusted black boxes. Erlang probably offers the best security model against dependencies.
If you look at any major element of code in production, it was probably written in a hurry under near unreasonable demands because hardware engineering overran their budgets and took some of yours, and the programmer may or may not have well understood his tools and language, or even if he did, he might have been daydreaming of being on the beach with his girlfriend instead of in this meat locker with the rest of you guys… My experience has been that hardware companies produce the most gawd-awful code you can imagine. Software shops vary… take M$ for instance - the bane of my existence. Looks like code gets pushed out the door and they rely on the audience to test it. Apple looks to me like a room full of really bright young minds, with no adult supervision.
I guess if I could offer any advice to other younger programmers, it would be to push yourself to become a polyglot to expand your mental horizons of what is possible, and a historian to understand what went before you and why, so that you don’t keep repeating the same stupid mistakes we did. I could not be a production programmer. I’m definitely too much of a cowboy and a lone wolf. Teams are anathema to me. I need deep periods of isolation to think. People who manage to perform in production coding environments must have very different personalities from me.
- DM

"I guess if I could offer any advice to other younger programmers, it would be to push yourself to become a polyglot to expand your mental horizons of what is possible, and a historian to understand what went before you and why, so that you don’t keep repeating the same stupid mistakes we did."
By no means a young programmer here, but am currently learning Haskell as my first functional language for fun. Occasionally I'll lament the fact that functional programming was not part of my CS curriculum back in the mid 90's. What you say is true; at the end of the day, being self-taught has been rewarding. Can't really call myself a polyglot though, as I tend to deep-dive into interests.
"I could not be a production programmer. I’m definitely too much of a cowboy and a lone wolf. Teams are anathema to me. I need deep periods of isolation to think. People who manage to perform in production coding environments must have very different personalities from me."
I used to, mainly C/C++. There is no "I" in team... Our personalities must be similar : )
Regards,
Andrea

On 19/04/2017, at 3:13 PM, Atrudyjane via Haskell-Cafe wrote:
I used to, mainly C/C++. There is no "I" in team... Our personalities must be similar : )
It's hidden, but there is definitely "a me" in "team". And then it's time for "T". (Obscure Lisp reference.) Someone wrote:
I think Lisp-style macros are too powerful.
It's true that higher-order functions can do a lot of the things people used macros to do, and better. However, having said "Lisp-style macros", Lisp is a tree with many branches. How about Scheme-style macros?
Problem with that is that empowering the programmer makes it harder to be 100% sure what a given piece of code does.
This is also true in Haskell. Show me a piece of code golf with Arrows and LemurCoSprockets and such all over the place and I haven't a clue. Total bewilderment. Heck, show me *undocumented* code in a language without macros, classes, or higher-order functions (Fortran 95? COBOL 85?) and I'll be just as baffled, if it is big enough. (I've been staring at some old numeric code recently. One page is quite big enough...)
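To make the "code golf" complaint concrete, here is a tiny, hedged illustration (mine, not from the thread) of point-free Arrow style next to its plain spelling; neither is hard on its own, but scale the first style up across a module and the bewilderment is easy to imagine:

  import Control.Arrow

  -- terse, point-free: pair the sum with the length, then divide
  mean1 :: [Double] -> Double
  mean1 = (sum &&& (fromIntegral . length)) >>> uncurry (/)

  -- the same function, spelled out
  mean2 :: [Double] -> Double
  mean2 xs = sum xs / fromIntegral (length xs)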

http://www.offcenterdesigns.net/wp-content/uploads/wp-checkout/images/i-foun...
I think the problem with macros in general is that their semantics is defined purely in terms of code they generate. Not in terms of what that code actually does. As for me, I want to think about the things I need to achieve, not about the code to write.

On Apr 18, 2017, at 22:05, MigMit wrote:
http://www.offcenterdesigns.net/wp-content/uploads/wp-checkout/images/i-foun...
I think the problem with macros in general is that their semantics is defined purely in terms of code they generate. Not in terms of what that code actually does. As for me, I want to think about the things I need to achieve, not about the code to write.
Those first two sentences were a joke, eh? Macros exist to shape the language to suit the problem domain and prevent massive typos and fingers cramping up. I played around with Sanitary Macrology, or whatever they are called, in Dylan back in the late 90’s. Frankly, I find them maddening. I much prefer the Common Lisp style of macrology. But read up on macros in Quiennec’s book, “Lisp in Small Pieces”. They exist in a kind of no-man’s land in language stages. There really isn’t any one best kind of macrology. Back in OCaml land they used to have a preprocessor that could give you access to the AST’s from a first pass of compiling, and I made liberal use of a custom tree rewriter in order to give my vectorized / overloaded math language NML a better shape than plain OCaml syntax. That is yet another kind of macrology. And potentially just as powerful, if less convenient to use on the fly. - DM

Of course not. If my language needs much tweaking to fit the problem domain, it usually means I chose the wrong language in the first place. If macros are needed to prevent TYPOS, it means the language is doomed.

On 19.04.2017 at 08:52, David McClain wrote:
I played around with Sanitary Macrology, or whatever they are called,
Maybe you remember "hygienic macros". Basically, avoid the C preprocessor crap. I don't know what the exact rules were, and I suspect every language community had its own definition of "hygienic" anyway.

On 19/04/2017, at 5:05 PM, MigMit wrote:
http://www.offcenterdesigns.net/wp-content/uploads/wp-checkout/images/i-foun...
I think the problem with macros in general is that their semantics is defined purely in terms of code they generate. Not in terms of what that code actually does. As for me, I want to think about the things I need to achieve, not about the code to write.
I find this a rather baffling comment. Let's take SRFI-8, for example, which defines the syntax (receive <variables> <expression> <body>) which the SRFI defines *semantically* (paraphrased):
- the <expression> is evaluated
- the values are bound to the <variables>
- the expressions in the <body> are evaluated sequentially
- the values of the last <body> expression are the values of the (receive ...) expression.
The syntax is *implemented* thus:
  (define-syntax receive
    (syntax-rules ()
      ((receive formals expression body ...)
       (call-with-values (lambda () expression)
                         (lambda formals body ...)))))
but the semantics is *defined* by the specification. The whole point of a macro such as this is for the user *NOT* to "think about the code to write". The code that gets generated is of no interest to the ordinary programmer whatsoever. (Reflecting on the subject of this thread, 'receive' is actually pretty close to >>= .)
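Since the thread is about monads, a minimal Haskell sketch of why 'receive' is close to >>= (mine, not from the original post): binding the values produced by an expression to names and then running a body is exactly the shape of bind.

  -- (receive (x y) <expression> <body>)  roughly corresponds to
  --   <expression> >>= \(x, y) -> <body>
  example :: IO ()
  example = do
    (x, y) <- (,) <$> getLine <*> getLine   -- plays the role of <expression>
    putStrLn (x ++ " " ++ y)                -- plays the role of <body>

  -- Desugared, the bind is explicit, just as call-with-values is in the macro:
  example' :: IO ()
  example' = ((,) <$> getLine <*> getLine) >>= \(x, y) -> putStrLn (x ++ " " ++ y)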

On 19.04.2017 at 06:24, Richard A. O'Keefe wrote:
Someone wrote:
That was me :-)
I think Lisp-style macros are too powerful.
It's true that higher-order functions can do a lot of the things people used macros to do, and better.
However, having said "Lisp-style macros", Lisp is a tree with many branches. How about Scheme-style macros?
I can't tell the difference anymore; I was going through Common Lisp docs at one time and through the Scheme docs at another time and don't remember anymore which did what. I also have to say that I didn't pick up all the details - I was even more averse to macros then than I am today, so I didn't pay too much attention.
The general finding, however, was that both were very, very heavy on enabling all kinds of cool and nifty features, but also very, very weak on making it easy to understand existing code. Essentially, each ecosystem with its set of macros, multiple-dispatch conventions, hook systems and whatnot, to the point that it is its own language that you have to learn. That's essentially why I lost interest: None of what I was learning would enable me to actually work in any project, I'd have to add more time to even understand the libraries, let alone contribute.
The other realization was that these extension mechanisms could make code non-interoperable, and there was no easy way to find out whether there would be a problem or not. This would be a non-problem for lone-wolf or always-the-same-team we-reinvent-wheels-routinely scenarios, but it's a closed world that an outsider cannot get into without committing 100%.
Java does a lot of things badly, but it got this one right. I can easily integrate whatever library I want, and it "just works", there are no linker errors, incompatible memory layouts, or whatever else makes reusing external C++ libraries so hard. Nowadays, the first step in any new project is to see what libraries we need. Integration is usually just a configuration line in the build tool (you don't have build tools in Lisp so that's a downside, but the build tools are generally trivial when it comes to library integration; things get, er, "interesting" if you task the build tool with all the other steps in the pipeline up to and including deployment, but there the problem is that the environments are so diverse that no simple solution is possible).
Problem with that is that empowering the programmer makes it harder to be 100% sure what a given piece of code does.
This is also true in Haskell. Show me a piece of code golf with Arrows and LemurCoSprockets and such all over the place and I haven't a clue. Total bewilderment.
Heh, I can subscribe to that. Even finding out what Monad does took me far too long.

On 19/04/2017, at 7:23 PM, Joachim Durchholz wrote:
However, having said "Lisp-style macros", Lisp is a tree with many branches. How about Scheme-style macros?
I can't tell the difference anymore,
MAJOR differences. Lisp macros: run *imperative* *scope-insensitive* macros to do source to source transformation at compilation or interpretation time. Scheme macros: run *declarative* *scope-sensitive* macros to do source to source transformation at compilation or interpretation time.
The general finding, however, was that both were very, very heavy on enabling all kinds of cool and nifty features, but also very, very weak on making it easy to understand existing code.
With respect to Common Lisp, I'm half-way inclined to give you an argument. Macros, used *carefully*, can give massive improvements in readability. Let's face it, one bit of Lisp (APL, SML, OCaml, Haskell) code looks pretty much like another. The way I've seen macros used -- and the way I've used them myself -- makes code MORE understandable, not less. All it takes is documentation. In Scheme, however, I flatly deny that macros in any way make it harder to understand existing code than functions do.
Essentially, each ecosystem with its set of macros, multiple-dispatch conventions, hook systems and whatnot, to the point that it is its own language that you have to learn.
And how is this different from each ecosystem having its own set of operators (with the same names but different precedence and semantics) or its own functions (with the same name but different arities and semantics)? This is why Common Lisp has packages and R6RS and later Scheme have modules. Yes, different subsystems can have different vocabularies of macros, just as different modules in Haskell can have different vocabularies of types, operators, and functions. They don't interfere, thanks to the package or module system.
That's essentially why I lost interest: None of what I was learning would enable me to actually work in any project, I'd have to add more time to even understand the libraries, let alone contribute.
That *really* doesn't sound one tiny bit different from trying to work on someone else's Java or Haskell code. My own experience with Lisp was writing tens of thousands of lines to fit into hundreds of thousands, and my experience was very different from yours. Macros were *defined* sparingly, *documented* thoroughly, and *used* freely. Result: clarity.
The other realization was that these extension mechanisms could make code non-interoperable,
I fail to see how define-syntax encapsulated in modules could possibly make Scheme code non-interoperable.
and there was no easy way to find out whether there would be a problem or not.
Scheme answer: there isn't. OK, in R5RS there _was_ a problem that two files could both try to define the same macro, but that's not different in kind from the problem that two files could try to define the same function. In R6RS and R7RS, the module system catches that.
Java does a lot of things badly, but it got this one right. I can easily integrate whatever library I want, and it "just works", there are no linker errors, incompatible memory layouts, or whatever else makes reusing external C++ libraries so hard.
Fell off chair laughing hysterically. Or was that screaming with remembered pain? I am sick to death of Java code *NOT* "just working". I am particularly unthrilled about Java code that *used* to work ceasing to work. I am also sick of working code that gets thousands of deprecation warnings. I am particularly tired of having to grovel through thousands of pages of bad documentation, to the point where it's often less effort to write my own code. I am *ALSO* sick of trying to match up Java libraries that only build with Ant and Java libraries that only build with Maven. (Yes, I know about just putting .jar files in the right place. I've also heard of the Easter Bunny.)
Nowadays, the first step in any new project is to see what libraries we need. Integration is usually just a configuration line in the build tool (you don't have build tools in Lisp so that's a downside,
Wrong. There are build tools for Common Lisp and have been for a long time. I didn't actually know that myself until I saw a student -- who was using Common Lisp without any of us having ever mentioned it to him -- using one. Look, it's really simple. If programmers *WANT* to write readable code and are *ALLOWED TIME* to write readable code, they will. Whatever the language. If they have other priorities or constraints, they won't. Whatever the language.

On 19.04.2017 at 11:42, Richard A. O'Keefe wrote:
The general finding, however, was that both were very, very heavy on enabling all kinds of cool and nifty features, but also very, very weak on making it easy to understand existing code.
With respect to Common Lisp, I'm half-way inclined to give you an argument.
:-) As I said, I don't really know any details anymore.
Macros, used *carefully*, can give massive improvements in readability.
Oh, definitely! Actually this applies even to C-style macros, though they're pretty limited in what they can do before you start to hit the more obscure preprocessor features.
Let's face it, one bit of Lisp (APL, SML, OCaml, Haskell) code looks pretty much like another. The way I've seen macros used -- and the way I've used them myself -- makes code MORE understandable, not less.
All it takes is documentation.
Good documentation. That's where the Smalltalk system I experimented with (Squeak, I think) was a bit weak: it documented everything, but it was pretty thin on preconditions, so you had to experiment, and since parameters were passed through layers and layers of code it was really hard to determine what part of the system was supposed to do what.
In Scheme, however, I flatly deny that macros in any way make it harder to understand existing code than functions do.
Fair enough.
Essentially, each ecosystem with its set of macros, multiple-dispatch conventions, hook systems and whatnot, to the point that it is its own language that you have to learn.
And how is this different from each ecosystem having its own set of operators (with the same names but different precedence and semantics) or its own functions (with the same name but different arities and semantics)?
Different functions with different arities are actually that: different. Just consider the arity a part of the name. Different semantics (in the sense of divergence) - now that would be a major API design problem. It's the kind of stuff you see in PHP, but not very much elsewhere. (Operators are just functions with a funny syntax.) However there's a real difference: If the same name is dispatched at runtime, you're in trouble unless you can tell that the interesting parts of the semantics are always the same. Languages with a notion of subtype, or actually any kind of semantic hierarchy between functions allow you to reason about the minimum guaranteed semantics and check that the caller does it right. Any language with subtyping or a way to associate an abstract data type to an interface can do this; I didn't see anything like that in Smalltalk (where subclasses tend to sort-of be subtypes but no guarantees), or in any Lisp variant that I ever investigated, so this kind of thing is hard. Now there's still a huge difference between just type guarantees (C++, Java, OCaml), design by contract (Eiffel), and provable design by contract (I know of no language that does this, though you can approximate that with a sufficiently strong type system).
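A hedged Haskell-flavoured sketch of the "minimum guaranteed semantics" idea (mine, not from the message; the class and names are made up for illustration): a type class names the operations, its documented laws state what every instance must guarantee, and callers program against that contract rather than against any particular implementation.

  -- Hypothetical class, for illustration only.
  class Container f where
    empty  :: f a
    insert :: a -> f a -> f a
    toList :: f a -> [a]
    -- Law (stated in documentation, checked by convention or tests, not by the compiler):
    --   x `elem` toList (insert x c)

  instance Container [] where
    empty  = []
    insert = (:)
    toList = id

  -- A caller may rely on the law for any instance, without knowing which one:
  contains :: (Eq a, Container f) => a -> f a -> Bool
  contains x = elem x . toList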
Yes, different subsystems can have different vocabularies of macros, just as different modules in Haskell can have different vocabularies of types, operators, and functions. They don't interfere, thanks to the package or module system.
Ah. I've been thinking that macros were globally applied.
That's essentially why I lost interest: None of what I was learning would enable me to actually work in any project, I'd have to add more time to even understand the libraries, let alone contribute.
That *really* doesn't sound one tiny bit different from trying to work on someone else's Java or Haskell code.
It *is* different: Understanding the libraries isn't hard in Java. That's partly because the language is pretty "stupid", though the addition of generics, annotations, and higher-order functions has started changing that. (Unfortunately these things are too important and useful to just leave them out.)
My own experience with Lisp was writing tens of thousands of lines to fit into hundreds of thousands, and my experience was very different from yours. Macros were *defined* sparingly, *documented* thoroughly, and *used* freely. Result: clarity.
Yes, I've been assuming that that's what was happening. I still reserve some scepticism about the "documented thoroughly" bit, because you're so far beyond any learning curve that I suspect that your chances of spotting any deficits in macro documentation are pretty slim. (I may be wrong, but I see no way to really validate the thoroughness of macro documentation.)
The other realization was that these extension mechanisms could make code non-interoperable,
I fail to see how define-syntax encapsulated in modules could possibly make Scheme code non-interoperable.
Yeah, I was assuming that macros are global. It's very old Lisp experience from the days when Common Lisp and Scheme were new fads, when Lisp machines were still a thing, and had to be rebooted on a daily basis to keep them running. What can make code non-interoperable even today is those multiple-dispatch mechanisms (which may not exist in Scheme but only in Common Lisp). Multiple dispatch cannot be made modular and consistent ("non-surprising"), and the MD mechanism that I studied went the other route: If the semantics is a problem, throw more mechanisms at it until people can make it work as intended, to the point that you could dynamically hook into the dispatch process itself. It made my toenails curl.
Java does a lot of things badly, but it got this one right. I can easily integrate whatever library I want, and it "just works", there are no linker errors, incompatible memory layouts, or whatever else makes reusing external C++ libraries so hard.
Fell off chair laughing hysterically. Or was that screaming with remembered pain? I am sick to death of Java code *NOT* "just working". I am particularly unthrilled about Java code that *used* to work ceasing to work. I am also sick of working code that gets thousands of deprecation warnings.
Sorry, but that's all hallmarks of bad Java code.
I am particularly tired of having to grovel through thousands of pages of bad documentation, to the point where it's often less effort to write my own code.
Yeah, that used to be a thing. It isn't usually a problem anymore (even Hibernate may have grown up, I hear that the 4.x codebase is far better than the really crummy 3.6 one that I have come to disrespect).
I am *ALSO* sick of trying to match up Java libraries that only build with Ant and Java libraries that only build with Maven. (Yes, I know about just putting .jar files in the right place. I've also heard of the Easter Bunny.)
Using Ant means you're doing it in a pretty complicated, fragile, outdated way. Putting .jar files in the right place is the most fragile way ever, and leads straight into stone-age nightmares; don't ever follow that kind of advice unless you *want* to fail in mysterious ways. Maven would be the way to go, but only if your project is so large that you have a team of build engineers anyway, i.e. with an overall workforce of 30+ persons. Smaller teams should stick with Gradle, which uses the same dependency management as Maven but isn't into the kind of bondage & discipline that Maven is. Sadly, there are still shops that don't use Maven or Gradle. For legacy projects I can understand that, but many do it because they don't know better, i.e. there's no competent build engineer on the team. Those teams are doomed to repeat old mistakes, just like people who still think that Lisp macros are global are doomed to misjudge them :-D
Nowadays, the first step in any new project is to see what libraries we need. Integration is usually just a configuration line in the build tool (you don't have build tools in Lisp so that's a downside,
Wrong. There are build tools for Common Lisp and have been for a long time. I didn't actually know that myself until I saw a student -- who was using Common Lisp without any of us having ever mentioned it to him -- using one.
Ah ok, I didn't know that.
Look, it's really simple. If programmers *WANT* to write readable code and are *ALLOWED TIME* to write readable code, they will. Whatever the language.
Unless they want to show off how smart they are, and think that writing code that only they can understand is testament to that. This kind of thinking is frowned upon nowadays, but it used to be pretty widespread not too long ago.
If they have other priorities or constraints, they won't. Whatever the language.
Definitely. Even if programmers would and could do better, external constraints can prevent them from doing so.

On Apr 19, 2017, at 05:42, Joachim Durchholz wrote:
Yeah, I was assuming that macros are global. It's very old Lisp experience from the days when Common Lisp and Scheme were new fads, when Lisp machines were still a thing, and had to be rebooted on a daily basis to keep them running.
Was Lisp ever a fad? I’m shocked to hear that. Seriously! I only got into Lisp after several years of mild nagging by one of my former employees, who studied Lisp and Proof Systems for his graduate CS Degree from U.Oregon, sometime back in the early 80’s, late 70’s. Turns out everything he said was true, and more. - DM

Around that time, I only first heard about commercial Lisp systems from Franz (VAX), HP (their own minicomputers), and Symbolics. It was around 1988 when I first saw a Lisp Machine, at a defense contractor (SAIC), after they showed me how defense spending under Pres. Reagan was in the exponential knee of increase. But the fad seemed to be hugely focused on VAX/VMS, not Lisp. Over in the commercial realm, the fad seemed to be with TRS-80, Atari, Mac (?), definitely IBM/PC’s, Pascal, then Object Pascal. Even C wasn’t getting much traction, and C++ was just a gleam in the envious C community’s eyes - largely a preprocessor of some sort. Smalltalk was a fascinating curiosity, and only tinkerers were playing with it. - DM

Ah Yes… I do now remember the “Race for the 5th Generation”, us against Japan. Lisp against Prolog. That was a fad? Maybe in academic circles and the deep bowels of a handful of defense contractors. Meanwhile the rest of us were busily attaching chains of AP Array Processors on the back of VAX machines. Fortran still ruled the day. Niklaus Wirth had just invented Modula-2 to replace Pascal, as bleeding edge compiler technology. Wow, what memories you sparked... - DM

On 2017-04-19 15:14, David McClain wrote:
Ah Yes… I do now remember the “Race for the 5th Generation”, us against Japan. Lisp against Prolog. That was a fad?
I have read Haskell being described as "what M-expressions were supposed to be". Granted, it's not compiled down to S-expressions – but to the comparably simple Core, so the analogy has a bit more substance than the visible sentimentality.
At the same time, in a way, our type system is a kind of Logic Programming language. We state facts about our programs as well as rules for how to derive new facts, and the type checker tries to prove our system to be either inconsistent or incomplete. The syntax is not as simple as Prolog's, but that's partly due to the domain, partly because our type language is a grown one.
In other words: After their feud, Lisp and Prolog must have made up and spent a hot few nights together, probably in a seedy conference room in Portland. Haskell is one of the results. Is it a perfect blend of its parents' strengths? Ah, hell no. But no child ever is.
Which leads me to the question: Why the hell isn't there a sitcom about programming languages yet?
Cheers,
MarLinn
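The "type system as a kind of Logic Programming language" point can be made concrete with a small sketch (mine, not from the original message; the class is made up for illustration): instance declarations read like Horn clauses, and instance resolution is a little proof search.

  -- Hypothetical class, for illustration only.
  class Shippable a where
    ship :: a -> String

  -- Fact:  Shippable Int.
  instance Shippable Int where
    ship n = "int " ++ show n

  -- Rule:  Shippable [a]  :-  Shippable a.
  instance Shippable a => Shippable [a] where
    ship xs = unwords (map ship xs)

  -- Asking for the value below makes the checker "prove" Shippable [[Int]]
  -- by using the rule twice and the fact once.
  demo :: String
  demo = ship ([[1, 2], [3]] :: [[Int]])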

On 19.04.2017 at 14:56, David McClain wrote:
On Apr 19, 2017, at 05:42, Joachim Durchholz wrote:
Yeah, I was assuming that macros are global. It's very old Lisp experience from the days when Common Lisp and Scheme were new fads, when Lisp machines were still a thing, and had to be rebooted on a daily basis to keep them running.
Was Lisp ever a fad? I’m shocked to hear that. Seriously!
Common Lisp and Scheme were, in their first years. They both managed to transition from "fad" to "production-ready" AFAICT, probably after a stabilization period (I dimly recall having read versioned standardization documents). Lisp as such... probably. Whenever people with enough raw horsepower to use Lisp in practice met people who didn't, because then it would be a fad from the latter ones. Which pretty much meant everybody who didn't have access to abundant corporate-sponsored or state-sponsored hardware.

On 20/04/2017, at 1:52 AM, Joachim Durchholz wrote:
Common Lisp and Scheme were, in their first years. They both managed to transition from "fad" to "production-ready" AFAICT, probably after a stabilization period (I dimly recall having read versioned standardization documents).
Lisp as such... probably. Whenever people with enough raw horsepower to use Lisp in practice met people who didn't, because then it would be a fad from the latter ones. Which pretty much meant everybody who didn't have access to abundant corporate-sponsored or state-sponsored hardware.
Truly strange. You really did not need "horsepower" to use Lisp in practice. There were several Lisp implementations for CP/M, and CP/M machines were hardly "raw horsepower" by any stretch of the imagination. OK, a University lab of cheap 8086 PCs does technically count as state-sponsored, but hardly "raw horsepower". I suppose the price of TI PC-Scheme (USD95) would have put it out of reach of hobbyists /sarc. (I would definitely call PC-Scheme "production-ready" for its day.)
The problem with Lisp was never garbage collection, or speed, or availability. It was *unfamiliarity*. As soon as things like TCL and Python became available, they were adopted gladly by many programmers, despite being less efficient than typical Lisps. To this day, Python forcibly discourages recursive programming by enforcing a small stack depth, even on machines with abundant memory, thus ensuring that Python keeps its reassuringly familiar imperative style. (Yep, ran into this the hard way, trying to implement a dynamic programming algorithm in Python.)
I'm very sad to see FUD about Lisp surviving this long. Much the same FUD was spread about Prolog, despite there being decent Prolog implementations for cheap 16-bit machines. It is *bitterly* ironic to see Java adopted by people critical of Lisp. It just goes to show that the old hardware saying, "you can make a brick fly if you strap on a big enough jet engine" is true of programming languages.

On 20.04.2017 at 02:51, Richard A. O'Keefe wrote:
On 20/04/2017, at 1:52 AM, Joachim Durchholz wrote:
Common Lisp and Scheme were, in their first years. They both managed to transition from "fad" to "production-ready" AFAICT, probably after a stabilization period (I dimly recall having read versioned standardization documents).
Lisp as such... probably. Whenever people with enough raw horsepower to use Lisp in practice met people who didn't, because then it would be a fad from the latter ones. Which pretty much meant everybody who didn't have access to abundant corporate-sponsored or state-sponsored hardware.
Truly strange. You really did not need "horsepower" to use Lisp in practice. There were several Lisp implementations for CP/M, and CP/M machines were hardly "raw horsepower" by any stretch of the imagination.
If that had been universally true, nobody would have bothered to think about, let alone buy those Lisp machines.
OK, a University lab of cheap 8086 PCs does technically count as state-sponsored, but hardly "raw horsepower". I suppose the price of TI PC-Scheme (USD95) would have put it out of reach of hobbyists /sarc.
Nah, it simply wasn't available unless you knew it existed. (I certainly didn't know.)
The problem with Lisp was never garbage collection, or speed, or availability. It was *unfamiliarity*.
Nope. Actually, the first actual project I ever built was a (primitive) Lisp interpreter. I would have liked to work with it, but I lost access to the machine (that was in the pre-PC era), plus I didn't know enough to devise a way to integrate it into the surrounding operating system.
As soon as things like TCL and Python became available, they were adopted gladly by many programmers, despite being less efficient than typical Lisps.
I don't know what motivated anybody to ever use TCL. It's an abomination. Python doesn't count, it wasn't even thought about in the IBM PC days.
I'm very sad to see FUD about Lisp surviving this long.
Enough of this kind of slur. I'm not adhering to "FUD", I am reporting personal experience. I have been refraining from responding in kind, because some unfriendly things could be said about your mindset.
Much the same FUD was spread about Prolog, despite there being decent Prolog implementations for cheap 16-bit machines.
Sorry, Prolog simply didn't work. As soon as you tried to do anything that was prone to combinatoric explosion, you had to resort to cuts and essentially revert to a uselessly complicated imperative programming model just to keep that under control. I am well aware that Prolog can be an excellent tool, but only for tasks that it fits. Unfortunately, most paid jobs are too simple to actually need Prolog, and of the complicated ones, only a small fraction can even be adequately captured using Horn clauses.
I'm not talking from theory here. I tried to implement a simple distance calculation in a hexagonal lattice with it. I realized that to make it reversible I'd need to have it do the inverse of finding the distance of two known points, namely find the list of distance-N points from a given centerpoint, which I liked because I needed that anyway; however, it turned out to be O(N^6) because it would explore ALL the paths inside the radius-N circle, and that was useless. Making it avoid that involved cuts, which would make the ruleset non-reversible, and hence non-composable under standard Prolog semantics (you'd be unable to add rules that would e.g. exempt blockaded points from the pathfinding). In other words, the resulting code would be an imperative program, shoehorned into a Prolog ruleset, giving me the worst of both worlds.
Now, I'm fully aware that I might be able to find better solutions now, some three decades of experience later. But I don't think that Prolog's approach will scale to teams of average programmers, and I doubt you can find enough programmers to staff any project worth doing, unless your tasks are a natural fit to Prolog. (Of course there are people who do non-Horn-clause-ish things with Prolog, and successfully so. If all you have is a hammer, then every problem starts looking like a nail.)
It is *bitterly* ironic to see Java adopted by people critical of Lisp.
I was forced to. I'd really love to live off Haskell programming. I'd even be happy doing Scala - it's far from perfect, but it integrates well with Java libraries, and that's a huge bonus that people without experience in a JVM language seem to be unable to appreciate; I suspect the Blub paradox at work here.
It just goes to show that the old hardware saying, "you can make a brick fly if you strap on a big enough jet engine" is true of programming languages.
Exactly.

On 20/04/2017, at 8:21 PM, Joachim Durchholz wrote:
Truly strange. You really did not need "horsepower" to use Lisp in practice. There were several Lisp implementations for CP/M, and CP/M machines were hardly "raw horsepower" by any stretch of the imagination.
If that had been universally true, nobody would have bothered to think about, let alone buy those Lisp machines.
You seem to be confusing the horsepower required to run *LISP* with the horsepower required to run *an IDE*. I never used an MIT-family Lisp machine, but I spent a couple of years using Xerox Lisp machines. Those machines had 16-bit (yes, 16-bit) memory buses (so to fetch one tag+pointer "word" of memory took three memory cycles: check page table, load half word, load other half), and into 4 MB of memory fitted
- memory management (Lisp)
- device drivers (Lisp)
- network stack (Lisp)
- (networked) file system (Lisp)
- bit mapped graphics (Lisp; later models added colour)
- structure editor (Lisp)
- WYSIWYG text editor with 16-bit character set (Lisp)
- compiler (Lisp)
- automatic error correction DWIM (Lisp)
- interactive cross reference MasterScope (Lisp)
- dynamic profiling better than anything I've used since (Lisp)
and -- this is what I was working on --
- 1 MB dedicated to Prolog, including Prolog compiler (Prolog)
In terms of memory capacity and "horsepower", the D-machines were roughly comparable to an M68010-based Sun workstation. But the memory and "horsepower" weren't needed to run *Lisp*, they were needed to run all the *other* stuff. Interlisp-D was a very nice system to use. It was forked from Interlisp, which ran on machines with less than a megabyte of memory, but lacked the IDE, network stack, &c. I never had the pleasure of using an MIT-family Lisp machine, although I worked with two people who had, so I can say it was the same issue there. They did *everything* in an *integrated* way in Lisp. Complaining that those machines needed "horsepower" is like complaining that Eclipse needs serious horsepower (my word, doesn't it just) and blaming the C code you are editing in it. Heck, there was even a Scheme system for the Apple ][.
OK, a University lab of cheap 8086 PCs does technically count as state-sponsored, but hardly "raw horsepower". I suppose the price of TI PC-Scheme (USD95) would have put it out of reach of hobbyists /sarc.
Nah, it simply wasn't available unless you knew it existed. (I certainly didn't know.)
No, that's not what "available" means. It was offered for sale. It was advertised. There was a published book about how to use it. It really was not at all hard to find if you looked.
I'm very sad to see FUD about Lisp surviving this long.
Enough of this kind of slur.
When you say that Lisp was not used because it required serious "horsepower", you say what is not true. Lisp did not require serious "horsepower". Some of the *applications* that people wanted to write in Lisp required serious "horsepower", but they would have required such no matter what the programming language, and the relevance of Lisp was simply that it made such applications *thinkable* in a way that most other languages did not. As for Python: if we define "PC days" to include 1988, when there were about a dozen Lisps running on 286s, then Python began in late 1989 and 1.0 was released in 1994, so you're right. But I never said Python was eagerly adopted in the "PC days".
I have been refraining from responding in kind, because some unfriendly things could be said about your mindset.
My mindset is that SML is better than Lisp and Haskell is better than SML and there are languages pushing even further than Haskell in interesting directions. I stopped counting how many programming languages I had tried when I reached 200.
Things have changed a lot. We now have compilers like SMLtoJs http://www.smlserver.org/smltojs/ so that we can compile statically typed mostly-functional code into dynamically typed strange code so that we can run it in a browser. All the old complaints about Lisp, and we end up with JavaScript. But that's OK because it has curly braces. (:-(
We have massive amounts of software in the world these days, and as far as I can tell, all of it is more or less broken. Read about the formal verification of seL4. It is downright scary how many bugs there can be in such a small chunk of code. And it's interesting that the people who did it now believe that formal verification can be *cheaper* than testing, for a given target error rate.
The great thing about Lisp in the old days was that it could massively reduce the amount of code you had to write, thus reducing the number of errors you made. The great thing about Haskell is that it pushes that even further, and the type system helps to catch errors early. And QuickCheck! What a feature!
There are other things, like PVS and Coq, which push verification at compile time further than Haskell (I never did manage to use the PVS verifier, but there's a great book about Coq), and if it turns out that bog standard programmers can't cope with that, then hiring superstandard programmers able to produce closer-to-verified code may be well worth the price. I often struggle to follow the things the brilliant people on this list do with Haskell's type system, but I very much appreciate their aim of producing correct code.
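A tiny illustration of the QuickCheck point (my sketch, not from the message; it needs the QuickCheck package): you state a property, and the library hunts for a counterexample.

  import Test.QuickCheck
  import Data.List (sort)

  -- a property that holds: sorting is idempotent
  prop_sortIdempotent :: [Int] -> Bool
  prop_sortIdempotent xs = sort (sort xs) == sort xs

  -- a deliberately false property, so QuickCheck finds a counterexample
  prop_reverseIsIdentity :: [Int] -> Bool
  prop_reverseIsIdentity xs = reverse xs == xs

  main :: IO ()
  main = do
    quickCheck prop_sortIdempotent     -- passes 100 random tests
    quickCheck prop_reverseIsIdentity  -- fails with a small counterexample such as [0,1]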

I'm not going to continue this; you keep insisting that I'm somehow either an idiot or acting maliciously, and neither is a role that I like to be in. I'm slightly pissed off, and that's not a good basis for continuing a constructive exchange, and there's too much risk that we end up really mad at each other if we continue this.
Regards,
Jo

On 18.04.2017 at 18:57, David McClain wrote:
I think Lisp-style macros are too powerful. You can get excellent results as long as everybody involved knows all the relevant macros and their semantics perfectly well, and for one-man teams like yourself this can work very well. If you need to work with average programmers, this fails. In such an environment, you need to be able to read other peoples' code, and if they can define macros, you don't really know anymore what's happening.
Too powerful for whom?
Too powerful for teams that include sub-average programmers (by definition, you always have a sub-average one on the team). The issue is that you don't have enough brilliant developers to staff all the projects in the world, so you need technology that gives consistent results, much more than technology that boosts productivity but is inconsistent.
Again, I’m lucky to be master of my own domain.
Exactly.
Problem with that is that empowering the programmer makes it harder to be 100% sure what a given piece of code does. There might be macros involved, there might be metaprogramming involved, or some kind of multiple dispatch with nonobvious defaulting rules, or a gazillion of other things that you have to be aware of. It's not a problem if you know your system from the inside out, but it doesn't scale well to tasks that need to be done by a team.
Isn’t that always the case, and always will be the case?
Only in dynamic languages that do not give you any guarantees about what the code is *not* going to do. That's exactly what makes Haskell interesting: There cannot be any side effects unless the code has IO in the type signature, which is so awkward that people tend to push it into a small corner of the program. The net effect is that it is far easier to reason about what a program can and cannot do. In the Java/C++/whatever world, unit testing achieves the same, though due to the imperative nature and generally less well-delimited semantics the effect is far less pervasive. Java stands out a bit since the language semantics has always been nailed down pretty narrowly right from the start, which is a first in the computing world and has had pervasive effects throughout the language culture - Java libraries tend to interoperate much better than anything that I read about the C and C++ world.
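A minimal sketch of the "IO in the type signature" point (mine, not from the message):

  -- Pure: the type promises no side effects, and the compiler holds it to that.
  double :: Int -> Int
  double x = 2 * x

  -- Effectful: IO in the type is the only place side effects can live.
  greet :: String -> IO ()
  greet name = putStrLn ("hello, " ++ name)

  -- This would be rejected: the result of an IO action cannot be used as a pure value.
  -- broken :: Int -> Int
  -- broken x = x + length (lines getContents)  -- type error: IO String vs String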
Nobody can be certain how some library or application was written. You might have some pretty strong hints and hunches, but how can you be sure?
Seems like the best bet is to treat everything as untrusted black boxes.
Whenever I see one of the Java libraries do something unexpected in my code, I tend to read the sources. It's usually easy because Java is so dumbed-down and verbose that you can pick up enough hints. The more advanced libraries (Spring) take longer to understand well enough, and some (Hibernate) are so badly coded that it's pretty non-fun, but I tend to get results, mostly because Java is so strict that I can see what's happening. I get into trouble whenever annotations or AOP stuff is involved, or when too much activity is hidden away in data structures that are executed at a later time. Data types help me pick up the trail, though; I'd fail miserably in a language without static typing because all the hints and hunches are gone, and I have to go by cultural hints and hunches which tend to diverge between projects, so I'd be lost. (Actually I'm pretty sure; I tried to get "into" Smalltalk once and it never worked for me.)
Erlang probably offers the best security model against dependencies.
Erlang is good for handling failures that show up. It does nothing for you if the problem is code that's subtly wrong.
If you look at any major element of code in production, it was probably written in a hurry under near unreasonable demands because hardware engineering overran their budgets and took some of yours, and the programmer may or may not have well understood his tools and language, or even if he did, he might have been daydreaming of being on the beach with his girlfriend instead of in this meat locker with the rest of you guys…
Actually it's not the hardware engineers nowadays (we all work on stock hardware anyway). It's the bean counters who are trying to cut down on the budget.
My experience has been that hardware companies produce the most gawd-awful code you can imagine. Software shops vary… take M$ for instance - the bane of my existence. Looks like code gets pushed out the door and they rely on the audience to test it.
They used to be that way. Security threats endangered their business model, so they had to change; nowadays, they aren't that different from other large companies anymore.
Apple looks to me like a room full of really bright young minds, with no adult supervision.
Actually Apple is the most strictly controlled workplace in the industry, as far as I know.
I guess if I could offer any advice to other younger programmers, it would be to push yourself to become a polyglot to expand your mental horizons of what is possible, and a historian to understand what went before you and why, so that you don’t keep repeating the same stupid mistakes we did.
In the modern world, the bean counters found enough tools to control the development process that you don't have much of a choice. If you want to have choice, you now need to become a manager. Which means no programming at all. It takes a lot out of the profession that was attractive in the past decades, but on the other hand it reduces project unpredictability and increases job security. (It's still not perfect, but I doubt it ever can be.)
I could not be a production programmer. I’m definitely too much of a cowboy and a lone wolf. Teams are anathema to me. I need deep periods of isolation to think. People who manage to perform in production coding environments must have very different personalities from me.
That has been clear for a long while :-)

On 19/04/2017, at 7:08 PM, Joachim Durchholz wrote:
Too powerful for whom?
Too powerful for teams that include sub-average programmers (by definition, you always have a sub-average one on the team).
The issue is that you don't have enough brilliant developers to staff all the projects in the world, so you need technology that gives consistent results, much more than technology that boosts productivity but is inconsistent.
To keep this relevant to Haskell, Template Haskell is roughly analogous to Lisp macros. It works on syntactically well-formed fragments. It is normally hygienic, like Scheme, but permits quite complex transformations, like Lisp. And macros in Lisp and Scheme have traditionally been used to develop embedded DSLs, which is something Haskell programmers sometimes do. Template Haskell can of course do amazing things that Lisp and Scheme cannot.

There are at least two claims here:
- using (not developing, just using) macros requires "brilliant developers"
- using (not developing, just using) macros "gives inconsistent results".

I am willing to concede that *writing* Lisp macros requires particularly good programmers (because the 'implementation language' of Lisp macros has the full power of Lisp) and that *writing* Scheme macros requires particularly good programmers (because the 'implementation language' of Scheme macros *isn't* Scheme but tree-to-tree rewrite rules). In particular, I am not yet competent to write any Scheme macro whose implementation requires recursion, so since I'm a brilliant programmer, people who can do that must be geniuses, eh?

It does not follow that *using* macros is any harder than using any special form pre-defined by the language, and as a matter of fact, it isn't. In Scheme, logical conjunction (AND) and disjunction (OR) are in principle defined as macros. That doesn't make them hard to use, error-prone to use, or in any way inconsistent.

(And yes, I *have* struggled with macros in UCI Lisp and InterLisp. Modern Lisp is not your grandfather's Lisp any more than modern Scheme is your grandfather's Scheme or modern Fortran your grandfather's Fortran. And yep, there's a Fortran preprocessor, fpp, modelled on cpp but adapted to Fortran. Not standard, but freely available. And it hasn't caused the death of Fortran yet, any more than CPP has killed Haskell.)
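To make the Template Haskell analogy concrete, here is a minimal sketch (the classic compile-time "power" splice; the module and names are my own, not from the thread):

{-# LANGUAGE TemplateHaskell #-}
module Power where

import Language.Haskell.TH (Exp, Q)

-- power n builds, at compile time, the unrolled expression
-- \x -> x * x * ... * x (n factors), much as a Lisp macro would.
power :: Int -> Q Exp
power 0 = [| \_ -> (1 :: Int) |]
power n = [| \x -> x * $(power (n - 1)) x |]

In another module, $(power 3) 2 evaluates to 8; the generated expression is itself type checked after expansion, which is where the hygiene comes from.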

Hello David, Thus quoth David McClain at 15:39 on Tue, Apr 18 2017:
1. Machine Fault failures on successfully compiled code. That’s the big nasty one. Provably correct code can’t fail, right? But it sometimes does.
A very small remark (mostly directed at novice Haskell users reading this): code which typechecks is _not_ the same thing as _provably correct_ code. Code which typechecks is code in which function applications concord with the types. Provably correct code is code which was (or can be) (automatically) proved to correspond to a certain exact specification. Haskell or OCaml type systems are usually not used for writing such exact specifications, because they are not expressive enough. Thus, in general, showing that Haskell or OCaml code typechecks is weaker than proving that it has a certain property (like not failing). (However, passing Haskell or OCaml type checks is still stronger than passing Java type checks.)
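A tiny illustration of the difference (my own example): both definitions below typecheck, but only the second one's type rules out the failure case of the obvious specification "return the first element".

-- Typechecks, yet is not "provably correct": it crashes on [].
firstElem :: [a] -> a
firstElem (x:_) = x
firstElem []    = error "firstElem: empty list"

-- Moving part of the specification into the type removes that failure mode.
firstElemSafe :: [a] -> Maybe a
firstElemSafe (x:_) = Just x
firstElemSafe []    = Nothing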
And I think you can trace that to the need to interface with the real world.
I have the same gut feeling. -- Sergiu

Am 18.04.2017 um 23:33 schrieb Sergiu Ivanov:
Haskell or OCaml type systems are usually not used for writing such exact specifications, because they are not expressive enough.
I have read statements that Haskell code still tends to "just work as expected and intended", and seen that attributed to the type system. Seemed to be related to Haskell's expressivity, with the type system pretty much preventing the typical errors that people make. So the sentiment was that it's not the same, but close enough to be the same in practice. Though I doubt that that will hold up once you get more proficient in Haskell and start tackling really complicated things - but "simple stuff in Haskell" tends to get you farther than anybody would expect, so it's still a net win. Just impressions, though; I never had an opportunity to delve that deeply into Haskell.
Thus, in general, showing that Haskell or OCaml code typechecks is weaker than proving that it has a certain property (like not failing).
That code type-checks in Haskell still gives you a whole lot of guarantees, such as "it doesn't have IO in the type signature, so there cannot be any side effect deep inside". (Ironically, people instantly started investigating ways to work around that. Still, the sort-of globals you can introduce via the State monad are still better-controlled than the globals in any other language.) (Aside note: I don't know enough about OCaml to talk about its properties, but I do believe that they are much weaker than in Haskell.)
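A small sketch of that guarantee (names are mine): the first signature promises every caller that no I/O can happen anywhere inside; the second announces the possibility of side effects right in the type.

-- Pure: the type alone guarantees there is no I/O hidden inside.
sumSquares :: [Int] -> Int
sumSquares = sum . map (^ 2)

-- Effectful: IO in the type makes the side effect visible to all callers.
sumSquaresLogged :: [Int] -> IO Int
sumSquaresLogged xs = do
  putStrLn ("summing squares of " ++ show (length xs) ++ " numbers")
  pure (sumSquares xs)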
(However, passing Haskell or OCaml type checks is still stronger than passing Java type checks.)
Yep.
And I think you can trace that to the need to interface with the real world.
I have the same gut feeling.
I think interfacing with the outside world (still not "real" but just bits inside the operating system and on disk) is one major source of unreliability, but not the only one, and not necessarily the primary one. YMMV.

Thus quoth Joachim Durchholz at 07:51 on Wed, Apr 19 2017:
Am 18.04.2017 um 23:33 schrieb Sergiu Ivanov:
Haskell or OCaml type systems are usually not used for writing such exact specifications, because they are not expressive enough.
I have read statements that Haskell code still tends to "just work as expected and intended", and seen that attributed to the type system. [...] So the sentiment was that it's not the same, but close enough to be the same in practice.
Sure, this "close enough" bit is one of the things which make Haskell very attractive. Now, it's not "close enough" for critical systems (satellites, nuclear reactors, etc.), from what I know.
Seemed to be related to Haskell's expressivity, with the type system pretty much preventing the typical errors that people make.
I tend to see Haskell's type system as very restrictive and only allowing behaviour which composes well. It's also rich enough to allow specification of quite a wide variety of behaviour. However, there are a couple big "backdoors", like the IO monad. Typechecking gives zero guarantees for functions of type IO (), for example.
Though I doubt that that will hold up once you get more proficient in Haskell and start tackling really complicated things
To me, it really depends on the kind of complicated things. If you manage to explain the thing at the type level, then typechecking may give you pretty strong guarantees. I'm thinking of parsing and concurrency as positive examples and exception handling as a somewhat negative example.
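A minimal sketch of "explaining the thing at the type level" (my own example): if the Validated constructor is kept abstract, every function that demands a Validated value gets its invariant enforced by the typechecker.

-- In a real module, export Validated without its constructor, so the only
-- way to obtain one is via 'validate'.
newtype Validated = Validated String

validate :: String -> Maybe Validated
validate s
  | not (null s) && all (`elem` ['a' .. 'z']) s = Just (Validated s)
  | otherwise                                   = Nothing

-- Callers cannot pass an unvalidated String here by accident.
greet :: Validated -> String
greet (Validated name) = "hello, " ++ name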
but "simple stuff in Haskell" tends to get you farther than anybody would expect, so it's still a net win.
Absolutely.
Thus, in general, showing that Haskell or OCaml code typechecks is weaker than proving that it has a certain property (like not failing).
That code type-checks in Haskell still gives you a whole lot of guarantees, such as "it doesn't have IO in the type signature, so there cannot be any side effect deep inside".
Right, of course.
(Ironically, people instantly started investigating ways to work around that. Still, the sort-of globals you can introduce via the State monad are still better-controlled than the globals in any other language.)
In fact, I believe having pure functions does not so much target removing state as it does making the state _explicit_. Thus, the State monad is not a work-around to purity, it's a way of purely describing state. And then, if the State monad is too lax for one's purposes, one can use Applicative or even define more specific typeclasses.
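A small sketch of that "explicit state" reading (assuming the mtl package): the Int state is visible right in the type, and it exists only inside runState.

import Control.Monad.State (State, get, put, runState)

-- A counter: the threaded Int is explicit in the type 'State Int'.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  pure n

-- runState (tick >> tick >> tick) 0  ==  (2, 3)
-- The "global" counter never escapes this expression.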
(Aside note: I don't know enough about OCaml to talk about its properties, but I do believe that they are much weaker than in Haskell.)
That's what I tend to believe as well from looking over people's shoulders.
And I think you can trace that to the need to interface with the real world.
I have the same gut feeling.
I think interfacing with the outside world (still not "real" but just bits inside the operating system and on disk) is one major source of unreliability, but not the only one, and not necessarily the primary one. YMMV.
Yes, that unreliability comes with crossing the frontier between theoretical and applied computer sciences (that's an imprecise metaphor :-) ) -- Sergiu

Am 19.04.2017 um 15:13 schrieb Sergiu Ivanov:
Seemed to be related to Haskell's expressivity, with the type system pretty much preventing the typical errors that people make.
I tend to see Haskell's type system as very restrictive and only allowing behaviour which composes well.
From a library writer's and library user's perspective, that would actually be a Good Thing.
However, there are a couple big "backdoors", like the IO monad.
Well, it's so awkward that people don't *want* to use it.
Typechecking gives zero guarantees for functions of type IO (), for example.
That doesn't match what I hear from elsewhere. I don't know what the cause of that difference is, though.
Though I doubt that that will hold up once you get more proficient in Haskell and start tackling really complicated things
To me, it really depends on the kind of complicated things. If you manage to explain the thing at the type level, then typechecking may give you pretty strong guarantees. I'm thinking of parsing and concurrency as positive examples and exception handling as a somewhat negative example.
That matches what I heard :-)
(Ironically, people instantly started investigating ways to work around that. Still, the sort-of globals you can introduce via the State monad are still better-controlled than the globals in any other language.)
In fact, I believe having pure functions does not so much target removing state as it does making the state _explicit_.
Except that State tends to make state implicit again, apart from the fact that there *is* state (which might actually be enough; I don't have enough insight for any judgemental statements on the issue).
Thus, the State monad is not a work-around to purity, it's a way of purely describing state.
True.
And I think you can trace that to the need to interface with the real world.
I have the same gut feeling.
I think interfacing with the outside world (still not "real" but just bits inside the operating system and on disk) is one major source of unreliability, but not the only one, and not necessarily the primary one. YMMV.
Yes, that unreliability comes with crossing the frontier between theoretical and applied computer sciences (that's an imprecise metaphor :-) )
:-D

On 2017-04-19 15:58, Joachim Durchholz wrote:
Am 19.04.2017 um 15:13 schrieb Sergiu Ivanov:
However, there are a couple big "backdoors", like the IO monad.
Well, it's so awkward that people don't *want* to use it.
That's partly because IO isn't very well defined, and is therefore used as a catch-all. It's like the "other mail" folder, a stuff-drawer in the kitchen, or that one room in the basement: random things tend to accumulate and rot in there because they don't have a better place.

One of the worst offenders from my perspective is (non-)determinism. Randomness can be seen as non-deterministic, but it doesn't need IO. At the same time, many things that are in IO are practically no less deterministic than pure functions with _|_ – once you have a good enough model. In other words, these things could be separated. It needs work – to model the real world, to go through all the stuff that's in that black box, to convince people not to use IO if they don't need to, to change the systems to even allow "alternative IOs" – so it's nothing that'll change soon.

But my point is: the type system can only help you if there are precise definitions and rules, and IO is the "here be dragons" on our maps of the computational world – for one, because the open work I mentioned is one of the ways to explore that boundary between theoretical and applied computer sciences. IO is the elephant in the room of type-supported hopes of correctness. But then I agree with Joachim: it's also right in the center of the room where everyone can see, acknowledge, and avoid it.

Cheers, MarLinn
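To make the randomness point concrete, a minimal sketch (assuming the standard random package; the names are mine): the die roll is an ordinary pure function of the generator, so the non-determinism lives in how you obtain the seed, not in IO.

import System.Random (StdGen, mkStdGen, randomR)

-- A die roll without IO: feed in a generator, get a value and the next generator.
rollDie :: StdGen -> (Int, StdGen)
rollDie = randomR (1, 6)

-- rollDie (mkStdGen 2017) always yields the same pair:
-- deterministic once the generator is fixed.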

Thus quoth MarLinn at 15:10 on Wed, Apr 19 2017:
On 2017-04-19 15:58, Joachim Durchholz wrote:
Am 19.04.2017 um 15:13 schrieb Sergiu Ivanov:
However, there are a couple big "backdoors", like the IO monad.
Well, it's so awkward that people don't *want* to use it.
That's partly because IO isn't very well defined, and therefore used as a catch-all. It's like the "other mail" folder, a stuff-drawer in the kitchen or that one room in the basement: random things tend to accumulate and rot in there because they don't have a better place.
Exactly.
One of the worst offenders from my perspective is (non-)determinism.
Oh, right, interesting!
Randomness can be seen as non-deterministic, but it doesn't need IO. At the same time, many things that are in IO are practically not less deterministic than pure functions with _|_ – once you have a good enough model. In other words, these things could be separated. It needs work, to model the real world, to go through all the stuff that's in that black box, to convince people to not use IO if they don't need to, to change the systems to even allow "alternative IOs"… so it's nothing that'll change soon.
I tend to see monads like STM (software transactional memory) and ST (state threads) as a kind of "add structure to IO" effort. Also, there's Eff (algebraic effects), but I still haven't had the time to read the seminal work beyond the introduction: http://www.eff-lang.org/
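For what it's worth, a minimal sketch of that "structured IO" flavour using STM (assuming the stm package; the names are mine): the STM type promises that the only effects inside are transactional reads and writes, so the runtime may safely retry the whole block.

import Control.Concurrent.STM
  (STM, TVar, atomically, check, modifyTVar', newTVarIO, readTVar, writeTVar)

transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- blocks/retries until funds suffice
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  atomically (readTVar b) >>= print  -- prints 30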
IO is the elephant in the room of type-supported hopes of correctness.
I like this image :-) -- Sergiu

Thus quoth Joachim Durchholz at 13:58 on Wed, Apr 19 2017:
Am 19.04.2017 um 15:13 schrieb Sergiu Ivanov:
However, there are a couple big "backdoors", like the IO monad.
Well, it's so awkward that people don't *want* to use it.
Interesting point of view; I've never thought of the relative awkwardness of IO as an educational measure.
Typechecking gives zero guarantees for functions of type IO (), for example.
That doesn't match what I hear from elsewhere. I don't know what the cause of that difference is, though.
I'm probably shouting my "zero" too loudly. I wanted to say that, when the typechecker sees a function of type IO (), it only knows that it _may_ have side effects, but it cannot verify that these effects compose correctly with the effects coming from other functions in the IO monad. Of course, the return type IO () still gives a lot of information (e.g., it says the function is only useful for its side effects) and the type system will ensure that any computation actually using this function must be declared as (potentially) having side effects. Yet, you have no guarantees that such effects are composed correctly.
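A tiny illustration of that "composed correctly" gap (my own example): both orderings below typecheck identically, even though only one matches the intent.

import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  ref <- newIORef (Nothing :: Maybe Int)
  let initialise = writeIORef ref (Just 42)   -- :: IO ()
      report     = readIORef ref >>= print    -- :: IO ()
  -- The types say nothing about ordering, so this compiles happily:
  report       -- prints Nothing: used before initialisation
  initialise
  report       -- prints Just 42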
In fact, I believe having pure functions does not so much target removing state as it does making the state _explicit_.
Except that State tends to make state implicit again, apart from the fact that there *is* state (which might actually be enough; I don't have enough insight for any judgemental statements on the issue).
Well, a typical definition of State is parameterised in the type of the state, so you know what it is. Sure, a typical definition of State does not let you know whether and how the state was modified, but if one wants that information, one can "just" define a custom "StateLog" monad, for example, or even use Applicative to statically "record" the effects on the state (I hear that's one use of Applicative). -- Sergiu
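A rough sketch of such a "StateLog" (my own, assuming the mtl package): stacking State on top of Writer records every modification alongside the final state.

import Control.Monad.State (StateT, get, put, runStateT)
import Control.Monad.Trans (lift)
import Control.Monad.Writer (Writer, runWriter, tell)

type StateLog s = StateT s (Writer [String])

-- Every change to the state leaves a trace in the log.
setLogged :: Show s => s -> StateLog s ()
setLogged new = do
  old <- get
  lift (tell ["state: " ++ show old ++ " -> " ++ show new])
  put new

-- runWriter (runStateT (setLogged 1 >> setLogged 2) 0)
--   == (((), 2), ["state: 0 -> 1", "state: 1 -> 2"])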

Am 19.04.2017 um 17:56 schrieb Sergiu Ivanov:
I wanted to say that, when the typechecker sees a function of type IO (), it only knows that it _may_ have side effects, but it cannot verify that these effects compose correctly with the effects coming from other functions in the IO monad.
Ah right. Even Haskell's type system is not able to classify and characterize side effects properly. I don't think that that's a defect in the type system though; I've had my own experiences with trying to do that (based on design by contract, if anybody is interested), and I found it's pretty much uncontrollable unless you go for temporal logic, which is so hard to reason about that I do not think it's going to be useful to most programmers. (Temporal logic exists in several variants, so maybe I was just looking at the wrong one.)
In fact, I believe having pure functions does not so much target removing state as it does making the state _explicit_.
Except State tends to make state implicit again, except for the fact that there *is* state (which might be actually enough, I don't have enough insight for any judgemental statements on the issue).
Well, a typical definition of State is parameterised in the type of the state, so you know what it is.
Sure, a typical definition of State does not let you know whether and how the state was modified, but if one wants that information, one can "just" define a custom "StateLog" monad, for example, or even use Applicative to statically "record" the effects on the state (I hear that's one use of Applicative).
Thanks, that's going to guide me in future attempts at understanding all the strange and wondrous (and sometimes awesome) things you can find in the Haskell ecology. Regards, Jo

Thus quoth Joachim Durchholz at 18:39 on Wed, Apr 19 2017:
Am 19.04.2017 um 17:56 schrieb Sergiu Ivanov:
I wanted to say that, when the typechecker sees a function of type IO (), it only knows that it _may_ have side effects, but it cannot verify that these effects compose correctly with the effects coming from other functions in the IO monad.
Ah right. Even Haskell's type system is not able to classify and characterize side effects properly.
The language Eff seems to be built around an algebraic characterisation (according to the author, I haven't yet had the time to check) and handling of effects: http://www.eff-lang.org/ According to the page, the proposed formalism allows doing elegantly what in Haskell is typically done with monad transformers. I actually started thinking about porting the algebraic handling of effects to Haskell, which may be possible since its type system seems to be expressive enough, but my time is quite limited these days.
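For the record, a very rough sketch of the core idea (my own, not Eff's actual design): an effect signature as a functor, programs as a free monad over it, and a handler that gives the effect its meaning. Libraries such as extensible-effects build on this style.

{-# LANGUAGE DeriveFunctor #-}

-- Programs over an effect signature f, with no interpretation chosen yet.
data Free f a = Pure a | Impure (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)   = Pure (g a)
  fmap g (Impure m) = Impure (fmap (fmap g) m)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g   <*> x = fmap g x
  Impure m <*> x = Impure (fmap (<*> x) m)

instance Functor f => Monad (Free f) where
  Pure a   >>= k = k a
  Impure m >>= k = Impure (fmap (>>= k) m)

-- One effect: emitting log messages.
data Log next = Log String next deriving Functor

logMsg :: String -> Free Log ()
logMsg s = Impure (Log s (Pure ()))

-- A handler interprets the effect; this one is entirely pure.
runLog :: Free Log a -> (a, [String])
runLog (Pure a)           = (a, [])
runLog (Impure (Log s k)) = let (a, rest) = runLog k in (a, s : rest)

-- runLog (logMsg "a" >> logMsg "b")  ==  ((), ["a", "b"])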
I don't think that that's a defect in the type system though; I've had my own experiences with trying to do that (based on design by contract, if anybody is interested), and I found it's pretty much uncontrollable unless you go for temporal logic, which is so hard to reason about that I do not think it's going to be useful to most programmers. (Temporal logic exists in several variants, so maybe I was just looking at the wrong one.)
Temporal logics kind of require guys with PhDs in your team :-) From what I know, the usual approach is designing (or adapting) a temporal logic for your specific task, since general properties are often undecidable. Your experience of handling effects seems interesting to me. -- Sergiu
participants (12)
- Atrudyjane
- Bardur Arantsson
- David McClain
- Donn Cave
- Jack Hill
- Jeffrey Brown
- Joachim Durchholz
- John Wiegley
- MarLinn
- MigMit
- Richard A. O'Keefe
- Sergiu Ivanov